Lab 05 - Docker Swarm

Objectives

  • Get familiar with Play with Docker
  • Understand what a Docker Swarm cluster is and how it works
  • Learn how to deploy service stacks using Docker Swarm
  • Get familiar with the basics of fault tolerance

Introduction

In this lab, we will make the transition from Docker Compose to Docker Swarm, the service orchestrator provided by Docker. Its role is to manage Docker services on one or more machines in a network (i.e., a cluster) of physical and/or virtual machines. Unlike Docker Compose, which runs containers on a single host, Docker Swarm runs services across multiple hosts. Like Compose, Docker Swarm uses YAML configuration files.

Docker Swarm architecture

In the image below (taken from the official documentation), you can see the architecture of a Docker Swarm cluster.

Host machines that are part of a Swarm are called nodes and can have two roles:

  • manager - administrative and functional role; maintains cluster consistency, launches services, exposes network endpoints
  • worker - functional role; runs services.

Of all the manager nodes, only one is the leader; it is responsible for creating tasks and for maintaining the cluster's internal log, which the other managers replicate. The tasks are then distributed to the other nodes.

At any given time there is exactly one leader; if it fails, the remaining managers elect a new one.
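
Apart from reading the MANAGER STATUS column of docker node ls (shown below), a quick way to check each node's role is the --format option of the Docker CLI; a small sketch:

$ docker node ls --format "{{ .Hostname }}: {{ .ManagerStatus }}"   # managers show "Leader" or "Reachable"; workers show an empty value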

Creating a Docker Swarm

Once we have a cluster of machines running Docker, we can initialise a Docker Swarm by running the following command on the node that will become the leader (the --advertise-addr option is required when the node has several network interfaces, in order to specify which one is advertised to the other members of the swarm):

$ docker swarm init --advertise-addr 192.168.99.100
 
Swarm initialized: current node (qtyx0t5z275wp46wibcznx8g5) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-4hd41nyin8kn1wx4bscnnt3e98xtlvyxw578qwxijw65jp1a3q-32rl6525xriofd5xmv0c1k5vj 192.168.99.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

As you can see, the output above contains the command we can use to add other nodes to the cluster as workers, as well as instructions for obtaining the corresponding command for adding more managers.
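
If you lose this output, the join commands (including their tokens) can be printed again at any time on a manager node:

$ docker swarm join-token worker    # prints the full join command for worker nodes
$ docker swarm join-token manager   # prints the full join command for manager nodes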

We can verify that the swarm was created successfully by running the command below on the leader (in this example, the cluster has two nodes called node1 and node2, the former being the leader and the latter a worker):

$ docker node ls
 
ID                            HOSTNAME     STATUS       AVAILABILITY      MANAGER STATUS
qtyx0t5z275wp46wibcznx8g5 *   node1        Ready        Active            Leader
0xbb9al1kuvn0jcapxiqni29z     node2        Ready        Active      

Docker Swarm services and service stacks

When we talk about deploying an application in Docker Swarm, we move from the notion of container to the notion of service. A Docker service is a collection of (one or more) tasks, and each task is a container. Therefore, a service consists of one or more identical containers. The service monitors the lifecycle of its containers, always trying to keep them in the state declared in the configuration. In other words, a service is a set of containers with orchestration.
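
As a quick illustration of the relationship between services and tasks, the minimal sketch below (the pinger name, the public alpine image and the ping command are just an example) creates a service with three tasks and then lists them:

$ docker service create --name pinger --replicas 3 alpine ping docker.com   # a service made of 3 tasks
$ docker service ps pinger                                                  # each task is a container, possibly on a different node
$ docker service rm pinger                                                  # clean up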

Furthermore, a service stack represents several such services grouped in the same namespace. We can view a service stack as a multi-service Docker application. The easiest way to define a service stack is through a Docker Compose file, as we saw in lab 4. The behaviour of services in a stack is similar to that of Docker Compose containers, except that the naming policy is different.

Any entity created in a stack (service, volume, network, etc.) will be prefixed by STACK-NAME_.
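
For example (the names are hypothetical), deploying a Compose file that defines a service called web and a volume called data under the stack name mystack produces entities named mystack_web and mystack_data:

$ docker stack deploy -c my_stack.yml mystack
$ docker service ls    # the service appears as mystack_web
$ docker volume ls     # the volume appears as mystack_data
$ docker network ls    # the default network appears as mystack_default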

Docker Swarm adds a new set of options to the Compose YAML file, which are specified under the deploy attribute of a service. Below, you can see a snippet of a Docker Compose file showing some of these new options:

[...]
services:
  web:
    image: myimage
    deploy:
      replicas: 4
      resources:
        limits:
          cpus: "0.2"
          memory: 50M
      restart_policy:
        condition: on-failure
[...]

In the YAML fragment above, we run a service called web with four replicas. Thus, there will be four identical containers running the myimage image, each of which can serve requests to the web service, depending on the load. Also, each instance is limited to 20% of one CPU's time and 50 MB of RAM. Last but not least, a container of the web service is restarted as soon as it fails, the goal being to keep four replicas of the service running on the network at all times.
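
For reference, a complete (but minimal) stack file built around the fragment above could look as follows; myimage is a placeholder for an image available in a registry, and the published port 8080 is just an example:

version: "3.8"

services:
  web:
    image: myimage
    ports:
      - "8080:80"
    deploy:
      replicas: 4
      resources:
        limits:
          cpus: "0.2"
          memory: 50M
      restart_policy:
        condition: on-failure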

Swarm networks

Unlike classic Docker and Docker Compose, networks created in Swarm no longer use the bridge driver, but the overlay driver. An overlay network is a network that spans all nodes in a swarm. For the same reason, a published port is reserved on every node of the swarm, so published ports must be unique: two different services connected to the same swarm cannot both expose port 3000.
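
When a network is declared explicitly in a stack file, it is created with the overlay driver; a minimal sketch (the network name internal is an assumption):

[...]
services:
  web:
    image: myimage
    networks:
      - internal

networks:
  internal:
    driver: overlay
[...]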

Docker Swarm balances load at network level.

A service that has been published on a particular port will have that port reserved on every node, no matter which node its containers actually run on. The diagram below (taken from the official Docker documentation) shows a situation where we have a service called my-web published on port 8080 in a three-node cluster. It can be seen that, if we connect to port 8080 on the IP address of any node in the cluster, we are routed to one of the containers of the my-web service, regardless of the node it is actually running on.
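
The routing mesh can also be tested from the command line; a small sketch assuming the public nginx image (the service name my-web and port 8080 match the figure):

$ docker service create --name my-web --replicas 2 --publish 8080:80 nginx
$ curl http://<ANY_NODE_IP>:8080   # answered from any node, even one not running a my-web container
$ docker service rm my-web         # clean up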

Differences between Docker Swarm and Docker Compose

There are several key differences between Docker Swarm and Compose in YAML configuration files:

  • because Swarm runs services across multiple nodes, the build keyword cannot be used; services must be created from images that already exist in a registry (see the sketch after this list)
  • service stacks do not support .env files (unlike Docker Compose)
  • Docker Compose runs single-host containers, while Docker Swarm orchestrates multi-host services.
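
Regarding the first point, the usual workflow is to build and push the image once, from a single machine, and then reference it in the stack file. A minimal sketch, assuming a Docker Hub account (<USER> and myimage are placeholders):

$ docker build -t <USER>/myimage:1.0 .   # build the image locally
$ docker push <USER>/myimage:1.0         # push it to Docker Hub (requires docker login)

The stack file then refers to the service's image as <USER>/myimage:1.0 instead of using a build section.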

Starting a service stack in Docker Swarm

Once the Docker swarm has been created and initialised, the command to deploy a service stack is as follows (where the configuration is in the file my_stack.yml and the stack's name will be lab5):

$ docker stack deploy -c my_stack.yml lab5

Once a service stack has been started, we can check its status with the following command:

$ docker stack ps lab5                                                                                                      
 
ID             NAME                   IMAGE                               NODE      DESIRED STATE    CURRENT STATE           ERROR               PORTS
cuktma92gm62   lab5_adminer.1         adminer:latest                      node2     Running          Running 9 minutes ago                       
njak2qzaobtt   lab5_db.1              postgres:latest                     node1     Running          Running 8 minutes ago                       
m811buil7e63   lab5_backend.1         mobylab/backend:ia1                 node1     Running          Running 9 minutes ago                       
jnfw37e34kz3   lab5_backend.2         mobylab/backend:ia1                 node2     Running          Running 9 minutes ago                       
lkmy60wpy0gv   lab5_visualizer.1      dockersamples/visualizer:latest     node1     Running          Running 9 minutes ago
num87yijgxrg   lab5_backend.3         mobylab/backend:ia1                 node2     Running          Running 9 minutes ago

We can see the list of running service stacks using the following command:

$ docker stack ls                                                                                                           
 
NAME      SERVICES      ORCHESTRATOR
lab5      4             Swarm

Furthermore, we can list all the services from all running stacks using the following command:

$ docker service ls                                                                                                         
 
ID               NAME                 MODE           REPLICAS               IMAGE                                  PORTS
dekzzyais8g7     lab5_adminer         replicated     1/1                    adminer:latest                         *:8080->8080/tcp
ns9mxet1rkx5     lab5_db              replicated     1/1                    postgres:latest                             
dh3sv3q74fy6     lab5_backend         replicated     3/3 (max 2 per node)   mobylab/backend:ia1                    *:5555->80/tcp
ru0rd7g2ypu8     lab5_visualizer      replicated     1/1                    dockersamples/visualizer:latest        *:8081->8080/tcp
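
The “(max 2 per node)” annotation next to lab5_backend comes from a placement limit set in the stack file; a sketch of how such a limit is expressed (requires Compose file format 3.8 or newer; the exact contents of the lab's file may differ):

[...]
services:
  backend:
    image: mobylab/backend:ia1
    deploy:
      replicas: 3
      placement:
        max_replicas_per_node: 2
[...]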

Useful commands

Service interaction

These commands can only be run on manager nodes.

$ docker service create --name <SERVICE_NAME> <DOCKER_IMAGE>   # creates a service based on an image
$ docker service ls                                            # lists all running services
$ docker service inspect <SERVICE_NAME>                        # shows information about a service
$ docker service logs -f <SERVICE_NAME>                        # shows a service's logs
$ docker service ps <SERVICE_NAME>                             # shows a service's tasks and their statuses
$ docker service update --replicas <N> <SERVICE_NAME>          # updates a service by replicating its containers N times
$ docker service rm <SERVICE_NAME>                             # removes a service
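
A quick end-to-end run of these commands, assuming the public nginx image:

$ docker service create --name web --replicas 2 nginx   # create a service with 2 tasks
$ docker service ps web                                 # see where the tasks were scheduled
$ docker service update --replicas 3 web                # scale the service to 3 tasks
$ docker service logs -f web                            # follow the logs (Ctrl+C to stop)
$ docker service rm web                                 # remove the service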

Stack interaction

These commands can only be run on manager nodes.

$ docker stack deploy -c <COMPOSE_FILE> <STACK_NAME> # creates a service stack based on a Compose file
$ docker stack rm <STACK_NAME>                       # stops a service stack
$ docker stack ps <STACK_NAME>                       # lists the tasks of a running service stack
$ docker stack ls                                    # lists all running service stacks

Cluster interaction

The docker node commands below can only be run on manager nodes; docker swarm init is run on the first node of a new cluster, while docker swarm join is run on the node that is joining an existing cluster.

$ docker node ls                             # lists the nodes in the cluster
$ docker node promote <NODE_NAME>            # promotes a worker node to a manager
$ docker node demote <NODE_NAME>             # demotes a manager node to a worker
$ docker swarm init [--advertise-addr <IP>]  # creates a Docker Swarm cluster
$ docker swarm join --token <TOKEN> <IP>:2377  # joins an existing Docker Swarm cluster (run on the joining node)
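
Two related commands are useful when tearing a cluster down: docker swarm leave is run on the node that leaves, while docker node rm is run on a manager:

$ docker swarm leave [--force]   # makes the current node leave the swarm (--force is needed on the last manager)
$ docker node rm <NODE_NAME>     # removes a node (typically one that is already Down) from the cluster's node list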

Tasks

00. [00p] Getting started

For the exercises in this lab, we will be using Play with Docker. In order to start a session, you need to log in with a Docker Hub account and then press the green “Start” button.

We will also be using the files found in this archive, which contain the definition of the service stack shown in the figure below. The services in the stack are as follows:

  • an API service called backend, with 3 replicas, running on port 5555
  • a Postgres database service called db
  • a database administration service called adminer, running on port 8080
  • a cluster visualisation service called visualizer, running on port 8081.

01. [30p] Starting a multi-node Docker Swarm cluster

For this task, we will start a Docker Swarm cluster composed of three nodes (one manager and two workers) on Play with Docker. In order to add a new node to the cluster, you need to press the “Add new instance” button, as shown in the image below.

After adding one or more nodes, you will see a list with each one on the left-hand side, together with their IP and hostname (e.g., in the image below, node1 has IP 192.168.0.28). By clicking on each node, you obtain a window containing a shell into that node.

Subtasks

  1. log into Play with Docker
  2. add three nodes
  3. on node1, initialise a Docker Swarm cluster with the docker swarm init --advertise-addr <IP> command (you can use the IP address shown in the node description on the left-hand side of the Play with Docker page)
  4. add node2 and node3 to the cluster by running the command shown in the output of the cluster creation command (it looks like docker swarm join --token […])
  5. verify that the cluster was created successfully by running the docker node ls command on node1 (you should see all three nodes in the output)

What to upload

  • a text file with the commands given on each node
  • a screenshot of the output of the docker node ls command on node1

02. [40p] Deploying and testing a service stack in a multi-node Docker Swarm cluster

For this exercise, we will deploy a four-service stack, which is defined in the docker-compose-swarm.yml file from the lab archive.

Subtasks

  1. on node1, download the lab archive using the following command: wget https://ocw.cs.pub.ro/courses/_media/ii/labs/s2/05/tasks/lab5.zip
  2. on node1, unzip the lab archive with the following command: unzip lab5.zip
  3. on node1, deploy the service stack with the following command: docker stack deploy -c docker-compose-swarm.yml lab5
  4. check that all services have started with the docker stack ps lab5 command (you should see all services in the “Running” state; if not, run the command until all services have started)
  5. check how your services have been scheduled by accessing the visualizer service; this is done by clicking on the blue 8081 button on the Play with Docker page; are all the services there?

What to upload

  • a text file with the commands given on each node
  • a screenshot of the output of the docker stack ps lab5 command
  • a screenshot of the visualizer page

03. [30p] Handling failures in a multi-node Docker Swarm cluster

In this exercise, we will see what happens when one of the worker nodes crashes.

Subtasks

  1. stop node2 by clicking on it in the left-hand side of the Play with Docker page, and then clicking the orange Delete button
  2. quickly go back to the visualizer page and see what happens: what do you see? What happens to node2? What happens to the services that were running on node2? (You can also follow the rescheduling from node1's shell, as shown below.)
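
The rescheduling can also be followed from node1's shell; a small sketch:

$ docker node ls         # node2 should eventually be reported with STATUS "Down"
$ docker stack ps lab5   # the tasks that were on node2 are rescheduled on the remaining nodes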

What to upload

  • a screenshot of the visualizer page after having stopped node2

04. [00p] Stopping a multi-node Docker Swarm cluster

Stop the Play with Docker cluster by clicking the orange Close session button.

05. [10p] Feedback

Please take a minute to fill in the feedback form for this lab.
