Tasks
In this lab, we will make the transition from Docker Compose to Docker Swarm, the service orchestrator provided by Docker. Its role is to manage Docker services on one or more machines in a network (i.e., a cluster) of physical and/or virtual machines. Unlike Docker Compose, which runs containers on a single host, Docker Swarm runs services across multiple hosts. Like Compose, Docker Swarm uses YAML configuration files.
In the image below (taken from the official documentation), you can see the architecture of a Docker Swarm cluster.
Host machines that are part of a Swarm are called nodes and can have two roles:
Among the manager nodes, exactly one is the leader, which is responsible for creating tasks and logging the cluster state. The tasks are then distributed to the other nodes.
Once we have a cluster of machines running Docker, we can initialise a Docker Swarm. Thus, we can run the following command on the node that will be the leader (the option --advertise-addr is required when the node has several network interfaces and it must be specified which of them is advertised):
$ docker swarm init --advertise-addr 192.168.99.100
Swarm initialized: current node (qtyx0t5z275wp46wibcznx8g5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4hd41nyin8kn1wx4bscnnt3e98xtlvyxw578qwxijw65jp1a3q-32rl6525xriofd5xmv0c1k5vj 192.168.99.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
We can verify that the swarm was created successfully by running the command below on the leader (where we have two nodes called node1 and node2, the former being the leader, and the latter being the worker):
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
qtyx0t5z275wp46wibcznx8g5 *   node1      Ready    Active         Leader
0xbb9al1kuvn0jcapxiqni29z     node2      Ready    Active
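If the join command printed by docker swarm init is lost, it can be printed again at any time on a manager node:

```shell
# Print the full join command (including the token) for a new worker;
# must be run on a manager node.
docker swarm join-token worker

# The equivalent command for joining a new node as a manager:
docker swarm join-token manager
```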
When we talk about deploying an application in Docker Swarm, we move from the notion of container to the notion of service. A Docker service is a collection of (one or more) tasks, and a task is a container. Therefore, a service consists of one or more identical containers. The service monitors the lifecycle of its containers, always trying to maintain the desired state declared in the configuration. In other words, a service is a set of containers with orchestration.
Furthermore, a service stack represents several such services grouped in the same namespace. We can view a service stack as a multi-service Docker application. The easiest way to define a service stack is through a Docker Compose file, as we saw in lab 4. The behaviour of services in a stack is similar to that of Docker Compose containers, except that the naming policy is different.
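To make the naming policy concrete, consider a minimal hypothetical stack file (the service names and images below are purely illustrative). If it is deployed with docker stack deploy -c stack.yml lab5, the resulting services are named lab5_web and lab5_db, and their tasks lab5_web.1, lab5_db.1, and so on, with the stack name used as the namespace prefix.

```yaml
version: "3.8"

services:
  web:                       # deployed as service "lab5_web"
    image: nginx:alpine      # illustrative image
    ports:
      - "8080:80"
  db:                        # deployed as service "lab5_db"
    image: postgres:latest   # illustrative image
    environment:
      POSTGRES_PASSWORD: example
```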
Docker Swarm has access to a new collection of options in the Compose YAML file, which will be specified in the deploy attribute of a service. Below, you can see a snippet of a Docker Compose file showing some of these new options:
[...]
services:
  web:
    image: myimage
    deploy:
      replicas: 4
      resources:
        limits:
          cpus: "0.2"
          memory: 50M
      restart_policy:
        condition: on-failure
[...]
In the YAML file fragment above, we run a service called web, which has four copies. Thus, there will be four different containers running the myimage image, each of which can respond to requests for the web service, depending on the load. Also, each instance is limited to 20% CPU (on all cores) and 50 MB of RAM. Last but not least, a container of the web service is restarted as soon as it encounters an error (the ultimate goal is to have 4 copies of the container on the network at any time).
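The deploy attribute accepts further options beyond those shown above. The sketch below (service name and image are illustrative) adds placement constraints, which restrict the nodes on which tasks may be scheduled, and an update policy, which controls how a rolling update of the service proceeds:

```yaml
[...]
services:
  web:
    image: myimage
    deploy:
      replicas: 4
      placement:
        constraints:
          - node.role == worker   # schedule tasks only on worker nodes
      update_config:
        parallelism: 2            # update two replicas at a time
        delay: 10s                # wait 10 seconds between update batches
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
[...]
```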
Unlike classic Docker and Docker Compose, networks created in Swarm no longer use the bridge driver, but the overlay driver. An overlay network is a network that spans all the nodes in a swarm. For this reason, published ports are unique per overlay network. Therefore, two services connected to the same overlay network cannot both publish the same port (e.g., 3000).
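In a stack file, an overlay network can be declared explicitly at the top level and attached to the services that should communicate over it. The network and service names below are illustrative:

```yaml
[...]
networks:
  mynet:
    driver: overlay   # spans all nodes in the swarm

services:
  web:
    image: myimage
    networks:
      - mynet
  api:
    image: myapi
    networks:
      - mynet
[...]
```

If no network is declared, docker stack deploy creates a default overlay network for the stack.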
A service that has been deployed to a particular port will always have that port reserved, no matter which node its container or containers actually run on. The diagram below (taken from the official Docker documentation) shows a situation where we have a service called my-web published on port 8080 in a three-node cluster. It can be seen that if we connect to port 8080 on any node's IP address in the cluster, we are redirected to a container of the service published on that port, regardless of the node it is running on.
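This behaviour (the swarm's routing mesh) can be checked from a shell. Assuming the three nodes have the illustrative IP addresses below, every node answers on the published port, even a node that is not running a my-web task:

```shell
# All three requests reach a my-web container, regardless of which
# node actually hosts it (IP addresses are illustrative).
curl http://192.168.99.100:8080
curl http://192.168.99.101:8080
curl http://192.168.99.102:8080
```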
There are several key differences between Docker Swarm and Compose in YAML configuration files:
Once the Docker swarm has been created and initialised, the command to deploy a service stack is as follows (where the configuration is in the file my_stack.yml and the stack's name will be lab5):
$ docker stack deploy -c my_stack.yml lab5
Once a service stack has been started, we can check its status with the following command:
$ docker stack ps lab5
ID             NAME                IMAGE                             NODE    DESIRED STATE   CURRENT STATE           ERROR   PORTS
cuktma92gm62   lab5_adminer.1      adminer:latest                    node2   Running         Running 9 minutes ago
njak2qzaobtt   lab5_db.1           postgres:latest                   node1   Running         Running 8 minutes ago
m811buil7e63   lab5_backend.1      mobylab/backend:ia1               node1   Running         Running 9 minutes ago
jnfw37e34kz3   lab5_backend.2      mobylab/backend:ia1               node2   Running         Running 9 minutes ago
lkmy60wpy0gv   lab5_visualizer.1   dockersamples/visualizer:latest   node1   Running         Running 9 minutes ago
num87yijgxrg   lab5_backend.3      mobylab/backend:ia1               node2   Running         Running 9 minutes ago
We can see the list of running service stacks using the following command:
$ docker stack ls
NAME   SERVICES   ORCHESTRATOR
lab5   4          Swarm
Furthermore, we can list all the services from all running stacks using the following command:
$ docker service ls
ID             NAME              MODE         REPLICAS               IMAGE                             PORTS
dekzzyais8g7   lab5_adminer      replicated   1/1                    adminer:latest                    *:8080->8080/tcp
ns9mxet1rkx5   lab5_db           replicated   1/1                    postgres:latest
dh3sv3q74fy6   lab5_backend      replicated   3/3 (max 2 per node)   mobylab/backend:ia1               *:5555->80/tcp
ru0rd7g2ypu8   lab5_visualizer   replicated   1/1                    dockersamples/visualizer:latest   *:8081->8080/tcp
$ docker service create --name <SERVICE_NAME> <DOCKER_IMAGE>   # creates a service based on an image
$ docker service ls                                            # lists all running services
$ docker service inspect <SERVICE_NAME>                        # shows information about a service
$ docker service logs -f <SERVICE_NAME>                        # follows a service's logs
$ docker service ps <SERVICE_NAME>                             # shows a service's tasks and their statuses
$ docker service update --replicas <N> <SERVICE_NAME>          # scales a service to N replicas
$ docker service rm <SERVICE_NAME>                             # removes a service
$ docker stack deploy -c <COMPOSE_FILE> <STACK_NAME>   # creates a service stack based on a Compose file
$ docker stack rm <STACK_NAME>                         # stops a service stack
$ docker stack ps <STACK_NAME>                         # lists the tasks of a running service stack
$ docker stack ls                                      # lists all running service stacks
$ docker node ls                              # lists the nodes in the cluster
$ docker node promote <NODE_NAME>             # promotes a worker node to a manager
$ docker node demote <NODE_NAME>              # demotes a manager node to a worker
$ docker swarm init [--advertise-addr <IP>]   # creates a Docker Swarm cluster
$ docker swarm join --token <TOKEN> <IP>      # joins a Docker Swarm cluster
For the exercises in this lab, we will be using Play with Docker. In order to start a session, you need to log in with a Docker Hub account and then press the green “Start” button.
We will also be using the files found in this archive, which contain the definition of the service stack shown in the figure below. The services in the stack are as follows:
For this task, we will start a Docker Swarm cluster composed of three nodes (one manager and two workers) on Play with Docker. In order to add a new node to the cluster, you need to press the “Add new instance” button, as shown in the image below.
After adding one or more nodes, you will see a list with each one on the left-hand side, together with their IP and hostname (e.g., in the image below, node1 has IP 192.168.0.28). By clicking on each node, you obtain a window containing a shell into that node.
For this exercise, we will deploy a four-service stack, which is defined in the docker-compose-swarm.yml file from the lab archive.
In this exercise, we will see what happens when one of the worker nodes crashes.
Stop the Play with Docker cluster by clicking the orange Close session button.
Please take a minute to fill in the feedback form for this lab.