For this lab, we will use the student user and two KVM virtual machines (vm-1, vm-2).
Start by downloading the laboratory archive:
student@scgc:~$ cd scgc/
student@scgc:~/scgc$ wget --user=<username> --ask-password http://repository.grid.pub.ro/cs/scgc/laboratoare/lab-04.zip
student@scgc:~/scgc$ unzip lab-04.zip
After unzipping the archive, two KVM images (in qcow2 format) should be present, as well as two scripts (lab04-start and lab04-stop).
Start the virtual machines using the lab04-start script:
student@scgc:~/scgc$ sh lab04-start
To connect to a virtual machine, use ssh:
student@scgc:~/scgc$ ssh student@10.0.0.X
The password for both the student and root users is student.
Do not worry if ssh is not working the first time. Just check that the VMs are running by issuing a ping:
$ ping 10.0.0.1
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. The support is integrated in the mainline kernel starting with version 2.6.29. That means any Linux kernel, starting with that version, if properly configured can support LXC containers.
Some key aspects related to LXC:
* the host system on which the containers run is called the hardware node;
* all containers share the hardware node's kernel, so only Linux environments can run inside an LXC container.
Start by connecting to vm-1:
student@scgc:~/scgc$ ssh student@10.0.0.1
You can use sudo su to switch user to root after connecting.
Using the lxc-checkconfig command, check if the hardware node's kernel supports LXC:
root@vm-1:~# lxc-checkconfig
Also, verify that the cgroup filesystem is mounted:
root@vm-1:~# mount
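Since the full mount output is long, you can filter it; for example (assuming grep is available on vm-1):
root@vm-1:~# mount | grep cgroup    # should list one or more cgroup/cgroup2 mounts under /sys/fs/cgroup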
The lxc-create tool is used to facilitate the creation of an LXC container. Upon issuing an lxc-create command, a configuration file is generated for the new container and its root filesystem is created using the chosen template script.
The command syntax is the following:
lxc-create -n NAME -t TEMPLATE
Some of the values for TEMPLATE are alpine, ubuntu, busybox, sshd, debian, or fedora; it specifies the template script that will be employed when creating the rootfs. All the available template scripts can be found in /usr/share/lxc/templates/.
Create a new container named ct1 using the alpine template.
You can inspect the configuration file for this new container: /var/lib/lxc/ct1/config.
root@vm-1:~# lxc-create -n ct1 -t alpine
root@vm-1:~# lxc-ls
root@vm-1:~# cat /var/lib/lxc/ct1/config
To see that the ct1 container has been created, run on vm-1:
root@vm-1:~# lxc-ls
ct1
Start the container by issuing the following command:
root@vm-1:~# lxc-start -n ct1 -F

   OpenRC 0.42.1.d76962aa23 is starting up Linux 4.19.0-8-amd64 (x86_64) [LXC]

 * /proc is already mounted
 * /run/openrc: creating directory
 * /run/lock: creating directory
 * /run/lock: correcting owner
 * Caching service dependencies ... [ ok ]
 * Creating user login records ... [ ok ]
 * Wiping /tmp directory ... [ ok ]
 * Starting busybox syslog ... [ ok ]
 * Starting busybox crond ... [ ok ]
 * Starting networking ... *   eth0 ...ip: ioctl 0x8913 failed: No such device [ !! ]
 * ERROR: networking failed to start

Welcome to Alpine Linux 3.11
Kernel 4.19.0-8-amd64 on an x86_64 (/dev/console)

ct1 login:
Using the -F, --foreground option, the container is started in the foreground, so we can observe that our terminal is attached to it.
We can now log in to the container as the user root (the password is not set).
In order to stop the container and exit its terminal, we can issue halt from within it, just as on any other Linux machine:
ct1:~# halt
ct1:~#  * Stopping busybox crond ... [ ok ]
 * Stopping busybox syslog ... [ ok ]
The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL to all processes
Requesting system halt
root@vm-1:~#
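The container can also be stopped from the hardware node, without attaching to it, using lxc-stop (a sketch, assuming ct1 is running):
root@vm-1:~# lxc-stop -n ct1    # asks the container's init to shut down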
By adding the -d, --daemon argument to the lxc-start command, the container can be started in the background:
root@vm-1:~# lxc-start -n ct1 -d
Verify the container state using lxc-info:
root@vm-1:~# lxc-info -n ct1
Name:           ct1
State:          RUNNING
PID:            1977
CPU use:        0.32 seconds
BlkIO use:      4.00 KiB
Memory use:     1.42 MiB
KMem use:       1.01 MiB
Finally, we can connect to the container's console using lxc-console:
root@vm-1:~# lxc-console -n ct1
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Welcome to Alpine Linux 3.11
Kernel 4.19.0-8-amd64 on an x86_64 (/dev/tty1)

ct1 login:
We can disconnect from the container's console, without stopping it, using the CTRL+A, Q key combination.
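As an alternative to going through the console login, a shell can be opened directly inside a running container with lxc-attach (a sketch; lxc-attach is part of the standard LXC tools):
root@vm-1:~# lxc-attach -n ct1      # spawns a root shell inside ct1, no password prompt
ct1:~# exit                         # leaves the container without stopping it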
Using the lxc-info command, find out the PID of ct1, which corresponds to the container's init process. Any other process running in the container will be a child process of this init.
root@vm-1:~# lxc-info -n ct1
Name:           ct1
State:          RUNNING
PID:            1977
CPU use:        0.33 seconds
BlkIO use:      4.00 KiB
Memory use:     1.66 MiB
KMem use:       1.06 MiB
From one terminal, connect to the ct1 console:
root@vm-1:~# lxc-console -n ct1
From another terminal on vm-1, print the process hierarchy starting with the container's PID:
# Install pstree
root@vm-1:~# apt update
root@vm-1:~# apt install psmisc
root@vm-1:~# pstree --ascii -s -c -p 1977
systemd(1)---lxc-start(1974)---init(1977)-+-crond(2250)
                                          |-getty(2285)
                                          |-getty(2286)
                                          |-getty(2287)
                                          |-getty(2288)
                                          |-login(2284)---ash(2297)
                                          `-syslogd(2222)
As shown above, the init process of ct1 is a child process of lxc-start.
Now, print the container processes from within ct1:
ct1:~# ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 /sbin/init
  246 root      0:00 /sbin/syslogd -t
  274 root      0:00 /usr/sbin/crond -c /etc/crontabs
  307 root      0:00 /bin/login -- root
  308 root      0:00 /sbin/getty 38400 tty2
  309 root      0:00 /sbin/getty 38400 tty3
  310 root      0:00 /sbin/getty 38400 tty4
  311 root      0:00 /sbin/getty 38400 console
  312 root      0:00 -ash
  314 root      0:00 ps -ef
Even though the same processes can be observed from inside and outside of the container, their PIDs are different. This is because the kernel places each container in its own PID namespace and translates process IDs between the container's process space and the host's.
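This can be verified by comparing PID namespaces via /proc; a minimal sketch, assuming the container's init has PID 1977 on the host as above:
root@vm-1:~# readlink /proc/1/ns/pid       # PID namespace of the host's init
root@vm-1:~# readlink /proc/1977/ns/pid    # PID namespace of ct1's init - a different pid:[...] value
ct1:~# readlink /proc/1/ns/pid             # run inside ct1: same value as the previous command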
LXC containers have their filesystem stored on the host machine under the following path: /var/lib/lxc/<container-name>/rootfs/.
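For example, a file created from the host under this path is immediately visible inside the container (a sketch, assuming the ct1 container created earlier; the file name is arbitrary):
root@vm-1:~# echo "hello from the host" > /var/lib/lxc/ct1/rootfs/root/shared.txt
ct1:~# cat /root/shared.txt        # run from within ct1
hello from the host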
Using this facility, files can be shared easily between containers and the host:
* from within the container, create a file in the /root directory;
* from vm-1, access the previously created file and edit it.
By default, our LXC containers are not connected to the exterior.
In order to customize the network configuration, we will use the default bridge docker0 to serve both containers.
To set the custom network up in the container, change the configuration file for ct1 so that it matches the following listing:
# virtual ethernet - level 2 virtualization
lxc.net.0.type = veth
# bring the interface up at container creation
lxc.net.0.flags = up
# connect the container to the bridge docker0 from the host
lxc.net.0.link = docker0
# interface name as seen in the container
lxc.net.0.name = eth0
# interface name as seen in the host system
lxc.net.0.veth.pair = lxc-veth-ct1
Create a new container called ct2 and make the same changes to its config file, changing only the lxc.net.0.veth.pair attribute to lxc-veth-ct2.
Start both containers in the background and then check the bridge state:
root@vm-1:~# brctl show docker0
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242c37f8020       no              lxc-veth-ct1
                                                        lxc-veth-ct2
Configure the following interfaces using the 172.17.0.0/24 network space (one possible approach is sketched below):
* the eth0 interface from ct1 - 172.17.0.11
* the eth0 interface from ct2 - 172.17.0.12
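A minimal sketch, assuming the addresses are assigned manually from within each container using the ip tool:
ct1:~# ip addr add 172.17.0.11/24 dev eth0
ct2:~# ip addr add 172.17.0.12/24 dev eth0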
Test the connectivity between the host system and the containers:
root@vm-1:~# ping -c 1 172.17.0.11
And also between the containers:
ct1:~# ping -c 1 172.17.0.12
Configure NAT and enable routing on the host system so that from within the containers we have access to the internet:
root@vm-1:~# echo 1 > /proc/sys/net/ipv4/ip_forward
# if the default iptables rules on docker0 do not allow NAT functionality already
root@vm-1:~# iptables -t nat -A POSTROUTING -o ens3 -s 172.17.0.0/24 -j MASQUERADE
Add the default route on both containers and test the internet connectivity:
ct1:~# ip route add default via 172.17.0.1
ct1:~# ping -c 1 www.google.com
If the container is not able to ping www.google.com, but is able to ping 1.1.1.1, a valid DNS resolver was likely not configured.
To fix this, run echo "nameserver 1.1.1.1" > /etc/resolv.conf inside the container to use Cloudflare's DNS resolver.
LXD is a next generation system container manager. It offers a user experience similar to virtual machines, but using Linux containers instead. System containers are designed to run multiple processes and services; for all practical purposes, you can think of OS containers as VMs, where you can run multiple processes, install packages, etc.
LXD is image based: it uses pre-made images available for a wide number of Linux distributions and is built around a very powerful, yet pretty simple, REST API.
Let's start by installing LXD on vm-1 using snap and set up the PATH variable so we can use it easily:
root@vm-1:~# apt install snapd
root@vm-1:~# snap install --channel=2.0/stable lxd
root@vm-1:~# export PATH="$PATH:/snap/bin"
The LXD initialization process can be started using lxd init:
root@vm-1:~# lxd init
You will be prompted to specify details about the storage backend for the LXD containers and also networking options:
root@vm-1:~# lxd init
Do you want to configure the LXD bridge (yes/no) [default=yes]? # press Enter
What should the new bridge be called [default=lxdbr0]? # press Enter
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 12.0.0.1/24
Would you like LXD to NAT IPv4 traffic on your bridge? [default=yes]? # press Enter
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? # press Enter
Do you want to configure a new storage pool (yes/no) [default=yes]? # press Enter
Name of the storage backend to use (dir or zfs) [default=dir]: # press Enter
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=all]: 10.0.0.1
Port to bind LXD to [default=8443]: # press Enter
Trust password for new clients: # enter a password and press Enter
Again: # re-enter the same password and press enter
LXD has been successfully configured.
We have now successfully configured the LXD storage backend and networking. We can verify that lxdbr0 was properly configured with the given subnet:
root@vm-1:~# brctl show lxdbr0
bridge name     bridge id               STP enabled     interfaces
lxdbr0          8000.000000000000       no
root@vm-1:~# ip address show lxdbr0
13: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether c6:fd:e7:04:9c:de brd ff:ff:ff:ff:ff:ff
    inet 12.0.0.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:89a:615d:8d24::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::c4fd:e7ff:fe04:9cde/64 scope link
       valid_lft forever preferred_lft forever
Use lxc list to show the available LXD containers on the host system:
root@vm-1:~# lxc list
Generating a client certificate. This may take a minute...
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
This is the first time the lxc client tool communicates with the lxd daemon, so it lets the user know that it automatically generates a client certificate for secure connections with the back-end. Finally, the command outputs the list of available containers, which is empty at the moment since we have not created any yet.
Note that this lxc tool is part of the lxd package, not the lxc one. It only communicates with the lxd daemon, and will therefore not show any information about the LXC containers created earlier.
LXD uses multiple remote image servers. To list the default remotes, we can use lxc remote:
root@vm-1:~# lxc remote list +-----------------+------------------------------------------+---------------+--------+--------+ | NAME | URL | PROTOCOL | PUBLIC | STATIC | +-----------------+------------------------------------------+---------------+--------+--------+ | images | https://images.linuxcontainers.org | simplestreams | YES | NO | +-----------------+------------------------------------------+---------------+--------+--------+ | local (default) | unix:// | lxd | NO | YES | +-----------------+------------------------------------------+---------------+--------+--------+ | ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | YES | YES | +-----------------+------------------------------------------+---------------+--------+--------+ | ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | YES | YES | +-----------------+------------------------------------------+---------------+--------+-------
LXD comes with three default remotes providing images:
* ubuntu: - stable Ubuntu images
* ubuntu-daily: - daily Ubuntu images
* images: - images for various other Linux distributions
We can list the available images on a specific remote using lxc image list. In the example below, we list all the images from the ubuntu stable remote matching version 20.04:
root@vm-1:~# lxc image list ubuntu: 20.04 +--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+ | ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE | +--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+ | f (5 more) | 647a85725003 | yes | ubuntu 20.04 LTS amd64 (release) (20200504) | x86_64 | 345.73MB | May 4, 2020 at 12:00am (UTC) | +--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+ | f/arm64 (2 more) | 9cb323cab3f4 | yes | ubuntu 20.04 LTS arm64 (release) (20200504) | aarch64 | 318.86MB | May 4, 2020 at 12:00am (UTC) | +--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+ | f/armhf (2 more) | 25b0b3d1edf9 | yes | ubuntu 20.04 LTS armhf (release) (20200504) | armv7l | 301.15MB | May 4, 2020 at 12:00am (UTC) | +--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+ | f/ppc64el (2 more) | 63ff040bb12b | yes | ubuntu 20.04 LTS ppc64el (release) (20200504) | ppc64le | 347.49MB | May 4, 2020 at 12:00am (UTC) | +--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+ | f/s390x (2 more) | d7868570a060 | yes | ubuntu 20.04 LTS s390x (release) (20200504) | s390x | 315.86MB | May 4, 2020 at 12:00am (UTC) | +--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
As we can see, images are available for multiple architectures, including armhf, arm64, powerpc, amd64, etc. Since LXD containers share the kernel with the host system and there is no emulation support in containers, we need to choose the image matching the host architecture, in this case x86_64 (amd64).
Now that we have chosen the container image, let's start a container named lxd-ct:
root@vm-1:~# lxc launch ubuntu:f lxd-ct
The f in ubuntu:f (taken from the ALIAS column) is the shortcut for Focal Ubuntu (version 20.04 is codenamed Focal Fossa), while ubuntu: is the remote that we want to download the image from. Because this is the first time we launch a container using this image, it will take a while to download the rootfs for the container on the host.
Alternatively, a smaller Alpine-based image can be used for the container:
root@vm-1:~# lxc launch images:alpine/3.11 lxd-ct
Running lxc list, we can see that we now have a container running:
root@vm-1:~# lxc list +--------+---------+------------------+----------------------------------------------+------------+-----------+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | +--------+---------+------------------+----------------------------------------------+------------+-----------+ | lxd-ct | RUNNING | 12.0.0.93 (eth0) | fd42:89a:615d:8d24:216:3eff:fea6:92f2 (eth0) | PERSISTENT | 0 | +--------+---------+------------------+----------------------------------------------+------------+-----------
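More details about the running container (state, PID, resource usage, network addresses) can be obtained with lxc info, for example:
root@vm-1:~# lxc info lxd-ct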
Let's connect to lxd-ct as the preconfigured user ubuntu:
root@vm-1:~# lxc exec lxd-ct -- sudo --login --user ubuntu
If you launched the Alpine image instead, there is no preconfigured ubuntu user, so simply open a shell:
root@vm-1:~# lxc exec lxd-ct -- /bin/sh
The first -- in the command specifies that the lxc exec options stop there and that everything that follows is the command to be run in the container. In this case, we want to log in to the system as the ubuntu user.
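The same separator can be used to run a single command in the container without opening a login shell; for instance (the command after -- is just an example):
root@vm-1:~# lxc exec lxd-ct -- cat /etc/os-release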
Now we can check all the processes running in the container:
ubuntu@lxd-ct:~$ ps aux
As we can see from the output of ps, the LXD container runs the systemd init subsystem and not just the bash session we saw in the LXC containers.
To quit the container shell, a simple CTRL+D is enough. As a final step, let's stop our LXD container:
root@vm-1:~# lxc stop lxd-ct
root@vm-1:~# lxc list
+--------+---------+------+------+------------+-----------+
| NAME   | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+--------+---------+------+------+------------+-----------+
| lxd-ct | STOPPED |      |      | PERSISTENT | 0         |
+--------+---------+------+------+------------+-----------+
While system containers are designed to run multiple processes and services, application containers such as Docker are designed to package and run a single service. Docker also uses image servers for hosting already-built images. Check out Docker Hub for both official images and user-uploaded ones.
The KVM machine vm-2 already has Docker installed. Let's start by logging into the KVM machine:
student@scgc:~/scgc$ ssh student@10.0.0.2
Let's check whether any Docker container is running on this host or whether there are any container images on the system:
root@vm-2:~# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
root@vm-2:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
Since there are no containers, let's search the Docker Hub for the alpine image, one of the smallest Linux distributions:
root@vm-2:~# docker search alpine
NAME      DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
alpine    A minimal Docker image based on Alpine Linux…   6418      [OK]
...
The command will output any image that has alpine in its name. The first one is the official Alpine Docker image. Let's use it to start a container interactively:
root@vm-2:~# docker run -it alpine /bin/sh
/ #
/ # ps aux
PID   USER     TIME  COMMAND
    1 root      0:01 /bin/sh
    5 root      0:00 ps aux
/ #
In a fashion similar to LXD containers, Docker will first download the image locally and then start a container in which the only process is the invoked /bin/sh terminal. We can exit the container with CTRL+D.
If we check the container list, we can see the previously created container and its ID (f3b608d7cc4c) and name (vigorous_varahamihira):
root@vm-2:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
f3b608d7cc4c        alpine              "/bin/sh"           5 minutes ago       Exited (0) 11 seconds ago                       vigorous_varahamihira
We can always reuse the same container: start it again and then attach to its terminal, or simply run a command in it. Below you can see some of the possible commands:
root@vm-2:~# docker start vigorous_varahamihira    # start the container with the name vigorous_varahamihira
vigorous_varahamihira
root@vm-2:~# docker exec vigorous_varahamihira cat /etc/os-release    # run a command inside the container
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.11.6
PRETTY_NAME="Alpine Linux v3.11"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
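To get an interactive shell inside the running container again, one option (a sketch, using the container name from above) is docker exec with an allocated pseudo-terminal:
root@vm-2:~# docker exec -it vigorous_varahamihira /bin/sh    # opens a new shell next to the container's main process
/ # exit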
The goal now is to dockerize and run a simple Node.js application that exposes a simple REST API. On the vm-2 node, in the /root/messageApp path, you can find the sources of the application and also a Dockerfile.
A Dockerfile contains all the commands a user could call on the command line to assemble an image. Using the Dockerfile in conjunction with docker build, users can automatically create container images.
Let's understand the syntax of our Dockerfile, /root/messageApp/Dockerfile:
# Use node 4.4.5 LTS
FROM node:4.4.5
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
# Expose API port to the outside
EXPOSE 80
# Launch application
CMD ["npm","start"]
The actions performed by the Dockerfile are the following:
* FROM node:4.4.5 - starts from the node base image
* COPY . /app - copies the sources from the current directory on the vm-2 host (messageApp) to the /app folder in the container
* WORKDIR /app - changes the working directory to /app
* RUN npm install - installs all dependencies
* EXPOSE 80 - exposes port 80 to the host machine
* CMD ["npm","start"] - launches the Node.js application
To build the application image, we would issue a command like the following:
root@vm-2:~/messageApp# docker build -t message-app .
The -t parameter specifies the name of the new image, while the dot at the end of the command specifies where to find the Dockerfile, in our case the current directory.
Alternatively, an already built image can be pulled from Docker Hub:
root@vm-2:~# docker pull ioanaciornei/message-app:4
While the message-app image is building or downloading (depending on what you chose), you can start working on the next step.
Docker can be used in swarm mode to natively manage a cluster of Docker Hosts that can each run multiple containers. You can read more on the possibilities when using swarm on the official documentation.
Create a new swarm following the official documentation, where vm-2 is the manager node and vm-1 is a cluster node (a minimal sketch of the required commands is shown after the listing below). You can check the cluster organization using docker node ls:
root@vm-2:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
t37w45fqp5kdlttc4gz7xibb8     vm-1                Ready               Active                                  18.09.1
j43vq230skobhaokrc6thznt1 *   vm-2                Ready               Active              Leader              18.09.1
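A minimal sketch of the swarm setup, assuming vm-2's address is 10.0.0.2 (the join token is printed by the init command; <token> is only a placeholder here):
root@vm-2:~# docker swarm init --advertise-addr 10.0.0.2
root@vm-1:~# docker swarm join --token <token> 10.0.0.2:2377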
In order to enable connectivity between containers running on both machines, we need to create an attachable overlay network (a multi-host network) called appnet from the manager node, vm-2. You can check the configuration using docker network ls and docker network inspect.
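For example, the network could be created on vm-2 as follows (the name appnet matches the one used in the rest of the lab):
root@vm-2:~# docker network create --driver overlay --attachable appnet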
root@vm-2:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
mi2h6quzc7kt        appnet              overlay             swarm
85865eb2036a        bridge              bridge              local
bff38e0689a8        docker_gwbridge     bridge              local
0f082dff9018        host                host                local
to0vd04fsv8q        ingress             overlay             swarm
e8bce560fd6f        none                null                local
The messageApp application that will be deployed later also needs a connection to a mongodb server. For this, we will start another container on vm-1 that hosts it. We are using the official mongo image and also connecting the new container to the overlay network:
root@vm-1:~# docker run -d --name mongo --net=appnet mongo
root@vm-1:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
849b15ac13d6        mongo               "docker-entrypoint.s…"   21 seconds ago      Up 15 seconds       27017/tcp           mongo
Now that we have a mongodb container up and running on the appnet multi-host network, let's test the connectivity by starting a container on vm-2:
root@vm-2:~# docker run -ti --name box --net=appnet alpine sh
/ # ping mongo
PING mongo (10.0.1.2): 56 data bytes
64 bytes from 10.0.1.2: seq=0 ttl=64 time=1.150 ms
64 bytes from 10.0.1.2: seq=1 ttl=64 time=1.746 ms

# If the name is not registered, you can find the mongo container's IPv4 address using the 'inspect' command.
# We are interested in the "IPv4Address" field under Networks.appnet
root@vm-2:~# docker inspect mongo
...
        "NetworkSettings": {
            ...
            "Networks": {
                "appnet": {
                    "IPAMConfig": {
                        "IPv4Address": "10.0.1.2"
                    },
...
root@vm-2:~#
Finally, start a container using the message-app image:
root@vm-2:~# docker run -d --net=appnet ioanaciornei/message-app:4
It may take a while for the container to start. You can check the logs using the docker logs command:
root@vm-2:~# docker logs <container-id>
You can check that the Node.js application is running and that it has access to the mongo container as follows:
root@vm-2:~# curl http://172.18.0.4:1337/message
root@vm-2:~#
root@vm-2:~# curl -XPOST http://172.18.0.4:1337/message?text=finally-done
{
  "text": "finally-done",
  "createdAt": "2020-05-06T16:25:37.477Z",
  "updatedAt": "2020-05-06T16:25:37.477Z",
  "id": "5eb2e501f9582c1100585129"
}
root@vm-2:~# curl http://172.18.0.4:1337/message
[
  {
    "text": "finally-done",
    "createdAt": "2020-05-06T16:25:37.477Z",
    "updatedAt": "2020-05-06T16:25:37.477Z",
    "id": "5eb2e501f9582c1100585129"
  }
]