For this lab, you will use two KVM virtual machines (vm-1, vm-2).

Download and unpack the lab archive:

student@saisp:~$ wget --user=<username> --ask-password http://repository.grid.pub.ro/cs/scgc/laboratoare/lab-04.zip
student@saisp:~$ unzip lab-04.zip
student@saisp:~$ cd lab-04/

After unpacking, two KVM images (in qcow2 format) should be present, as well as two scripts (lab04-start and lab04-stop).

Start the virtual machines using the lab04-start script:

student@saisp:~/lab-04$ ./lab04-start

Then connect to the first virtual machine:

student@saisp:~/lab-04$ ssh root@10.0.0.1

The password for both the student and root users is student.
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. Support is integrated in the mainline kernel starting with version 2.6.29, which means that any Linux kernel from that version onwards can support LXC containers if properly configured.
Some key aspects related to LXC:
* all containers share the kernel of the host system, which is referred to as the hardware node
* isolation between containers is provided by kernel namespaces and control groups (cgroups)
Start by connecting to vm-1:
[student@saisp ~] $ ssh root@10.0.0.1
Using the lxc-checkconfig command, check if the hardware node's kernel supports LXC:
[root@vm-1 ~] lxc-checkconfig
Also, verify that the cgroup filesystem is mounted:
[root@vm-1 ~] mount
The KVM virtual machine vm-1 already has a container created - ct1:
[root@vm-1 ~] lxc-ls ct1
Start the container by issuing the following command:
[root@vm-1 ~] lxc-start -n ct1 -F
OpenRC 0.24.1.a941ee4a0b is starting up Linux 4.4.0-116-generic (x86_64) [LXC]
 * /proc is already mounted
 * /run/openrc: creating directory
 * /run/lock: creating directory
 * /run/lock: correcting owner
 * Caching service dependencies ... [ ok ]
 * Creating user login records ... [ ok ]
 * Wiping /tmp directory ... [ ok ]
 * Starting busybox syslog ... [ ok ]
 * Starting busybox crond ... [ ok ]
 * Starting networking ...
 *   eth0 ...udhcpc: started, v1.27.2
udhcpc: sending discover
udhcpc: sending select for 10.0.1.38
udhcpc: lease of 10.0.1.38 obtained, lease time 3600
 [ ok ]

Welcome to Alpine Linux 3.7
Kernel 4.4.0-116-generic on an x86_64 (/dev/console)

ct1 login:
Using the -F (--foreground) option, the container is started in the foreground, so we can observe that the terminal is attached to it.
We can now log in to the container as the root user (the password is not set).
In order to stop the container and exit its terminal, we can issue halt
from within it just as on any other Linux machine:
ct1:~# halt
ct1:~#  * Stopping busybox crond ... [ ok ]
 * Stopping busybox syslog ... [ ok ]
The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL to all processes
Requesting system halt
[root@vm-1 ~]
By adding the -d (--daemon) argument to the lxc-start command, the container can be started in the background:
[root@vm-1 ~] lxc-start -n ct1 -d
Verify the container state using lxc-info:
[root@vm-1 ~] lxc-info -n ct1
Name:           ct1
State:          RUNNING
PID:            2101
IP:             10.0.1.38
CPU use:        6.73 seconds
BlkIO use:      8.00 KiB
Memory use:     500.00 KiB
KMem use:       0 bytes
Link:           veth7NF55U
 TX bytes:      1.30 KiB
 RX bytes:      1.48 KiB
 Total bytes:   2.78 KiB
Finally, we can connect to the container's console using lxc-console:
[root@vm-1 ~] lxc-console -n ct1

Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Welcome to Alpine Linux 3.7
Kernel 4.4.0-116-generic on an x86_64 (/dev/tty1)

ct1 login:
We can disconnect from the container's console, without stopping it, using the CTRL+A, Q key combination.
Using the lxc-info command, find out the PID of ct1, which corresponds to the container's init process. Any other process running in the container will be a child of this init.
[root@vm-1 ~] lxc-info -n ct1
Name:           ct1
State:          RUNNING
PID:            2101
IP:             10.0.1.38
CPU use:        6.81 seconds
BlkIO use:      8.00 KiB
Memory use:     684.00 KiB
KMem use:       0 bytes
Link:           veth7NF55U
 TX bytes:      1.34 KiB
 RX bytes:      1.52 KiB
 Total bytes:   2.86 KiB
From one terminal, connect to the ct1 console:
[root@vm-1 ~] lxc-console -n ct1
From another terminal on vm-1, print the process hierarchy starting with the container's PID:
[root@vm-1 ~] pstree --ascii -s -c -p 2101
systemd(1)---lxc-start(2091)---init(2101)-+-crond(2373)
                                          |-getty(2424)
                                          |-getty(2425)
                                          |-getty(2426)
                                          |-getty(2427)
                                          |-login(2423)---ash(2432)
                                          |-syslogd(2352)
                                          `-udhcpc(2414)
As shown above, the init process of ct1 is a child process of lxc-start.
Now, print the container processes from within ct1:
ct1:~# ps -ef
PID   USER     TIME   COMMAND
    1 root       0:00 /sbin/init
  213 root       0:00 /sbin/syslogd -Z
  234 root       0:00 /usr/sbin/crond -c /etc/crontabs
  275 root       0:00 udhcpc -b -p /var/run/udhcpc.eth0.pid -i eth0 -x hostname:ct1
  284 root       0:00 /bin/login -- root
  285 root       0:00 /sbin/getty 38400 tty2
  286 root       0:00 /sbin/getty 38400 tty3
  287 root       0:00 /sbin/getty 38400 tty4
  288 root       0:00 /sbin/getty 38400 console
  289 root       0:00 -ash
  290 root       0:00 ps -ef
Even though the same processes can be observed from within and outside of the container, the process PIDs are different. This is because the kernel places each container in its own PID namespace and translates process IDs between namespaces.
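This separation can be inspected directly: each process's PID namespace is exposed as a symlink under /proc. A minimal sketch (works on any recent Linux, no containers required; the PID 2101 in the comment is the ct1 init PID from the example above):

```shell
# Print the PID namespace of the current shell; processes in the same
# namespace show the same identifier, container processes show a different one.
readlink /proc/self/ns/pid

# Compare with the container's init as seen from the host, e.g.:
#   readlink /proc/2101/ns/pid    # 2101 is ct1's init PID from lxc-info
```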
LXC containers have their filesystem stored on the host machine under the following path: /var/lib/lxc/<container-name>/rootfs/.
Using this facility, files can be shared easily between containers and the host:
* from ct1, create a file in the /root directory.
* from vm-1, access the previously created file and edit it.
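As an illustration, a tiny helper (hypothetical, not part of the lab scripts) that builds the host-side rootfs path for a given container name:

```shell
# rootfs_path: print the host-side rootfs directory of an LXC container,
# following the /var/lib/lxc/<container-name>/rootfs/ layout described above.
rootfs_path() {
    echo "/var/lib/lxc/$1/rootfs"
}

# Usage (on vm-1, as root):
#   touch "$(rootfs_path ct1)/root/shared-file"   # appears as /root/shared-file in ct1
```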
The lxc-create tool facilitates the creation of an LXC container. Upon issuing a lxc-create command, the following actions are performed:
* a root filesystem for the container is generated using a template script
* a default configuration file is created under /var/lib/lxc/<container-name>/
The command syntax is the following:
lxc-create -n NAME -t TEMPLATE
where TEMPLATE can be one of alpine, ubuntu, busybox, sshd, debian, fedora and specifies the template script that will be employed when creating the rootfs. All the available template scripts can be found under /usr/share/lxc/templates/.
Create a new container named ct2 using the alpine template and verify its existence using lxc-ls.
You can inspect the configuration file for this new container: /var/lib/lxc/ct2/config
.
[root@vm-1 ~] lxc-create -n ct2 -t alpine
[root@vm-1 ~] lxc-ls
[root@vm-1 ~] cat /var/lib/lxc/ct2/config
Then you can interact with the container:
* start ct2 in the background
* connect to its console using lxc-console
* stop it using lxc-stop
By default, the LXC containers are connected to the exterior using the lxcbr0
bridge with randomly selected IP subnets.
In order to customize the network configuration, we will create a new bridge that will serve both containers.
Create the bridge on the host system vm-1:
[root@vm-1 ~] brctl addbr br-lxc
[root@vm-1 ~] ip link set dev br-lxc up
In order to set up the custom network in the container, change the configuration file for ct1 so that it matches the following listing:
lxc.network.type = veth                 # virtual ethernet - layer 2 virtualization
lxc.network.flags = up                  # bring the interface up at container start
lxc.network.link = br-lxc               # connect the container to the br-lxc bridge on the host
lxc.network.name = eth0                 # interface name as seen in the container
lxc.network.veth.pair = lxc-veth0-ct1   # interface name as seen on the host
Make the same changes to the ct2 config file, changing only the lxc.network.veth.pair attribute to lxc-veth0-ct2.
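Put together, the network section of /var/lib/lxc/ct2/config should end up looking like this (identical to ct1 except for the veth pair name):

```
lxc.network.type = veth                 # virtual ethernet - layer 2 virtualization
lxc.network.flags = up                  # bring the interface up at container start
lxc.network.link = br-lxc               # connect the container to the br-lxc bridge on the host
lxc.network.name = eth0                 # interface name as seen in the container
lxc.network.veth.pair = lxc-veth0-ct2   # interface name as seen on the host
```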
Start both containers in the background and then check the bridge state:
[root@vm-1 ~] brctl show br-lxc
bridge name     bridge id               STP enabled     interfaces
br-lxc          8000.fe59adb65324       no              lxc-veth0-ct1
                                                        lxc-veth0-ct2
Configure the following interfaces using the 11.0.0.0/24 network space:
* the br-lxc bridge on vm-1 - 11.0.0.11
* the eth0 interface in ct1 - 11.0.0.1
* the eth0 interface in ct2 - 11.0.0.2

Test the connectivity between the host system and the containers:
[root@vm-1 ~] ping -c 1 11.0.0.1
And also between the containers:
ct1:~# ping -c 2 11.0.0.2
Configure NAT and enable routing on the host system so that from within the containers we have access to the internet:
[root@vm-1 ~] iptables -t nat -A POSTROUTING -o ens3 -s 11.0.0.0/24 -j MASQUERADE
[root@vm-1 ~] echo 1 > /proc/sys/net/ipv4/ip_forward
Add the default route on both containers and test the internet connectivity:
ct1:~# ip route add default via 11.0.0.11
ct1:~# ping -c 1 www.google.com
LXD is a next generation system container manager. It offers a user experience similar to virtual machines but using Linux containers instead. System containers are designed to run multiple processes and services and for all practical purposes, you can think of OS containers as VMs, where you can run multiple processes, install packages etc.
LXD is image-based, with pre-made images available for a wide number of Linux distributions, and is built around a very powerful, yet pretty simple, REST API.
Let's start by installing LXD on vm-1
:
[root@vm-1 ~] apt-get install lxd
The LXD initialization process can be started using lxd init:
[root@vm-1 ~] lxd init
You will be prompted to specify details about the storage backend for the LXD containers and also networking options.
First, you will be asked if LXD should configure a new storage pool and what name it should take:
Do you want to configure a new storage pool (yes/no) [default=yes]? yes
Name of the storage backend to use (dir or zfs) [default=dir]:
Next, the networking details need to be setup:
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=all]: 10.0.0.1
Port to bind LXD to [default=8443]:
Trust password for new clients:
Again:
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
After this step, an ncurses interface will be brought up with further configuration options:
Would you like to setup a network bridge for LXD containers now? yes
Do you want to setup an IPv4 subnet? Yes
Bridge interface name: lxdbr0
IPv4 address: 12.0.0.0
IPv4 CIDR mask: 24
First DHCP address: 12.0.0.2
Last DHCP address: 12.0.0.254
Max number of DHCP clients: 252
Do you want to NAT the IPv4 traffic: Yes
Do you want to setup an IPv6 subnet: No
We have now successfully configured the LXD storage backend and networking. We can verify that lxdbr0 was properly configured with the given subnet:
[root@vm-1 ~] brctl show lxdbr0
bridge name     bridge id               STP enabled     interfaces
lxdbr0          8000.000000000000       no
[root@vm-1 ~] ifconfig lxdbr0
lxdbr0    Link encap:Ethernet  HWaddr a2:0b:c9:16:1d:d1
          inet addr:12.0.0.0  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::a00b:c9ff:fe16:1dd1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:800 (800.0 B)
Use lxc list
to show the available LXD containers on the host system:
[root@vm-1 ~] lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
This is the first time the lxc client tool communicates with the lxd daemon, and it lets the user know that it automatically generates a client certificate for secure connections with the backend. Finally, the command outputs the list of available containers, which is empty at the moment since we have not created any yet.
LXD uses multiple remote image servers. To list the default remotes we can use lxc remote
:
[root@vm-1 ~] lxc remote list
+-----------------+------------------------------------------+---------------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    | PUBLIC | STATIC |
+-----------------+------------------------------------------+---------------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | YES    | NO     |
+-----------------+------------------------------------------+---------------+--------+--------+
| local (default) | unix://                                  | lxd           | NO     | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | YES    | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | YES    | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
LXD comes with 3 default remotes providing images:
* ubuntu - stable Ubuntu images
* ubuntu-daily - daily Ubuntu images
* images - images for a number of other Linux distributions
We can list the available images on a specific remote using lxc image list
. In the below example, we list all the images from the ubuntu
stable remote matching version 16.04
:
[root@vm-1 ~] lxc image list ubuntu: 16.04
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
|       ALIAS        | FINGERPRINT  | PUBLIC |                  DESCRIPTION                  |  ARCH   |   SIZE   |         UPLOAD DATE          |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| x (9 more)         | c5bbef7f4e1c | yes    | ubuntu 16.04 LTS amd64 (release) (20180306)   | x86_64  | 156.23MB | Mar 6, 2018 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| x/arm64 (4 more)   | e3656464cb5e | yes    | ubuntu 16.04 LTS arm64 (release) (20180306)   | aarch64 | 139.40MB | Mar 6, 2018 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| x/armhf (4 more)   | 30913be27940 | yes    | ubuntu 16.04 LTS armhf (release) (20180306)   | armv7l  | 139.25MB | Mar 6, 2018 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| x/i386 (4 more)    | 39369220866d | yes    | ubuntu 16.04 LTS i386 (release) (20180306)    | i686    | 155.79MB | Mar 6, 2018 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| x/powerpc (4 more) | 45ade88fd54e | yes    | ubuntu 16.04 LTS powerpc (release) (20180306) | ppc     | 144.20MB | Mar 6, 2018 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| x/ppc64el (4 more) | 20af016273ca | yes    | ubuntu 16.04 LTS ppc64el (release) (20180306) | ppc64le | 156.55MB | Mar 6, 2018 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| x/s390x (4 more)   | 2ab7d7bf0d78 | yes    | ubuntu 16.04 LTS s390x (release) (20180306)   | s390x   | 150.22MB | Mar 6, 2018 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
As we can see, images are available for multiple architectures, including armhf, arm64, powerpc, amd64 etc. Since LXD containers share the kernel with the host system and there is no emulation support in containers, we need to choose the image matching the host architecture, in this case x86_64 (amd64).
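As a sketch, the architecture-to-alias mapping from the table above could be automated; the mapping below is taken from the listing, and anything beyond it is an assumption:

```shell
# image_alias: map a machine architecture (as printed by `uname -m`) to the
# corresponding ubuntu:16.04 image alias from the table above.
image_alias() {
    case "$1" in
        x86_64)  echo "x" ;;
        aarch64) echo "x/arm64" ;;
        armv7l)  echo "x/armhf" ;;
        i686)    echo "x/i386" ;;
        ppc64le) echo "x/ppc64el" ;;
        s390x)   echo "x/s390x" ;;
        *)       echo "unsupported" ;;
    esac
}

# Pick the image matching the host, e.g.:
#   lxc launch ubuntu:$(image_alias "$(uname -m)") lxd-ct
```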
Now that we have chosen the container image, let's start a container named lxd-ct
:
[root@vm-1 ~] lxc launch ubuntu:x lxd-ct
The x in ubuntu:x is a shortcut for xenial (Ubuntu 16.04), while ubuntu: is the remote that we want to download the image from. Because this is the first time we launch a container using this image, it will take a while to download the rootfs for the container on the host.
Alternatively, you can launch a smaller Alpine-based container from the images remote:

[root@vm-1 ~] lxc launch images:alpine/3.4 lxd-ct
Running lxc list
we can see that now we have a container running:
[root@vm-1 ~] lxc list
+--------+---------+-------------------+------+------------+-----------+
|  NAME  |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+-------------------+------+------------+-----------+
| lxd-ct | RUNNING | 12.0.0.204 (eth0) |      | PERSISTENT | 0         |
+--------+---------+-------------------+------+------------+-----------+
Let's connect to the lxd-ct
as the preconfigured user ubuntu
:
[root@vm-1 ~] lxc exec lxd-ct -- sudo --login --user ubuntu
If you launched the Alpine image instead (which has no preconfigured ubuntu user), simply start a shell:

[root@vm-1 ~] lxc exec lxd-ct -- /bin/sh
The first -- in the command specifies that the lxc exec command options stop there; everything that follows is the command to be run in the container. In this case, we want to log in to the system as the ubuntu user.
Now we can check all the processes running in the container:
ubuntu@lxd-ct:~$ ps aux
As we can see from the output of ps
, the LXD container runs the systemd
init subsystem and not just the bash
session as we saw in LXC containers.
To quit the container shell, a simple CTRL+D is enough. As a final step, let's stop our LXD container:
[root@vm-1 ~] lxc stop lxd-ct
[root@vm-1 ~] lxc list
+--------+---------+------+------+------------+-----------+
|  NAME  |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+------+------+------------+-----------+
| lxd-ct | STOPPED |      |      | PERSISTENT | 0         |
+--------+---------+------+------+------------+-----------+
While system containers are designed to run multiple processes and services, application containers
such as Docker are designed to package and run a single service. Docker also uses image servers for hosting already built images. Check out Docker Hub for both official images and also user uploaded ones.
The KVM machine vm-2
already has Docker installed using the official guide. Let's start by logging into the KVM machine:
[student@saisp ~] $ ssh root@10.0.0.2
Let's check if any Docker container is running on this host and if there are any container images on the system:
root@vm-2:~# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
root@vm-2:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
Since there are no containers, let's search for the alpine image, one of the smallest Linux distributions, on Docker Hub:
[root@vm-2 ~] docker search alpine
NAME      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
alpine    A minimal Docker image based on Alpine Linux…   3328    [OK]
...
The command will output any image that has alpine in its name. The first one is the official alpine Docker image. Let's use it to start a container interactively:
[root@vm-2 ~] docker run -it alpine /bin/sh
/ #
/ # ps aux
PID   USER     TIME   COMMAND
    1 root       0:01 /bin/sh
    5 root       0:00 ps aux
/ #
In a similar fashion to LXD containers, Docker will first download the image locally and then start a container in which the only process is the /bin/sh invoked terminal. We can exit the container with CTRL+D.
If we check the container list, we can see the previously created container and its ID (cb62c4e3a0da) and name (condescending_pare):
[root@vm-2 ~] docker ps -a
CONTAINER ID    IMAGE     COMMAND      CREATED              STATUS                      PORTS    NAMES
cb62c4e3a0da    alpine    "/bin/sh"    About a minute ago   Exited (0) 44 seconds ago            condescending_pare
We can always reuse the same container: start it again and then attach to its terminal, or simply run a command in it. Below you can see some of the possible commands:
# start the container with ID cb62c4e3a0da
[root@vm-2 ~] docker start cb62c4e3a0da
# execute a command inside the container without attaching to its terminal
[root@vm-2 ~] docker exec cb62c4e3a0da cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.7.0
PRETTY_NAME="Alpine Linux v3.7"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
The goal now is to dockerize and run a simple Node.js application that exposes a REST API. On the vm-2 node, the /root/messageApp path contains the sources of the application and also a Dockerfile.
A Dockerfile is a file that contains all the commands a user could call on the command line to assemble an image. Using a Dockerfile in conjunction with docker build, users can automatically create container images.
Let's understand the syntax of our Dockerfile, /root/messageApp/Dockerfile:
# Use node 4.4.5 LTS
FROM node:4.4.5
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
# Expose API port to the outside
EXPOSE 80
# Launch application
CMD ["npm","start"]
The actions performed by the Dockerfile are the following:
* FROM node:4.4.5 - starts from the node base image
* COPY . /app - copies the sources from the current directory (messageApp) on the host to the /app folder in the container
* WORKDIR /app - changes the working directory to /app
* RUN npm install - installs all dependencies
* EXPOSE 80 - exposes port 80 to the host machine
* CMD ["npm","start"] - launches the Node.js application

To build the application image we would issue a command like the following:
[root@vm-2 ~/messageApp] docker build -t message-app .
The -t parameter specifies the name of the new image, while the dot at the end of the command specifies where to find the Dockerfile, in our case the current directory.
Alternatively, you can pull an already built image of this application from Docker Hub:

[root@vm-2 ~] docker pull ioanaciornei/message-app:4
While the message-app image is building or downloading (depending on what you chose), you can start working on the next step.
Docker can be used in swarm mode to natively manage a cluster of Docker Hosts that can each run multiple containers. You can read more on the possibilities when using swarm on the official documentation.
The KVM machines vm-1
and vm-2
already form a cluster from Docker's standpoint. While vm-2
is the manager node
, vm-1
is just a cluster node. You can check the cluster organization using docker node
:
[root@vm-2 ~] docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
sf7ch7hwr0hxr2r1fy9buco4o     vm-1       Ready    Active
9p0di4ilnoi33akaeds5nar54 *   vm-2       Ready    Active         Leader
In order to enable connectivity between containers running on both machines, we need to create an overlay network
- a multi-host network from the manager node - vm-2
:
[root@vm-2 ~] docker network create --attachable -d overlay appnet
btv9y67io40eiqf8r39v13sf3
[root@vm-2 ~] docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
btv9y67io40e        appnet              overlay             swarm
...
The application messageApp
that is deployed in step 10 also needs a connection to a mongodb
server. For this, we will start another container on vm-1
that hosts it. We are using the official mongo
image and also connecting the new container on the overlay network:
[root@vm-1 ~] docker run -d --name mongo --net=appnet mongo:3.2
[root@vm-1 ~] docker ps
CONTAINER ID    IMAGE        COMMAND                   CREATED         STATUS              PORTS        NAMES
f594d1f22208    mongo:3.2    "docker-entrypoint.s…"    3 minutes ago   Up About a minute   27017/tcp    mongo
Now that we have a mongodb container up and running on the appnet
multi-host network, let's test the connectivity by starting a container on vm-2
:
[root@vm-2 ~] docker run -ti --name box --net=appnet alpine sh
/ # ping mongo
PING mongo (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=82.344 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=13.845 ms
Finally, start a container using the message-app
image:
[root@vm-2 ~] docker run -d --net=appnet ioanaciornei/message-app:4
It may take a while for the container to start. You can check the logs using the docker logs
command:
[root@vm-2 ~] docker logs <container-id>
You can check that the Node.js application is running and that it has access to the mongodb container as follows:
[root@vm-2 ~] curl http://172.18.0.4:1337/message
[]
[root@vm-2 ~] curl -XPOST http://172.18.0.4:1337/message?text=finally-done
{
  "text": "finally-done",
  "createdAt": "2018-03-20T17:09:00.933Z",
  "updatedAt": "2018-03-20T17:09:00.933Z",
  "id": "5ab1402df90c5b10009f86bd"
}
[root@vm-2 ~] curl http://172.18.0.4:1337/message
[
  {
    "text": "finally-done",
    "createdAt": "2018-03-20T17:09:00.933Z",
    "updatedAt": "2018-03-20T17:09:00.933Z",
    "id": "5ab1402df90c5b10009f86bd"
  }
]
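To avoid retyping the endpoint, the calls above can be wrapped in a small, hypothetical helper; the IP 172.18.0.4 and port 1337 come from the example output and may differ on your system (check with docker logs or docker inspect):

```shell
# message_url: build the message endpoint URL for a given container IP
# (port 1337 is the one used by the example application above).
message_url() {
    echo "http://$1:1337/message"
}

# Usage (against the running message-app container):
#   curl "$(message_url 172.18.0.4)"                     # list messages
#   curl -XPOST "$(message_url 172.18.0.4)?text=hello"   # create a message
```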