===== Container-based virtualization =====
  
===== Lab Setup =====

  * We will be using a virtual machine in the [[http://cloud.grid.pub.ro/|faculty's cloud]].
  * When creating a virtual machine in the Launch Instance window:
    * Select **Boot from image** in **Instance Boot Source** section
    * Select **SCGC Template** in **Image Name** section
    * Select a flavor that is at least **m1.medium**.
  * The username for connecting to the VM is ''student''
  * Within the above virtual machine, we will be running two KVM virtual machines (''vm-1'', ''vm-2'')
  * First, download the laboratory archive:
<code bash>
student@scgc:~$ cd scgc/
student@scgc:~/scgc$ wget --user=<username> --ask-password http://repository.grid.pub.ro/cs/scgc/laboratoare/lab-04.zip
student@scgc:~/scgc$ unzip lab-04.zip
</code>
  * After unzipping the archive, several KVM image files (''qcow2'' format) should be present, as well as two scripts (''lab04-start'' and ''lab04-stop'')
  * To run the virtual machines, use the ''lab04-start'' script:
<code bash>
student@scgc:~/scgc$ sh lab04-start
</code>
  * It may take a minute for the virtual machines to start
  * In order to connect to each of the machines, use the following command (substitute X with 1, 2):
<code bash>
student@scgc:~/scgc$ ssh student@10.0.0.X
</code>
  * The password for both ''student'' and ''root'' users is ''student''
  
===== Tasks =====
  
==== 1. [LXC] Check for LXC support ====
  
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. Support is integrated into the mainline kernel starting with version 2.6.29, which means that any Linux kernel from that version onwards can support LXC containers if properly configured.
  
Some key aspects related to LXC:
Start by connecting to ''vm-1'':
<code bash>
student@scgc:~/scgc$ ssh student@10.0.0.1
</code>

You can use ''sudo su'' to switch user to ''root'' after connecting.

Using the ''lxc-checkconfig'' command, check if the hardware node's kernel supports LXC:
<code bash>
root@vm-1:~# lxc-checkconfig
</code>

Also, verify that the ''cgroup'' filesystem is mounted:
<code bash>
root@vm-1:~# mount
</code>
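
The ''mount'' output can be fairly long; to spot the relevant entries you can filter it (a minimal sketch):
<code bash>
root@vm-1:~# mount | grep cgroup
</code>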
  
  
==== 2. [LXC] Create basic containers ====
  
The ''lxc-create'' tool is used to facilitate the creation of an LXC container. Upon issuing an ''lxc-create'' command, the following actions are performed:
  * a minimal configuration file is created
  * the basic root filesystem for the container is created by downloading the necessary packages from remote repositories

The command syntax is the following:
<code bash>
lxc-create -n NAME -t TEMPLATE
</code>

<spoiler Alpine container create error fix>
If the command fails with the following error (reported [[https://gitlab.alpinelinux.org/alpine/aports/-/issues/12326|here]]):
<code>
lxc-create: ct1: lxccontainer.c: create_run_template: 1617 Failed to create container from template
lxc-create: ct1: tools/lxc_create.c: main: 327 Failed to create container ct1
</code>

You will need to apply this patch to the file:
<code diff lxc-alpine.patch>
@@ -281 +281 @@
- mknod -m 666 dev/zero c 1 5
+ mknod -m 666 dev/zero c 1 5 || true
@@ -283,2 +283,2 @@
- mknod -m 666 dev/random c 1 8
- mknod -m 666 dev/urandom c 1 9
+ mknod -m 666 dev/random c 1 8 || true
+ mknod -m 666 dev/urandom c 1 9 || true
@@ -293 +293 @@
- mknod -m 620 dev/console c 5 1
+ mknod -m 620 dev/console c 5 1 || true
</code>

To apply the patch, use:
<code bash>
patch /usr/share/lxc/templates/lxc-alpine < lxc-alpine.patch
</code>
</spoiler>
\\

Some of the possible values for ''TEMPLATE'' are ''alpine'', ''ubuntu'', ''busybox'', ''sshd'', ''debian'', or ''fedora''; this argument selects the template script that will be employed when creating the ''rootfs''. All the available template scripts can be found in ''/usr/share/lxc/templates/''.
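
To see exactly which template scripts are installed on ''vm-1'', you can simply list that directory (a minimal sketch):
<code bash>
root@vm-1:~# ls /usr/share/lxc/templates/
</code>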

Create a new container named ''ct1'', using the ''alpine'' template.
You can inspect the configuration file for this new container: ''/var/lib/lxc/ct1/config''.

<code bash>
root@vm-1:~# lxc-create -n ct1 -t alpine
root@vm-1:~# lxc-ls
root@vm-1:~# cat /var/lib/lxc/ct1/config
</code>
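
If you want to start over at any point, a container created this way can be removed with ''lxc-destroy'' (a minimal sketch; the container must not be running):
<code bash>
root@vm-1:~# lxc-destroy -n ct1
</code>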
 + 
==== 3. [LXC] Basic interaction ====

To see that the ''ct1'' container has been created, on ''vm-1'' run:
<code bash>
root@vm-1:~# lxc-ls
ct1
</code>
Start the container by issuing the following command:
<code bash>
root@vm-1:~# lxc-start -n ct1 -F

   OpenRC 0.42.1.d76962aa23 is starting up Linux 4.19.0-8-amd64 (x86_64) [LXC]

 * /proc is already mounted
 * Starting busybox syslog ... [ ok ]
 * Starting busybox crond ... [ ok ]
 * Starting networking ... *   eth0 ...ip: ioctl 0x8913 failed: No such device
 [ !! ]
 * ERROR: networking failed to start

Welcome to Alpine Linux 3.11
Kernel 4.19.0-8-amd64 on an x86_64 (/dev/console)

ct1 login:
</code>
  
Sent SIGKILL to all processes
Requesting system halt
root@vm-1:~#
</code>

By adding the ''-d, --daemon'' argument to the ''lxc-start'' command, the container can be started in the **background**:
<code bash>
root@vm-1:~# lxc-start -n ct1 -d
</code>
  
Verify the container state using ''lxc-info'':
<code bash>
root@vm-1:~# lxc-info -n ct1
Name:           ct1
State:          RUNNING
PID:            1977
CPU use:        0.32 seconds
BlkIO use:      4.00 KiB
Memory use:     1.42 MiB
KMem use:       1.01 MiB
</code>
  
Finally, we can connect to the container's console using ''lxc-console'':
<code bash>
root@vm-1:~# lxc-console -n ct1

Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Welcome to Alpine Linux 3.11
Kernel 4.19.0-8-amd64 on an x86_64 (/dev/tty1)

ct1 login:
</code>
  
We can disconnect from the container's console, without stopping it, using the **CTRL+A, Q** key combination.
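
Besides ''lxc-console'', you can usually get a root shell inside a running container directly, without going through the login prompt, using ''lxc-attach'' (a minimal sketch):
<code bash>
root@vm-1:~# lxc-attach -n ct1
</code>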
  
-==== 3[5p] [LXC] Process hierarchy ====+==== 4. [LXC] Process hierarchy ====
  
Using the ''lxc-info'' command, find out the ''ct1'' PID, which corresponds to the container's init process. Any other process running in the container will be a child of this ''init''.
  
<code bash>
root@vm-1:~# lxc-info -n ct1
Name:           ct1
State:          RUNNING
PID:            1977
CPU use:        0.33 seconds
BlkIO use:      4.00 KiB
Memory use:     1.66 MiB
KMem use:       1.06 MiB
</code>
  
From one terminal, connect to the ''ct1'' console:
<code bash>
root@vm-1:~# lxc-console -n ct1
</code>
  
From another terminal on ''vm-1'', print the process hierarchy starting with the container's PID:
<code bash>
# Install pstree
root@vm-1:~# apt update
root@vm-1:~# apt install psmisc
root@vm-1:~# pstree --ascii -s -c -p 1977
systemd(1)---lxc-start(1974)---init(1977)-+-crond(2250)
                                          |-getty(2285)
                                          |-getty(2286)
                                          |-getty(2287)
                                          |-getty(2288)
                                          |-login(2284)---ash(2297)
                                          `-syslogd(2222)
</code>
  
<code bash>
ct1:~# ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 /sbin/init
  246 root      0:00 /sbin/syslogd -t
  274 root      0:00 /usr/sbin/crond -c /etc/crontabs
  307 root      0:00 /bin/login -- root
  308 root      0:00 /sbin/getty 38400 tty2
  309 root      0:00 /sbin/getty 38400 tty3
  310 root      0:00 /sbin/getty 38400 tty4
  311 root      0:00 /sbin/getty 38400 console
  312 root      0:00 -ash
  314 root      0:00 ps -ef
</code>
  
Even though the same processes can be observed from within and outside of the container, the process PIDs are different. This is because each container has its own PID namespace, and the operating system translates process IDs between namespaces.
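
This mapping can also be observed from the host (a minimal sketch, assuming the container's init has host PID 1977 as above): the ''NSpid'' field lists the PID of the same process in each nested PID namespace.
<code bash>
root@vm-1:~# grep NSpid /proc/1977/status
NSpid:  1977    1
</code>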
  
-==== 4[5p] [LXC] Filesystem ====+==== 5. [LXC] Filesystem ====
  
LXC containers have their filesystem stored in the host machine under the following path: ''/var/lib/lxc/<container-name>/rootfs/''.
  * Verify that the changes are also visible from the container (see the sketch below).
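
A minimal sketch of such a check (assuming the ''ct1'' container from the previous tasks is running; the file name is only an example):
<code bash>
# on the host, create a file inside ct1's root filesystem
root@vm-1:~# echo "hello from the host" > /var/lib/lxc/ct1/rootfs/root/hello.txt
# from within the container, the same file is visible under /root
ct1:~# cat /root/hello.txt
hello from the host
</code>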
  
-==== 5[5p] [LXC] Create a basic container ​====+==== 6. [LXC] Networking ​====
  
By default, our LXC containers are not connected to the exterior.

In order to customize the network configuration, we will use the default bridge ''docker0'' to serve both containers.

In order to set up the custom network in the container, change the configuration file for ''ct1'' so that it matches the following listing:
<code bash>
# virtual ethernet - level 2 virtualization
lxc.net.0.type = veth
# bring the interface up at container creation
lxc.net.0.flags = up
# connect the container to the docker0 bridge on the host
lxc.net.0.link = docker0
# interface name as seen in the container
lxc.net.0.name = eth0
# interface name as seen in the host system
lxc.net.0.veth.pair = lxc-veth-ct1
</code>
  
<note warning>
Keep the comments outside of the configuration file, because the lxc syntax does not permit them.
</note>
  
-==== 6[10p] [LXC] Networking ====+Create a new container called ''​ct2''​ and make the same changes its config file changing only the ''​lxc.net.0.veth.pair''​ attribute to ''​lxc-veth-ct2''​.
  
Start both containers in the background and then check the bridge state:
  
<code bash>
root@vm-1:~# brctl show docker0
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242c37f8020       no              lxc-veth-ct1
                                                        lxc-veth-ct2
</code>
  
Configure the following interfaces using a 172.17.0.0/24 network space (a sketch of the commands follows the list):
  * the ''eth0'' interface from ''ct1'' - 172.17.0.11
  * the ''eth0'' interface from ''ct2'' - 172.17.0.12

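A minimal sketch of the address assignment, run from inside each container (e.g. via ''lxc-console'' or ''lxc-attach''; the addresses are the ones listed above):
<code bash>
ct1:~# ip address add 172.17.0.11/24 dev eth0
ct2:~# ip address add 172.17.0.12/24 dev eth0
</code>
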
Test the connectivity between the host system and the containers:
<code bash>
root@vm-1:~# ping -c 1 172.17.0.11
</code>
  
And also between the containers:
<code bash>
ct1:~# ping -c 2 172.17.0.12
</code>
  
  
Configure NAT and enable routing on the host system so that from within the containers we have access to the internet:
<code bash>
root@vm-1:~# echo 1 > /proc/sys/net/ipv4/ip_forward
# if the default iptables rules on docker0 do not allow NAT functionality already
root@vm-1:~# iptables -t nat -A POSTROUTING -o ens3 -s 172.17.0.0/24 -j MASQUERADE
</code>
  
Add the default route on both containers and test the internet connectivity:
<code bash>
ct1:~# ip route add default via 172.17.0.1
ct1:~# ping -c 1 www.google.com
</code>
  
<note>
If the Linux container cannot ping ''www.google.com'', but is able to ping ''1.1.1.1'', a valid DNS resolver was likely not configured.

To fix this, run ''%%echo "nameserver 1.1.1.1" > /etc/resolv.conf%%'' to use Cloudflare's DNS resolver.
</note>

==== 7. [LXD] Intro  ====

LXD is a next generation **system container** manager. It offers a user experience similar to virtual machines, but using Linux containers instead. System containers are designed to run multiple processes and services; for all practical purposes, you can think of OS containers as VMs in which you can run multiple processes, install packages etc.

LXD is image based, with pre-made images available for a wide number of Linux distributions, and is built around a very powerful, yet pretty simple, REST API.

Let's start by installing LXD on ''vm-1'' using snap and setting up the ''PATH'' variable so we can use it easily:
<code bash>
root@vm-1:~# apt install snapd
root@vm-1:~# snap install --channel=2.0/stable lxd
root@vm-1:~# export PATH="$PATH:/snap/bin"
</code>
  
The LXD initialization process can be started using ''lxd init'':
<code bash>
root@vm-1:~# lxd init
</code>
  
You will be prompted to specify details about the storage backend for the LXD containers and also networking options:
  
<code>
root@vm-1:~# lxd init
Do you want to configure the LXD bridge (yes/no) [default=yes]? # press Enter
What should the new bridge be called [default=lxdbr0]? # press Enter
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 12.0.0.1/24
Would you like LXD to NAT IPv4 traffic on your bridge? [default=yes]? # press Enter
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? # press Enter
Do you want to configure a new storage pool (yes/no) [default=yes]? # press Enter
Name of the storage backend to use (dir or zfs) [default=dir]: # press Enter
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=all]: 10.0.0.1
Port to bind LXD to [default=8443]: # press Enter
Trust password for new clients: # enter a password and press Enter
Again: # re-enter the same password and press Enter
LXD has been successfully configured.
</code>
  
  
We have now successfully configured the LXD storage back-end and also networking. We can verify that ''lxdbr0'' was properly configured with the given subnet:
<code bash>
root@vm-1:~# brctl show lxdbr0
bridge name     bridge id               STP enabled     interfaces
lxdbr0          8000.000000000000       no
root@vm-1:~# ip address show lxdbr0
13: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether c6:fd:e7:04:9c:de brd ff:ff:ff:ff:ff:ff
    inet 12.0.0.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:89a:615d:8d24::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::c4fd:e7ff:fe04:9cde/64 scope link
       valid_lft forever preferred_lft forever
</code>
  
Use ''lxc list'' to show the available LXD containers on the host system:
<code bash>
root@vm-1:~# lxc list
Generating a client certificate. This may take a minute...
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
</code>
  
This is the first time the ''lxc'' client tool communicates with the ''lxd'' daemon, and it lets the user know that it automatically generates a client certificate for secure connections with the back-end. Finally, the command outputs a list of available containers, which is empty at the moment since we did not create any yet.
  
-==== 8. [20p] [LXD] Start a system container ​ ====+<note important>​ 
 +The ''​lxc''​ tool is part of the ''​lxd''​ package, **not** the ''​lxc''​ one. It will only communicate with the ''​lxd''​ daemon, and will therefore not show any information about containers previously created. 
 +</​note>​ 
 + 
 +==== 8. [LXD] Start a system container ​ ====
  
LXD uses multiple remote image servers. To list the default remotes we can use ''lxc remote'':
<code bash>
root@vm-1:~# lxc remote list
+-----------------+------------------------------------------+---------------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    | PUBLIC | STATIC |
+-----------------+------------------------------------------+---------------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | YES    | YES    |
+-----------------+------------------------------------------+---------------+--------+--------+
</code>
  
  * images: (for a bunch of other distros)
  
We can list the available images on a specific remote using ''lxc image list''. In the below example, we list all the images from the ''ubuntu'' stable remote matching version ''20.04'':
<code bash>
root@vm-1:~# lxc image list ubuntu: 20.04
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
|       ALIAS        | FINGERPRINT  | PUBLIC |                  DESCRIPTION                  |  ARCH   |   SIZE   |         UPLOAD DATE          |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| f (more)           | 647a85725003 | yes    | ubuntu 20.04 LTS amd64 (release) (20200504)   | x86_64  | 345.73MB | May 4, 2020 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| f/arm64 (more)     | 9cb323cab3f4 | yes    | ubuntu 20.04 LTS arm64 (release) (20200504)   | aarch64 | 318.86MB | May 4, 2020 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| f/armhf (more)     | 25b0b3d1edf9 | yes    | ubuntu 20.04 LTS armhf (release) (20200504)   | armv7l  | 301.15MB | May 4, 2020 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| f/ppc64el (more)   | 63ff040bb12b | yes    | ubuntu 20.04 LTS ppc64el (release) (20200504) | ppc64le | 347.49MB | May 4, 2020 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
| f/s390x (more)     | d7868570a060 | yes    | ubuntu 20.04 LTS s390x (release) (20200504)   | s390x   | 315.86MB | May 4, 2020 at 12:00am (UTC) |
+--------------------+--------------+--------+-----------------------------------------------+---------+----------+------------------------------+
</code>
  
As we can see, there are available images for multiple architectures, including armhf, arm64, powerpc, amd64 etc. Since LXD containers share the kernel with the host system and there is no emulation support in containers, we need to choose the image matching the host architecture, in this case ''x86_64'' (''amd64'').
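
If you are unsure about the host architecture, you can check it first (a minimal sketch):
<code bash>
root@vm-1:~# uname -m
x86_64
</code>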
  
Now that we have chosen the container image, let's start a container named ''lxd-ct'':
<code bash>
root@vm-1:~# lxc launch ubuntu:f lxd-ct
</code>
  
The ''f'' (extracted from the ''ALIAS'' column) in ''ubuntu:f'' is the shortcut for ''focal ubuntu'' (version 20.04 is codenamed Focal Fossa). ''ubuntu:'' is the remote that we want to download the image from. Because this is the first time we launch a container using this image, it will take a while to download the rootfs for the container on the host.
  
<note>
As an alternative:
<code bash>
root@vm-1:~# lxc launch images:alpine/3.11 lxd-ct
</code>
</note>
Running ''lxc list'' we can see that now we have a container running:
<code bash>
root@vm-1:~# lxc list
+--------+---------+------------------+----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |       IPV4       |                     IPV6                     |    TYPE    | SNAPSHOTS |
+--------+---------+------------------+----------------------------------------------+------------+-----------+
| lxd-ct | RUNNING | 12.0.0.93 (eth0) | fd42:89a:615d:8d24:216:3eff:fea6:92f2 (eth0) | PERSISTENT | 0         |
+--------+---------+------------------+----------------------------------------------+------------+-----------+
</code>
  
Let's connect to ''lxd-ct'' as the preconfigured user ''ubuntu'':
<code bash>
root@vm-1:~# lxc exec lxd-ct -- sudo --login --user ubuntu
</code>
  
<note>
As an alternative:
<code bash>
root@vm-1:~# lxc exec lxd-ct -- /bin/sh
</code>
</note>
To quit the container shell, a simple ''CTRL - D'' is enough. As a final step, let's stop our LXD container:
<code bash>
root@vm-1:~# lxc stop lxd-ct
root@vm-1:~# lxc list
+--------+---------+------+------+------------+-----------+
|  NAME  |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+------+------+------------+-----------+
| lxd-ct | STOPPED |      |      | PERSISTENT | 0         |
+--------+---------+------+------+------------+-----------+
</code>
  
  
==== 9. [Docker] Basic container interaction ====

While system containers are designed to run multiple processes and services, ''application containers'' such as Docker are designed to package and run a single service. Docker also uses image servers for hosting already built images. Check out [[https://hub.docker.com/explore/|Docker Hub]] for both official images and also user uploaded ones.

The KVM machine ''vm-2'' already has Docker installed. Let's start by logging into the KVM machine:

<code bash>
student@scgc:~/scgc$ ssh student@10.0.0.2
</code>
  
Let's check if any Docker container is running on this host or if there are any container images on the system:
  
<code bash>
root@vm-2:~# docker ps
root@vm-2:~# docker images
</code>
  
Since there are no containers, let's search for the ''alpine'' image, one of the smallest Linux distributions, on Docker Hub:
<code bash>
root@vm-2:~# docker search alpine
NAME                                   DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
alpine                                 A minimal Docker image based on Alpine Linux…   6418                [OK]
...
</code>
The command will output any image that has ''alpine'' in its name. The first one is the official alpine Docker image. Let's use it to start a container interactively:
<code bash>
root@vm-2:~# docker run -it alpine /bin/sh
/ #
/ # ps aux
PID   USER     TIME   COMMAND
    1 root       0:01 /bin/sh
    5 root       0:00 ps aux
/ #
</code>
In a fashion similar to LXD containers, Docker will first download the image locally and then start a container in which the only process is the ''/bin/sh'' invoked terminal. We can exit the container with ''CTRL - D''.

If we check the container list, we can see the previously created container and its ID (f3b608d7cc4c) and name (vigorous_varahamihira):
<code bash>
root@vm-2:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
f3b608d7cc4c        alpine              "/bin/sh"           5 minutes ago       Exited (0) 11 seconds ago                       vigorous_varahamihira
</code>
  
We can always reuse the same container, start it again and then attach to its terminal or simply run a command in it. Below you can see some of the possible commands:
<code bash>
root@vm-2:~# docker start vigorous_varahamihira # start the container with the name vigorous_varahamihira
vigorous_varahamihira
root@vm-2:~# docker exec vigorous_varahamihira cat /etc/os-release # run a command inside the container
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.11.6
PRETTY_NAME="Alpine Linux v3.11"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
</code>
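
When you are done experimenting, the container can be stopped and removed (a minimal sketch, using the container name from above):
<code bash>
root@vm-2:~# docker stop vigorous_varahamihira
root@vm-2:~# docker rm vigorous_varahamihira
</code>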
  
==== 10. [Docker] Dockerfile  ====
  
The goal now is to //dockerize// and run a simple Node.js application that exposes a REST API. On the ''vm-2'' node, in the ''/root/messageApp'' path you can find the sources of the application and also a ''Dockerfile''.
The actions performed by the Dockerfile are the following:
  * ''FROM node:4.4.5'' - starts from the [[https://hub.docker.com/r/library/node/tags/|node]] base image
  * ''COPY . /app'' - copies the sources from the ''vm-2'' host current directory (''messageApp'') to the ''/app'' folder in the container
  * ''RUN npm install'' - installs all dependencies
  * ''EXPOSE 80'' - exposes port 80 to the host machine
  * ''CMD ["npm","start"]'' - launches the Node.js application
To build the application image we would issue a command like the following:
<code bash>
root@vm-2:~/messageApp# docker build -t message-app .
</code>

The ''-t'' parameter specifies the name of the new image, while the dot at the end of the command specifies where to find the ''Dockerfile'' - in our case, the current directory.
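
Once the build completes, the new image should show up in the local image list (a minimal sketch):
<code bash>
root@vm-2:~/messageApp# docker images message-app
</code>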
  
<note important>
Building the image in this environment will take a great amount of time. If you are planning to also complete the last step (11 - Bonus), please download an already built image from DockerHub using the command below:
<code bash>
root@vm-2:~# docker pull ioanaciornei/message-app:4
</code>
</note>
  
While the ''message-app'' image is building or downloading (depending on what you chose) you can start to complete the next step.

==== 11. [BONUS] [Docker] Docker Swarm ====
  
Docker can be used in **swarm mode** to natively manage a cluster of Docker Hosts that can each run multiple containers. You can read more on the possibilities when using swarm in the [[https://docs.docker.com/engine/swarm/|official documentation]].

Create a new swarm following the official [[https://docs.docker.com/engine/reference/commandline/swarm/|documentation]], where ''vm-2'' is the ''manager node'' and ''vm-1'' is a cluster node. You can check the cluster organization using ''docker node ls'':
<code bash>
root@vm-2:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
t37w45fqp5kdlttc4gz7xibb8     vm-1                Ready               Active                                  18.09.1
j43vq230skobhaokrc6thznt1 *   vm-2                Ready               Active              Leader              18.09.1
</code>
  
-In order enable connectivity between containers running on both machines, we need to create an ''​overlay network'' ​- a multi-host network from the manager node - ''​vm-2''​:+In order enable connectivity between containers running on both machines, we need to create an [[https://​docs.docker.com/​network/​overlay/​|attachable ​overlay network]] - a multi-host network from the manager node - called ''​appnet''​ on ''​vm-2''​. You can check the configuration using ''​docker network ls''​ and ''​docker network inspect''​.
<code bash>
root@vm-2:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
mi2h6quzc7kt        appnet              overlay             swarm
85865eb2036a        bridge              bridge              local
bff38e0689a8        docker_gwbridge     bridge              local
0f082dff9018        host                host                local
to0vd04fsv8q        ingress             overlay             swarm
e8bce560fd6f        none                null                local
</code>
  
  
<code bash>
root@vm-1:~# docker run -d --name mongo --net=appnet mongo
root@vm-1:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
849b15ac13d6        mongo               "docker-entrypoint.s…"   21 seconds ago      Up 15 seconds       27017/tcp           mongo
</code>
  
Now that we have a mongodb container up and running on the ''appnet'' multi-host network, let's test the connectivity by starting a container on ''vm-2'':
<code bash>
root@vm-2:~# docker run -ti --name box --net=appnet alpine sh
/ # ping mongo
PING mongo (10.0.1.2): 56 data bytes
64 bytes from 10.0.1.2: seq=0 ttl=64 time=1.150 ms
64 bytes from 10.0.1.2: seq=1 ttl=64 time=1.746 ms

# If the name is not registered, you can find the mongo container's IPv4 address using the 'inspect' command.
# We are interested in the "IPv4Address" field under Networks.appnet
root@vm-2:~# docker inspect mongo
...
"NetworkSettings": {
    ...
    "Networks": {
        "appnet": {
            "IPAMConfig": {
                "IPv4Address": "10.0.1.2"
            },
...

root@vm-2:~#
</code>
  
Finally, start a container using the ''message-app'' image:
  
<code bash>
root@vm-2:~# docker run -d --net=appnet ioanaciornei/message-app:4
</code>
  
It may take a while for the container to start. You can check the logs using the ''docker logs'' command:
<code bash>
root@vm-2:~# docker logs <container-id>
</code>
  
You can check that the Node.js application is running and that it has access to the mongo container by querying its REST API.
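
The ''172.18.0.4'' address used below is the application container's IP in this particular environment; one way to look it up in your own setup (a sketch, reusing the container ID reported by ''docker ps'') is:
<code bash>
root@vm-2:~# docker inspect <container-id> | grep IPAddress
</code>
With the address in hand, the REST API can be exercised as follows: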
<code bash>
root@vm-2:~# curl http://172.18.0.4:1337/message
[]
root@vm-2:~# curl -XPOST http://172.18.0.4:1337/message?text=finally-done
{
  "text": "finally-done",
  "createdAt": "2020-05-06T16:25:37.477Z",
  "updatedAt": "2020-05-06T16:25:37.477Z",
  "id": "5eb2e501f9582c1100585129"
}
root@vm-2:~# curl http://172.18.0.4:1337/message
[
  {
    "text": "finally-done",
    "createdAt": "2020-05-06T16:25:37.477Z",
    "updatedAt": "2020-05-06T16:25:37.477Z",
    "id": "5eb2e501f9582c1100585129"
  }
]
</code>