scgc:laboratoare:04 [2020/11/04 17:53]
darius.mihai [7. [15p] [LXD] Intro]
scgc:laboratoare:04 [2021/10/20 19:09] (current)
alexandru.carp [4. [LXC] Process hierarchy]
===== Container based virtualization =====
  
===== Lab Setup =====
  
  * We will be using a virtual machine in the [[http://cloud.grid.pub.ro/|faculty's cloud]].
  * When creating a virtual machine in the Launch Instance window:
    * Select **Boot from image** in **Instance Boot Source** section
  * In order to connect to each of the machines, use the following command (substitute X with 1, 2):
<code bash>
student@scgc:~/scgc$ ssh student@10.0.0.X
</code>
  * The password for both ''student'' and ''root'' users is ''student''

===== Tasks =====
  
==== 1. [LXC] Check for LXC support ====
  
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. Support has been integrated in the mainline kernel starting with version 2.6.29, which means that any Linux kernel from that version onward, if properly configured, can support LXC containers.
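A quick way to verify that the running kernel exposes the required features (namespaces, cgroups, and so on) is the ''lxc-checkconfig'' utility shipped with the LXC userspace tools. This is a sketch; the exact list of checks printed depends on the LXC version and kernel configuration:
<code bash>
# Print the kernel configuration checks relevant to LXC
root@vm-1:~# lxc-checkconfig
</code>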
  
  
==== 2. [LXC] Create basic containers ====
  
The ''lxc-create'' tool is used to facilitate the creation of an LXC container. Upon issuing an ''lxc-create'' command, the following actions are performed:
lxc-create -n NAME -t TEMPLATE
</code>

<spoiler Alpine container create error fix>
If the command fails with the following error (reported [[https://gitlab.alpinelinux.org/alpine/aports/-/issues/12326|here]]):
<code>
lxc-create: ct1: lxccontainer.c: create_run_template: 1617 Failed to create container from template
lxc-create: ct1: tools/lxc_create.c: main: 327 Failed to create container ct1
</code>

you will need to apply the following patch to the template script:
<code diff lxc-alpine.patch>
@@ -281 +281 @@
- mknod -m 666 dev/zero c 1 5
+ mknod -m 666 dev/zero c 1 5 || true
@@ -283,2 +283,2 @@
- mknod -m 666 dev/random c 1 8
- mknod -m 666 dev/urandom c 1 9
+ mknod -m 666 dev/random c 1 8 || true
+ mknod -m 666 dev/urandom c 1 9 || true
@@ -293 +293 @@
- mknod -m 620 dev/console c 5 1
+ mknod -m 620 dev/console c 5 1 || true
</code>

To apply the patch, use:
<code bash>
patch /usr/share/lxc/templates/lxc-alpine < lxc-alpine.patch
</code>
</spoiler>
\\

Some of the possible values for ''TEMPLATE'' are ''alpine'', ''ubuntu'', ''busybox'', ''sshd'', ''debian'', or ''fedora''; the value selects the template script that will be employed when creating the ''rootfs''. All the available template scripts can be found in ''/usr/share/lxc/templates/''.
  
</code>
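As a quick check, the template scripts available on the machine can simply be listed from that directory (the exact set depends on the installed LXC version):
<code bash>
root@vm-1:~# ls /usr/share/lxc/templates/
</code>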
  
==== 3. [LXC] Basic interaction ====
  
To see that the ''ct1'' container has been created, on ''vm-1'' run:
Start the container by issuing the following command:
<code bash>
root@vm-1:~# lxc-start -n ct1 -F
  
We can disconnect from the container's console, without stopping it, using the **CTRL+A, Q** key combination.
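To reattach to the console of a running container later, ''lxc-console'' can be used (a sketch, using the ''ct1'' container created earlier):
<code bash>
root@vm-1:~# lxc-console -n ct1
</code>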
  
==== 4. [LXC] Process hierarchy ====
  
Using the ''lxc-info'' command, find the PID of ''ct1'', which corresponds to the container's init process. Any other process running in the container will be a child of this ''init''.
<code bash>
# Install pstree
root@vm-1:~# apt update
root@vm-1:~# apt install psmisc
root@vm-1:~# pstree --ascii -s -c -p 1977
Even though the same processes can be observed from both inside and outside of the container, their PIDs are different. This is because the kernel gives each container its own PID namespace and translates process IDs between namespaces.
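One way to see this mapping directly (assuming a kernel recent enough to expose the ''NSpid'' field, Linux 4.1+) is to inspect the status of the container's init process from the host; the field lists the process's PID in every namespace it belongs to:
<code bash>
# 1977 is the host-side PID of the container's init, as reported by lxc-info
root@vm-1:~# grep NSpid /proc/1977/status
NSpid:  1977    1
</code>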
  
==== 5. [LXC] Filesystem ====
  
LXC containers have their filesystem stored in the host machine under the following path: ''/var/lib/lxc/<container-name>/rootfs/''.
  * Verify that the changes are also visible from the container.
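For example, a file created on the host under the container's ''rootfs'' should be immediately visible inside the container (a sketch, using the ''ct1'' container and an illustrative file name):
<code bash>
# On the host, create a file inside the container's root filesystem
root@vm-1:~# touch /var/lib/lxc/ct1/rootfs/root/from-host

# Inside the container, the file is visible
ct1:~# ls /root/
from-host
</code>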
  
==== 6. [LXC] Networking ====
  
By default, our LXC containers have no network connectivity to the outside world.
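As a sketch of how connectivity is usually added (the bridge name ''br0'' and the values below are illustrative, not necessarily the lab's exact setup), an LXC container can be attached to a host bridge through a ''veth'' pair declared in its configuration file, ''/var/lib/lxc/<container-name>/config'':
<code>
# Illustrative network section of an LXC container config (LXC 3.x key names)
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
</code>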
And also between the containers:
<code bash>
ct1:~# ping -c 2 172.17.0.12
</code>
  
If the Linux container cannot ping ''www.google.com'', but is able to ping ''1.1.1.1'', a valid DNS resolver was likely not configured.
  
To fix this, run ''%%echo "nameserver 1.1.1.1" > /etc/resolv.conf%%'' to use Cloudflare's DNS resolver.
</note>
  
==== 7. [LXD] Intro ====
  
LXD is a next generation **system container** manager. It offers a user experience similar to virtual machines, but uses Linux containers instead. System containers are designed to run multiple processes and services; for all practical purposes, you can think of OS containers as VMs, where you can run multiple processes, install packages, etc.
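If LXD is not already present on the virtual machine (an assumption; the lab image may ship with it preinstalled), it is typically distributed as a snap package on Ubuntu:
<code bash>
root@vm-1:~# snap install lxd
</code>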
The LXD initialization process can be started using ''lxd init'':
<code bash>
root@vm-1:~# lxd init
</code>
  
</note>
  
==== 8. [LXD] Start a system container ====
  
LXD uses multiple remote image servers. To list the default remotes we can use ''lxc remote'':
Now that we have chosen the container image, let's start a container named ''lxd-ct'':
<code bash>
root@vm-1:~# lxc launch ubuntu:f lxd-ct
</code>
  
As an alternative:
<code bash>
root@vm-1:~# lxc launch images:alpine/3.11 lxd-ct
</code>
</note>
Let's connect to ''lxd-ct'' as the preconfigured user ''ubuntu'':
<code bash>
root@vm-1:~# lxc exec lxd-ct -- sudo --login --user ubuntu
</code>
  
As an alternative:
<code bash>
root@vm-1:~# lxc exec lxd-ct -- /bin/sh
</code>
</note>
To quit the container shell, a simple **CTRL+D** is enough. As a final step, let's stop our LXD container:
<code bash>
root@vm-1:~# lxc stop lxd-ct
root@vm-1:~# lxc list
+--------+---------+------+------+------------+-----------+
|  NAME  |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
  
  
==== 9. [Docker] Basic container interaction ====
  
While system containers are designed to run multiple processes and services, **application containers** such as Docker are designed to package and run a single service. Docker also uses image servers for hosting already-built images. Check out [[https://hub.docker.com/explore/|Docker Hub]] for both official and user-uploaded images.
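As a minimal first interaction (a sketch; any small image such as ''hello-world'' works, and the node name is illustrative), an image can be pulled from Docker Hub and run:
<code bash>
# Download an image from Docker Hub, then run a container from it
root@vm-2:~# docker pull hello-world
root@vm-2:~# docker run hello-world
</code>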
</code>
  
==== 10. [Docker] Dockerfile ====
  
The goal now is to //dockerize// and run a simple Node.js application that exposes a REST API. On the ''vm-2'' node, in the ''/root/messageApp'' path you can find the sources of the application and also a ''Dockerfile''.
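A ''Dockerfile'' for such a Node.js application typically looks like the sketch below. This is illustrative only: the actual file in ''/root/messageApp'' may differ in base image, exposed port, and entry point.
<code - Dockerfile>
# Base image providing the Node.js runtime (illustrative version)
FROM node:14

WORKDIR /app

# Install dependencies first, so this layer is cached across source changes
COPY package.json ./
RUN npm install

# Copy the application sources and document the API port
COPY . .
EXPOSE 8080

# Start the application
CMD ["npm", "start"]
</code>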
While the ''message-app'' image is building or downloading (depending on what you chose), you can start to complete the next step.
  
==== 11. [BONUS] [Docker] Docker Swarm ====
  
Docker can be used in **swarm mode** to natively manage a cluster of Docker hosts, each of which can run multiple containers. You can read more about what is possible with swarm mode in the [[https://docs.docker.com/engine/swarm/|official documentation]].
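A swarm is typically bootstrapped by initializing a manager on one host and joining workers from the others. The sketch below assumes the lab's two-VM topology; the join token is printed by the ''init'' command and is left as a placeholder:
<code bash>
# On the manager node
root@vm-1:~# docker swarm init --advertise-addr 10.0.0.1

# On a worker node, using the token printed by the command above
root@vm-2:~# docker swarm join --token <token> 10.0.0.1:2377
</code>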
  
<code bash>
root@vm-1:~# docker run -d --name mongo --net=appnet mongo
root@vm-1:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
849b15ac13d6        mongo               "docker-entrypoint.s…"   21 seconds ago      Up 15 seconds       27017/tcp           mongo
</code>
CC Attribution-Share Alike 3.0 Unported