====== Lab 04 - Docker ======
  
===== Objectives =====
  
  * Understand what a software container is
  * Get familiar with the Docker environment
  * Learn how to build, publish, and deploy containers
  
===== Contents =====

===== Introduction =====
  
Docker is a software container platform used for packaging and running applications both locally and on cloud systems, eliminating problems such as "it works on my computer". Docker can therefore be seen as an environment that allows containers to run on any platform, and it is based on **//containerd//**. As a benefit, it provides faster compilation, testing, deployment, updating, and error recovery than the standard application deployment mode.
  
Docker provides a uniform development and production environment, where the compatibility of applications with the operating system is no longer a problem, and there are no more conflicts between the library/package versions on the host system. Containers are ephemeral, so an error or failure in one of them does not cause the entire system to crash. They help ensure strict consistency between the development environment and the production environment.
  
Docker also offers maximum flexibility. If, in a large project, we need new software tools because certain requirements change, we can pack them in containers and then link them very easily to the system. If we need to replicate the infrastructure in another environment, we can reuse Docker images saved in a registry (a kind of container repository). If we need to update certain components, Docker allows us to rebuild the images, so the latest container versions can always be deployed.
  
<note tip>
Docker is a great work environment. As a matter of fact, most IDEs such as Visual Studio, VSCode, or IntelliJ have built-in support for debugging in Docker either by default or as a plugin. The reason why the most used IDEs offer this support is that Docker images represent a replicable and consistent work environment identical to the production one.
</note>

===== Images and containers =====

Docker containers are based on **//images//**, which are standalone lightweight executable packages that contain everything needed to run software applications, including code, runtime, libraries, environment variables, and configuration files. Images vary in size, do not contain full versions of operating systems, and are cached locally or in a registry. A Docker image has a **//union//** file system, where each change to the file system or metadata is considered a layer, with several such layers forming an image. Each layer is uniquely identified (by a hash) and stored only once.
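
To make these layers visible, we can ask Docker for the history of an image we have locally (a quick sketch; it assumes the **//alpine//** image used later in this lab has already been pulled, and the exact layer IDs and sizes will differ on your system):

<code bash>
# each row is one layer: the instruction that created it and its size
$ docker image history alpine
</code>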

A **//container//** is an instance of an image, that is, what the image becomes in memory when it is executed. It runs completely isolated from the host environment, accessing its files and ports only if it is configured to do so. Containers run native applications on the core of the host machine, performing better than virtual machines, which have access to the host's resources through a hypervisor. Each container runs in a discrete process, requiring as much memory as any other executable. From a file system standpoint, a container is an additional read/write layer over the image's layers.
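
A quick way to see this read/write layer in action is the **//docker diff//** command, which lists only the files added (A), changed (C), or deleted (D) inside a container compared to its image (a sketch; it assumes a container is already running, and <ID> stands for its real ID, obtained as shown later in this lab):

<code bash>
$ docker exec <ID> touch /tmp/test   # create a file inside the running container
$ docker diff <ID>                   # reports only the container's writable layer, not the image
</code>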

{{:ii:labs:s2:vm_2x.png?direct&250|}} {{:ii:labs:s2:container_2x.png?direct&250|}}

In the image above (taken from [[https://docs.docker.com/get-started/|the official Docker documentation]]), virtual machines run "guest" operating systems, which consume a lot of resources. The resulting image thus takes up a lot of space, containing operating system settings, dependencies, security patches, etc. Instead, containers can share the same kernel, and the only data that must be in a container image is the executable and the packages it depends on, which do not need to be installed on the host system at all. While a virtual machine abstracts hardware resources, a Docker container is a process that abstracts the base on which applications run within an operating system, and isolates operating system software resources (memory, network and file access, etc.).

===== Docker's Architecture =====

Docker has a client-server architecture, as shown in the image below (taken from [[https://docs.docker.com/get-started/|the official Docker documentation]]). The Docker client communicates via a REST API (over UNIX sockets or over a network interface) with the Docker daemon (server), which is responsible for creating, running, and distributing Docker containers. The client and daemon can run on the same system or on different systems. A Docker registry is used to store images.

{{:ii:labs:s2:architecture.png?direct&550|}}
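
This separation between client and daemon can be observed directly with the **//docker version//** command, whose output is split into a //Client// section (the CLI) and a //Server// section (the daemon); the exact versions shown will, of course, depend on your installation:

<code bash>
$ docker version   # prints a "Client:" block (the CLI) and a "Server:" block (the daemon)
</code>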
  
===== Installation =====
  
Docker is available in two versions: Community Edition (CE) and Enterprise Edition (EE). Docker CE is useful for developers and small teams who want to build container-based applications. On the other hand, Docker EE was created for enterprise development and IT teams that write and run critical business applications on a large scale. The Docker CE version is free, while EE is available with a subscription. In this lab, we will use Docker Community Edition. Docker is available on desktop (Windows, macOS), cloud (Amazon Web Services, Microsoft Azure), and server (CentOS, Fedora, Ubuntu, Windows Server 2016, etc.) platforms.
  
==== Linux ====
  
The commands below are for Ubuntu. For other Linux variants (Debian, CentOS, Fedora), you can find more information on the official Docker documentation page.
  
To install Docker CE, you need one of the following versions of Ubuntu: Ubuntu Mantic 23.10, Ubuntu Jammy 22.04 (LTS), Ubuntu Focal 20.04 (LTS). Docker CE has support for the following architectures: **//x86_64//**, **//amd64//**, **//armhf//**, **//arm64//**, **//s390x//** (IBM Z), and **//ppc64le (ppc64el)//**.

The recommended Docker CE installation involves using the official repository, because all the subsequent updates are then installed automatically. When installing Docker CE on a machine for the first time, it is necessary to initialise the repository:
  
<code bash>
$ sudo apt-get update
</code>
  
<code bash>
$ sudo apt-get install ca-certificates curl gnupg lsb-release
</code>
  
<code bash>
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
</code>
  
<code bash>
$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
</code>
  
Next, Docker CE can be installed:
  
<code bash>
$ sudo apt-get update
</code>
  
<code bash>
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
</code>
  
<note tip>
An easier method of installing Docker CE on Linux is to use [[https://get.docker.com|the official script]].
</note>
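
Optionally, after the installation, you can add your user to the **//docker//** group so that Docker commands can be run without **//sudo//** (this is the post-installation step described in the official documentation; you need to log out and log back in for the change to take effect):

<code bash>
$ sudo usermod -aG docker $USER
$ docker run hello-world   # after re-login, this should work without sudo
</code>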
  
==== Windows and macOS ====
  
Because Docker did not initially have native support for Windows and macOS, [[https://docs.docker.com/toolbox/overview/|Docker Toolbox]] was introduced, which can launch a virtualised Docker environment (more specifically, it uses a VirtualBox machine as the basis of the Docker environment). Recently, Docker Toolbox was marked as "legacy" and was replaced by [[https://docs.docker.com/docker-for-mac/|Docker Desktop for Mac]] and [[https://docs.docker.com/docker-for-windows/|Docker Desktop for Windows]], which offer similar features with better performance. Furthermore, Windows Server 2016 and Windows 10 now support native Docker for the **//x86_64//** architecture.
  
<note tip>
If you do not want to install Docker on your machine, you can use the [[https://labs.play-with-docker.com|Play with Docker]] virtual environment.
</note>
  
===== Testing the installation =====
  
To check if the installation was successful, we can run a simple Hello World container:
  
<code bash>
$ docker container run hello-world
  
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
Digest: sha256:d000bc569937abbe195e20322a0bde6b2922d805332fd6d8a68b19f524b7d21d
Status: Downloaded newer image for hello-world:latest
  
Hello from Docker!
This message shows that your installation appears to be working correctly.
  
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
  
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
  
Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/
  
For more examples and ideas, visit:
 https://docs.docker.com/get-started/
</code>
  
The execution output above shows us the steps Docker takes in the background when running this container. Specifically, if the image we want to run in a container is not available locally, it is pulled from the registry, and then a new container is created based on that image, where the desired application runs.
  
===== Running a container =====
  
We have seen above how we can run a Hello World in a simple container, but we can run containers from much more complex images. We can create our own image (as we will see later) or download an image from a public registry, such as [[https://hub.docker.com|Docker Hub]]. It contains public images, ranging from operating systems (Ubuntu, Alpine, Amazon Linux, etc.) to programming languages (Java, Ruby, Perl, Python, etc.), web servers (NGINX, Apache), and more.
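
Docker Hub can also be queried directly from the command line with **//docker search//** (shown here for Alpine as an example; the exact results and their order may differ over time):

<code bash>
$ docker search alpine   # lists public images matching "alpine", with description, stars, and official status
</code>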
  
For this lab, we will use Alpine Linux, which is a lightweight Linux distribution (with a size of barely 5 MB). The first step is to download the image from a Docker registry (in our case, Docker Hub):
  
<code bash>
$ docker image pull alpine
</code>
  
To see all the images present on our system, we can run the following command:

<code bash>
$ docker image ls
 
REPOSITORY      TAG         IMAGE ID        CREATED         SIZE
alpine          latest      05455a08881e    3 weeks ago     7.38MB
</code>
  
It can be seen above that the downloaded image has the name **//alpine//** and the tag **//latest//**. An image tag is a label that generally designates the version of the image, and **//latest//** is an alias for the latest version, set automatically when no tag is explicitly specified.
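
If we need a specific version instead of the **//latest//** alias, we can state the tag explicitly when pulling (a sketch; **//3.19//** is just one of the tags published for Alpine on Docker Hub and may be superseded in the future):

<code bash>
$ docker image pull alpine:3.19   # pull an explicitly tagged version
$ docker image ls alpine          # both tags now appear in the local cache
</code>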
  
Once the image is downloaded, we can run it in a container. One way to do this is by specifying a command to run inside the container (in our case, on the Alpine Linux operating system):

<code bash>
$ docker container run alpine ls -l
 
total 56
drwxr-xr-x    2 root     root          4096 Jan 26 17:53 bin
drwxr-xr-x    5 root     root           340 Feb 23 10:48 dev
drwxr-xr-x    1 root     root          4096 Feb 23 10:48 etc
drwxr-xr-x    2 root     root          4096 Jan 26 17:53 home
[...]
</code>
  
<note tip>In the command given above, we can skip the **//container//** keyword altogether and only write **//docker run alpine ls -l//**.</note>
  
In the previous example, Docker finds the specified image, creates a container from it, starts it, and then runs the command inside it. If we want interactive access inside the container, we can use the following command:
  
<code bash>
$ docker run -it alpine
</code>
  
If we want to see which containers are currently running, we can use the **//docker container ls//** command. If we want to see the list of all the containers we ran, we also use the **//-a//** flag:
  
<code bash>
$ docker container ls -a
 
CONTAINER ID        IMAGE          COMMAND        CREATED             STATUS                         NAMES
96e583b80c13        alpine         "/bin/sh"      3 seconds ago       Exited (0) 1 second ago        fervent_ishizaka
d3f65a167db3        alpine         "ls -l"        42 seconds ago      Exited (0) 41 seconds ago      strange_ramanujan
</code>
  
To run an image in a background container, we can use the **//-d//** flag. At startup, the new container's ID will be displayed, which we can later use to attach to the container, stop it, delete it, etc.:
  
<code bash>
$ docker run -d -it alpine
 
7919fb6e13ab9497fa12fa455362cb949448be207ad08e08e24a675a32c12919
</code>
  
<code bash>
$ docker container ls
 
CONTAINER ID   IMAGE     COMMAND     CREATED          STATUS         PORTS     NAMES
7919fb6e13ab   alpine    "/bin/sh"   10 seconds ago   Up 9 seconds             elastic_knuth
</code>

<code bash>
$ docker attach 7919fb6e13ab
 
/ # exit
</code>

<code bash>
$ docker stop 7919fb6e13ab
 
7919fb6e13ab
</code>

<code bash>
$ docker container ls

CONTAINER ID      IMAGE        COMMAND        CREATED             STATUS            PORTS     NAMES
</code>

<code bash>
$ docker rm 7919fb6e13ab
 
7919fb6e13ab
</code>

<code bash>
$ docker container ls -a

CONTAINER ID      IMAGE        COMMAND        CREATED             STATUS            PORTS     NAMES
</code>

===== Creating an image =====

So far, we only ran containers based on existing images, but now we will see how we can create and publish our own application. During the Docker labs, we will go through an entire cloud application hierarchy. For this particular lab, we will start from the bottom level of this hierarchy, which is represented by the **//containers//**. Above this level, there are the **//services//**, which define how the containers behave in production, and at the highest level there is the **//service stack//**, which defines the interactions between services. The sources for this example can be found in the **//flask_app//** folder of the {{:ii:labs:s2:04:lab4_docker.zip|lab archive}}.

In this example, we will create a web application using Flask (as studied in [[https://ocw.cs.pub.ro/courses/ii/labs/s2/02|lab 2]] and [[https://ocw.cs.pub.ro/courses/ii/labs/s2/03|lab 3]]), which displays a random picture each time its main page is accessed. The application's code can be found in a file called **//app.py//**, which looks like this:

<file python app.py>
from flask import Flask, render_template
import random
 
app = Flask(__name__)
 
images = [
    "https://i.pinimg.com/736x/8f/2a/30/8f2a30993c405b083ba8820ae6803b93.jpg",
    "https://images.g2crowd.com/uploads/product/image/large_detail/large_detail_1528237089/microsoft-azure-biztalk-services.jpg",
    "https://aptira.com/wp-content/uploads/2016/09/kubernetes_logo.png",
    "https://www.opsview.com/sites/default/files/docker.png"
]
 
@app.route('/')
def index():
    url = random.choice(images)
    return render_template('index.html', url=url)
 
if __name__ == "__main__":
    app.run(host="0.0.0.0")
</file>

As you can see in the Python file (and as you learned in previous labs), the web page is based on a template found in the **//index.html//** file, which should be located in the **//templates//** folder:

<file html index.html>
<html>
  <head>
    <style type="text/css">
      body {
        background: black;
        color: white;
      }
      div.container {
        max-width: 500px;
        margin: 100px auto;
        border: 20px solid white;
        padding: 10px;
        text-align: center;
      }
      h4 {
        text-transform: uppercase;
      }
    </style>
  </head>
  <body>
    <div class="container">
      <h4>Cloud image of the day</h4>
 
      <img src="{{url}}" />
    </div>
  </body>
</html>
</file>

We also need a **//requirements.txt//** file, where we specify the Python packages to be installed in the image we are creating:

<file txt requirements.txt>
Flask>=2.2.2
</file>

An image is defined by a file called **//Dockerfile//**, which specifies what happens inside the container we want to create, where access to resources (such as network interfaces or hard disks) is virtualised and isolated from the rest of the system. With this file, we can specify port mappings, files that will be copied to the container when it is run, and so on. A Dockerfile is somewhat similar to a Makefile, and each line in it describes a layer in the image. Once we have defined a correct Dockerfile, our application will always behave identically, no matter in what environment it is run. An example of a Dockerfile for our application is as follows:

<file txt Dockerfile>
FROM alpine:edge

RUN apk add --update py3-pip
RUN python3 -m venv /venv

ENV PATH="/venv/bin:$PATH"

COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r /usr/src/app/requirements.txt

COPY app.py /usr/src/app/
COPY templates/index.html /usr/src/app/templates/

EXPOSE 5000

CMD ["python3", "/usr/src/app/app.py"]
</file>

In the above file, we have the following commands:

  * **FROM** - specifies an image which our new image is based on (in our case, we start from a basic Alpine image found on Docker Hub, where we will run our Flask app)
  * **COPY** - copies files from a local directory to the image we are creating
  * **RUN** - runs a command (in the example above, we first install the **//pip//** Python package installer, then we install the Python packages listed in the **//requirements.txt//** file, i.e., Flask)
  * **ENV** - sets an environment variable
  * **EXPOSE** - exposes a port outside the container
  * **CMD** - specifies a command that will be run when the container is started (in this case, we run **//app.py//** with Python).

<note important>
When setting a base image using FROM, it is recommended that we explicitly specify the version of the image instead of using the **//latest//** tag, as the latest version may no longer be compatible with our components in the future.
</note>

<note tip>
The EXPOSE statement does not actually expose the port given as a parameter. Instead, it functions as a kind of documentation between the developer who builds the image and the developer who runs the container, in terms of which ports to publish. To publish a port when running a container, we need to use the **//-p//** flag of the **//docker run//** command (as will be seen below).
</note>

Finally, we end up with the following file structure:

<code bash>
$ tree
.
├── app.py
├── requirements.txt
└── templates
    └── index.html
</code>

To build an image for our Flask application, we run the command below in the current directory (the **//-t//** flag is used to tag the image):

<code bash>
$ docker build -t testapp .
 
[+] Building 12.6s (12/12) FINISHED
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 577B
 => [internal] load metadata for docker.io/library/alpine:edge
 => [1/7] FROM docker.io/library/alpine:edge@sha256:9f867[...]
 => => resolve docker.io/library/alpine:edge@sha256:9f867[...]
 => => sha256:91988[...] 1.47kB / 1.47kB
 => => sha256:dccce[...] 3.41MB / 3.41MB
 => => sha256:9f867[...]a5cc0 1.85kB / 1.85kB
 => => sha256:60eda[...] 528B / 528B
 => => extracting sha256:dccce[...]
 => [internal] load build context
 => => transferring context: 2.01kB
 => [2/7] RUN apk add --update py3-pip
 => [3/7] RUN python3 -m venv /venv
 => [4/7] COPY requirements.txt /usr/src/app/
 => [5/7] RUN pip install --no-cache-dir -r /usr/src/app/requirements.txt
 => [6/7] COPY app.py /usr/src/app/
 => [7/7] COPY templates/index.html /usr/src/app/templates/
 => exporting to image
 => => exporting layers
 => => writing image sha256:c82b4[...]
 => => naming to docker.io/library/testapp
 [...]
</code>

To check if the image was created successfully, we use the following command:

<code bash>
$ docker images
 
REPOSITORY    TAG       IMAGE ID       CREATED         SIZE
testapp       latest    c82b48d0b86e   9 minutes ago   101MB
</code>

We can get more details about the new image using the following command:

<code bash>
$ docker image inspect testapp
 
[
    {
        "Id": "sha256:c82b48d0b86e9a4113495f3f2d97d7b336d6f662ce38105cf1be8af6f3d8ba44",
        "RepoTags": [
            "testapp:latest"
        ],
        "RepoDigests": [],
        "Parent": "",
        "Comment": "buildkit.dockerfile.v0",
        "Created": "2024-02-23T10:54:09.271834361Z",
        "Container": "",
        [...]
        "DockerVersion": "",
        "Author": "",
        "Config": {
            [...]
            "ExposedPorts": {
                "5000/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "python3",
                "/usr/src/app/app.py"
            ],
            [...]
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 101076711,
        [...]
    }
]
</code>

The image can now be found in the local Docker image registry and can be run with the following command:

<code bash>
$ docker container run -p 8888:5000 testapp
 
 * Serving Flask app 'app'
 * Debug mode: off
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://172.17.0.2:5000
[...]
</code>

<note important>In the command given above, it can be seen that we followed the documentation and published port 5000 using the **//-p//** flag, as discussed previously. However, in this particular case, we chose to map the container's port 5000 to our host's port 8888.</note>

By accessing the address [[http://127.0.0.1:8888]] from a web browser, we will see the web application we have created.
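
If we are not sure how a running container's ports are mapped, we can check with the **//docker port//** command (assuming the container started above is still running and <ID> is its ID; the output should show something similar to the mapping below):

<code bash>
$ docker port <ID>          # lists the container's published ports

5000/tcp -> 0.0.0.0:8888
</code>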

===== Publishing an image to a registry =====

Previously, we created a Docker image that we ran locally in a container. In order to be able to use the image created in any other system, it is necessary to publish it, i.e., to upload it to a registry in order to be able to deploy containers based on it in production. A registry is a collection of repositories, and a repository is a collection of images (similar to GitHub, except that, in a Docker registry, the code is already built). There are many registries for Docker images (Docker Hub, Gitlab Registry, etc.), but in this lab we will use the public Docker registry, since it is free and pre-configured.

We will start from the previous application. The first step in publishing an image is to create an account at [[https://hub.docker.com]]. Next, logging in from the local machine is done by the following command:

<code bash>
$ docker login
 
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: 
Password: 
Login Succeeded
</code>

We can specify the username and password directly in the command, so the above command can be written as (where the default server, if we choose to omit that particular parameter, is Docker Hub):

<code bash>
$ docker login [-u <USER> -p <PASSWORD>] [SERVER]
</code>

Before publishing the image to the registry, it must be tagged with the **//username/repository:tag//** format. The tag is optional, but it is useful because it denotes the version of a Docker image. We use the following command to tag an image (in the example below, the user is **//mobylab//**, the repository is **//ia1//**, and the tag is **//example//**):

<code bash>
$ docker tag testapp mobylab/ia1:example
</code>

<code bash>
$ docker images

REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
testapp       latest    c82b48d0b86e   16 minutes ago   101MB
mobylab/ia1   example   c82b48d0b86e   16 minutes ago   101MB
alpine        latest    05455a08881e   3 weeks ago      7.38MB
hello-world   latest    d2c94e258dcb   9 months ago     13.3kB
</code>

Once the image is tagged, it can be published to the registry:

<code bash>
$ docker push mobylab/ia1:example
</code>

From this point on, the image will be visible on [[https://hub.docker.com]], where it can be pulled and run on any host, server or cloud system:

<code bash>
$ docker run -p 8888:5000 mobylab/ia1:example

Unable to find image 'mobylab/ia1:example' locally
example: Pulling from mobylab/ia1
dcccee43ad5d: Pull complete
[...]
dc5f08788709: Pull complete
Digest: sha256:72824[...]
Status: Downloaded newer image for mobylab/ia1:example
 * Serving Flask app 'app'
 * Debug mode: off
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://172.17.0.2:5000
[...]
</code>

===== Useful commands =====

<note tip>
We encourage you to read further from [[https://docs.docker.com/reference/|the official Docker site]], as the commands shown here are the bare minimum you need in order to be able to work with Docker. In reality, there are many more commands, each with a variety of arguments.
</note>

==== System ====

<code bash>
$ docker <COMMAND> --help  # shows complete information about a command
$ docker version           # shows Docker's version and other minor details
$ docker info              # shows complete Docker information
$ docker system prune      # clears space by deleting unused components
</code>

==== Image interaction ====

<code bash>
$ docker image pull <IMAGE>       # downloads an image to the local cache
$ docker build -t <TAG> .         # builds an image from a Dockerfile located in the current folder

$ docker image ls                 # lists the images in the local cache
$ docker images                   # lists the images in the local cache

$ docker image rm <IMAGE>         # deletes an image from the local cache
$ docker rmi <IMAGE>              # deletes an image from the local cache

$ docker image inspect <IMAGE>    # shows information about an image
</code>

==== Container interaction ====

<code bash>
$ docker container run <IMAGE> [COMMAND]    # runs a container and optionally sends it a starting command
$ docker container run -it <IMAGE>          # runs a container in interactive mode
$ docker container run -d <IMAGE>           # runs a container in the background (as a daemon)

$ docker exec -it <ID> <COMMAND>            # starts a terminal in a running container and executes a command

$ docker container ls                       # lists all running containers
$ docker container ls -a                    # lists all containers that were run or are running
$ docker container inspect <ID>             # shows information about a container

$ docker attach <ID>                        # attaches to a container
$ docker stop <ID>                          # stops a container
$ docker restart <ID>                       # restarts a container
$ docker rm <ID>                            # deletes a container

$ docker ps                                 # lists running containers
$ docker logs <ID>                          # shows logs from a container
$ docker top <ID>                           # shows the processes running in a container
</code>

<note tip>
The difference between the **//exec//** and **//attach//** commands (which might appear similar) is that **//attach//** associates a terminal to a container, which means that, when we exit that terminal, we also exit the container. This is not the case for the **//exec//** command.
</note>
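
As a short illustration of the difference (assuming a container with ID <ID> is running an interactive shell, as in the Alpine example above):

<code bash>
$ docker exec -it <ID> sh   # opens a new shell in the container; "exit" closes only this shell
$ docker attach <ID>        # attaches to the container's main process; "exit" here stops the container
</code>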

==== Working with a registry ====

<code bash>
$ docker login [-u <USER> -p <PASSWORD>] [SERVER]   # logs a user into a registry
$ docker tag <IMAGE> <USER/REPOSITORY:TAG>          # tags an image for registry push
$ docker push <USER/REPOSITORY:TAG>                 # pushes an image to a registry
</code>

====== Tasks ======

{{namespace>:ii:labs:s2:04:tasks&nofooter&noeditbutton}}