Using the student user, download and unpack the lab archive:
[student@scgc ~] $ cd scgc
[student@scgc ~/scgc] $ wget --user=<username> --ask-password https://repository.grid.pub.ro/cs/scgc/laboratoare/lab-05.zip
[student@scgc ~/scgc] $ unzip lab-05.zip
After unpacking, the scgc directory should contain an iso image and the base.qcow2 image that is going to be used throughout the exercises.
If you need graphical output, enable X11 forwarding for your ssh connections:
$ ssh -X <host>
If you are working as the root user and using X11 forwarding, please retrieve the X credentials using the commands below:
[root@scgc ~] xauth
xauth> q
[root@scgc ~] xauth add $(xauth -f /home/student/.Xauthority list | tail -1)
Computational centers use virtualization at a large scale since it gives them the necessary flexibility in managing compute resources. In order to improve performance in a virtualized environment, processors have introduced features and specific instructions that enable guest operating systems to run uninterrupted and unmodified. The software entity responsible for facilitating this type of interaction between the hardware and the guest OS is called a hypervisor.
KVM stands for Kernel-based Virtual Machine and is a kernel-level hypervisor that implements native virtualization. In the following tasks, we will learn how to interact with this virtualization solution.
First of all, we must verify that the underlying hardware has support for native virtualization. The name of the virtualization extension depends on the hardware manufacturer: Intel processors expose the vmx CPU flag (VT-x), while AMD processors expose the svm flag (AMD-V).
Let's verify the existence of these extensions on our hardware:
[student@scgc ~] $ cat /proc/cpuinfo | grep vmx
flags : ... vmx ...
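On AMD hardware, the flag to look for is svm instead of vmx, so an equivalent check would be:
[student@scgc ~] $ cat /proc/cpuinfo | grep svm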
In order to use KVM we need to install the qemu-kvm package, which contains the qemu userspace tool that actually starts the virtual machines and transmits all their parameters to the hypervisor:
[student@scgc ~] $ sudo apt-get install qemu-kvm
Before we can start a virtual machine, we need to verify that the KVM kernel module is loaded:
[student@scgc ~] $ lsmod | grep kvm
kvm_intel             143187  0
kvm                   455835  1 kvm_intel
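If the module does not show up in the output, it can usually be loaded manually; the example below assumes an Intel CPU (on AMD systems the module is kvm_amd):
[student@scgc ~] $ sudo modprobe kvm_intel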
As we can see above, besides the kvm module there is also a kvm_intel module. This means that, at the moment, this machine can only support x86 guests using KVM; for each architecture there is a different kernel module. Loading the KVM kernel module leads to the creation of the char device /dev/kvm, through which all communication is done using ioctl IO operations:
[student@scgc ~] $ ls -l /dev/kvm
crw-rw---- 1 root kvm 10, 232 Mar 20 07:18 /dev/kvm
In order to start a VM, the kvm command line tool is used. The user that starts the VM needs to be the root user or be part of the system group that owns the /dev/kvm char device (in this case, the kvm group).
From this point on, please use the root user, unless otherwise specified.
Let's create a virtual machine with 256MB RAM (-m), 2 processors (-smp) and a storage device backed by the base.qcow2 image (-hda):
root@scgc:/home/student/scgc# kvm -hda base.qcow2 -m 256 -smp 2
After issuing the previous command, a new window opens on the host system in which you can see the guest's boot log.
Let's verify the resources used for this virtual machine by inspecting the /proc filesystem. After opening a new terminal on the host system, check the KVM threads running:
root@scgc:~# ps -eLf | grep kvm
root 18199 18189 18199  2 4 18:09 pts/5 00:00:00 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 256 -smp 2
root 18199 18189 18200  0 4 18:09 pts/5 00:00:00 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 256 -smp 2
root 18199 18189 18201 81 4 18:09 pts/5 00:00:09 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 256 -smp 2
root 18199 18189 18202  0 4 18:09 pts/5 00:00:00 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 256 -smp 2
Stop the KVM machine by pressing CTRL + C in the console used to start it. Start a new machine that now has 4 processors and 512MB of RAM; a possible command is sketched below.
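Assuming the same base.qcow2 image is reused, the new machine could be started with:
root@scgc:/home/student/scgc# kvm -hda base.qcow2 -m 512 -smp 4
Then list the KVM threads again: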
root@scgc:~# ps -eLf | grep kvm
root 18564 18189 18564  2 6 18:12 pts/5 00:00:00 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 512 -smp 4
root 18564 18189 18565  0 6 18:12 pts/5 00:00:00 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 512 -smp 4
root 18564 18189 18566 84 6 18:12 pts/5 00:00:03 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 512 -smp 4
root 18564 18189 18567  0 6 18:12 pts/5 00:00:00 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 512 -smp 4
root 18564 18189 18568  0 6 18:12 pts/5 00:00:00 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 512 -smp 4
root 18564 18189 18569  0 6 18:12 pts/5 00:00:00 qemu-system-x86_64 -enable-kvm -hda base.qcow2 -m 512 -smp 4
Comparing the outputs, we can see that for each processor added to the VM a new KVM thread is started. The number of running threads is not equal to the number of processors because the remaining ones (in our case, 2 threads) are management threads.
When interacting with VMs we do not usually want to keep the console in the foreground, but rather start the VM in the background and connect to its console only when we need access to its terminal. Using the -vnc option, kvm will start a VNC server and export the VM's console through it:
root@scgc:/home/student/scgc# kvm -hda base.qcow2 -m 512 -smp 4 -vnc :1
Still, the kvm process runs in the foreground. We need to add the -daemonize parameter:
root@scgc:/home/student/scgc# kvm -hda base.qcow2 -m 512 -smp 4 -vnc :1 -daemonize
The -vnc :1 parameter starts the VNC server on display number 1. In order to find the exact TCP port that the VNC server is listening on, we need to add 5900 to the number used as the -vnc argument, in our case resulting in 5901. We can verify this using the netstat command:
root@scgc:/home/student/scgc# netstat -tlpn
The KVM machine is running in the background and we can interact with it only by connecting to its VNC-exported console on port 5901, using the vncviewer tool from the xtightvncviewer package:
root@scgc:/home/student/scgc# vncviewer localhost:5901
Stop the VM by executing poweroff from its console. Start the VM again in the background, now using 2 processors and 256MB of RAM. Find a way to stop the VM from the host system.
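One possible approach is to start the VM with -daemonize again and then stop it from the host by killing the corresponding qemu process (the pattern passed to pkill below is just an example):
root@scgc:/home/student/scgc# kvm -hda base.qcow2 -m 256 -smp 2 -vnc :1 -daemonize
root@scgc:/home/student/scgc# pgrep -af qemu-system
root@scgc:/home/student/scgc# pkill -f 'qemu-system.*base.qcow2'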
In the previous task, we started a virtual machine using an already created disk image - base.qcow2. The qcow2 extension stands for QEMU Copy-On-Write and enables us to create multiple layered images on top of a read-only base image. Using the base.qcow2 image as the base, for each VM that we want to start we will create a new qcow2 image that will host all the changes for that specific VM. Examples of how to create this layered setup are presented in the next tasks.
In the next steps, we will create a new qcow2 image in which we will install an OS from an iso image. For this task, the qemu-img tool is used (if it is not installed on the system, it is provided by the qemu-utils package).
root@scgc:/home/student/scgc# qemu-img create -f qcow2 virtualdisk.qcow2 2G
Formatting 'virtualdisk.qcow2', fmt=qcow2 size=2147483648 cluster_size=65536 lazy_refcounts=off refcount_bits=16
The first argument of the qemu-img tool is the command, in our case create. We also need to specify the type of image to create (-f qcow2), the name of the virtual disk (virtualdisk.qcow2) and its maximum size (2G).
The installation process takes as input an installation CD (in the .iso format). The kvm command enables us to add a cdrom device. Use the debian-10.3.0-amd64-netinst.iso image as the parameter of -cdrom and the previously created virtual disk attached as -hda:
root@scgc:/home/student/scgc# kvm -hda virtualdisk.qcow2 -cdrom debian-10.3.0-amd64-netinst.iso -m 256 -smp 2
The virtual machine will boot from the CD because there is no bootloader on the virtual disk. Start the installation process, following the default steps. From this point on, the installation is exactly the same as on real hardware.
Once the installation has started, stop the virtual machine using one of the aforementioned methods and delete the virtual disk image.
A usual configuration of a VM consists of 2 virtual disks: one main disk that hosts the OS and a second one that hosts the actual user data.
Create a new virtual disk in the qcow2 format, with a 1G maximum size, and attach it to a VM that uses the base.qcow2 image as its main disk. The virtual machine should have 256MB of RAM and 2 CPUs; a possible sequence of commands is sketched below. Hint: -hdb.
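For example, assuming the new disk is named disk1.qcow2 (the name is only an example):
root@scgc:/home/student/scgc# qemu-img create -f qcow2 disk1.qcow2 1G
root@scgc:/home/student/scgc# kvm -hda base.qcow2 -hdb disk1.qcow2 -m 256 -smp 2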
Notice that the size of the new qcow2 image is extremely small. This is because the qcow2 format does not pre-allocate the maximum size of data beforehand, but rather it just expands when the user writes to it.
root@scgc:/home/student/scgc# du -sh <image-name.qcow2>
After you have started the VM, check for the /dev/sdb block device, then create 2 partitions of 500MB each and format them using the ext4 filesystem. Mount both partitions and create a 100MB file in each of them; one possible sequence of commands is sketched below.
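A possible sequence inside the guest (device names, mount points and file names are just examples):
root@VM:~# fdisk /dev/sdb              # interactively create two ~500MB primary partitions, then write with 'w'
root@VM:~# mkfs.ext4 /dev/sdb1
root@VM:~# mkfs.ext4 /dev/sdb2
root@VM:~# mkdir /mnt/part1 /mnt/part2
root@VM:~# mount /dev/sdb1 /mnt/part1
root@VM:~# mount /dev/sdb2 /mnt/part2
root@VM:~# dd if=/dev/zero of=/mnt/part1/file1 bs=1M count=100
root@VM:~# dd if=/dev/zero of=/mnt/part2/file2 bs=1M count=100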
Check the size of the image from the host system.
Stop the VM and delete the qcow2 image.
Using base.qcow2, the goal is to start 2 VMs without creating 2 full copies of the disk image. For this, we will employ the copy-on-write feature of the qcow2 format.
Create a new image that will host only the differences from the base one, using the qemu-img command and its -b parameter for specifying a backing/base image:
root@scgc:/home/student/scgc# qemu-img create -f qcow2 -b base.qcow2 sda-vm1.qcow2
Formatting 'sda-vm1.qcow2', fmt=qcow2 size=8589934592 backing_file='base.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
root@scgc:/home/student/scgc# ls -lh sda-vm1.qcow2
-rw-r--r-- 1 root root 193K Mar 25 19:12 sda-vm1.qcow2
Start a new VM using the sda-vm1.qcow2 image; a possible command is sketched below.
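For example, keeping the resource parameters used before:
root@scgc:/home/student/scgc# kvm -hda sda-vm1.qcow2 -m 256 -smp 2 -vnc :1 -daemonize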
Create a 100MB file in the new VM and then find the size of both qcow2 images again. Notice that the size of the base.qcow2 image stays the same, while the sda-vm1.qcow2 image has grown.
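A possible way to do this (the file path is just an example) is to create the file inside the guest and then compare the image sizes on the host:
root@VM1:~# dd if=/dev/zero of=/root/test.img bs=1M count=100
root@scgc:/home/student/scgc# du -sh base.qcow2 sda-vm1.qcow2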
Stop the VM and delete the previously created image - sda-vm1.qcow2.
Another useful qemu-img command is convert. We may want to transform a qcow2 image into the .vmdk format (the one used by VMware virtual machines) or into .vdi (the one used by VirtualBox), without going through another painful installation process.
root@scgc:/home/student/scgc# qemu-img convert -O vdi base.qcow2 base.vdi
The -O parameter specifies the output image format. Notice that specifying the input format is not necessary, since qemu-img is capable of detecting it, as shown by qemu-img info:
root@scgc:/home/student/scgc# qemu-img info base.qcow2
image: base.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 1.7G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
root@scgc:/home/student/scgc# qemu-img info base.vdi
image: base.vdi
file format: vdi
virtual size: 8.0G (8589934592 bytes)
disk size: 1.7G
cluster_size: 1048576
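Similarly, converting to the VMware .vmdk format only requires changing the output format argument:
root@scgc:/home/student/scgc# qemu-img convert -O vmdk base.qcow2 base.vmdk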
Start by creating 2 virtual disks, sda-vm1.qcow2 and sda-vm2.qcow2, based on the base.qcow2 backing image; the commands are sketched below.
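The commands mirror the one used earlier for sda-vm1.qcow2 (on newer qemu-img versions, the backing file format may also have to be given explicitly with -F qcow2):
root@scgc:/home/student/scgc# qemu-img create -f qcow2 -b base.qcow2 sda-vm1.qcow2
root@scgc:/home/student/scgc# qemu-img create -f qcow2 -b base.qcow2 sda-vm2.qcow2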
Until now, we have created KVM machines that do not have any network connectivity. In this task, the goal is to enable the VMs to access the Internet. In order to accomplish this, the -net parameter should be provided when creating a VM:
root@scgc:/home/student/scgc# kvm -hda sda-vm1.qcow2 -m 256 -smp 2 -net nic,model=e1000,macaddr=00:11:22:33:44:55 -net tap,ifname=tap-vm1 -vnc :1 -daemonize
Using the -net nic parameter we can specify properties of the interface that will be emulated in the guest system. In this case, the guest will have an Intel e1000 network card with the 00:11:22:33:44:55 MAC address.
The second parameter given to the kvm command is -net tap and specifies the type of host system network interface that will be directly connected to the e1000 interface from the VM.
Start a second VM using the sda-vm2.qcow2 image, which will also have an e1000 interface, this time with the AA:11:22:33:44:55 MAC address. Also, tap-vm2 will be the interface visible on the host.
root@scgc:/home/student/scgc# kvm -hda sda-vm2.qcow2 -m 256 -smp 2 -net nic,model=e1000,macaddr=AA:11:22:33:44:55 -net tap,ifname=tap-vm2 -vnc :2 -daemonize
Next, change the hostnames of the KVM guests to vm1 and vm2:
root@VM:~# hostname vm1
root@VM:~# su -
root@VM1:~#

root@VM:~# hostname vm2
root@VM:~# su -
root@VM2:~#
We have successfully created 2 virtual links between the KVM guests and the host system. In order to connect both the physical machine and the guests to the same network, we will use a bridge, a virtual switch implemented in the Linux kernel.
Start by creating a bridge named br0 and connect the tap interfaces to it:
root@scgc:~# brctl addbr br0
root@scgc:~# ip link set dev br0 up
root@scgc:~# brctl addif br0 tap-vm1
root@scgc:~# brctl addif br0 tap-vm2
root@scgc:~# brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.8ac179cb859f       no              tap-vm1
                                                        tap-vm2
Configure the IP address 192.168.1.1/24 on br0 and the IP addresses 192.168.1.2/24 and 192.168.1.3/24 on the ens3 interfaces from within the guests; possible commands are sketched below. Verify the connectivity between all 3 hosts.
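Possible commands, assuming the guest interface is indeed named ens3:
root@scgc:~# ip address add 192.168.1.1/24 dev br0
root@VM1:~# ip address add 192.168.1.2/24 dev ens3
root@VM1:~# ip link set dev ens3 up
root@VM2:~# ip address add 192.168.1.3/24 dev ens3
root@VM2:~# ip link set dev ens3 up
root@VM1:~# ping -c 2 192.168.1.1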
Enable routing and NAT on the host system so that the guests will have Internet access:
root@scgc:~# echo 1 > /proc/sys/net/ipv4/ip_forward
root@scgc:~# iptables -t nat -A POSTROUTING -o ensX -j MASQUERADE
where ensX is the network interface of the host system connected to the exterior.
All that is left to do is to configure the default route and the DNS server on the VMs. Connect to the guests using ssh and set the default route to 192.168.1.1 (the physical system) and the DNS server to 8.8.8.8; possible commands are sketched below. Check by pinging www.google.ro.
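Inside each guest, one possible way to set these is:
root@VM1:~# ip route add default via 192.168.1.1
root@VM1:~# echo "nameserver 8.8.8.8" > /etc/resolv.conf
root@VM1:~# ping -c 2 www.google.ro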
Remove the tap interfaces from the bridge and then delete it:
root@scgc:~# brctl delif br0 tap-vm1
root@scgc:~# brctl delif br0 tap-vm2
root@scgc:~# ip link set dev br0 down
root@scgc:~# brctl delbr br0
Finally, stop the virtual machines.
QEMU is extremely useful when debugging the Linux kernel because it exposes detailed information about the guest system. QEMU has a monitor interface that the developer can interact with in order to inspect the VM state.
In order to export the QEMU monitor, the following parameter can be used: -monitor telnet:127.0.0.1:4445,server,nowait.
Here, the monitor console is exported through a telnet-accessible connection on localhost, port 4445. Also, by using the nowait flag, the guest will start automatically without waiting for a monitor connection.
Start a KVM machine using one of the previously created virtual disks and export its monitor.
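For example, using sda-vm1.qcow2 (any previously created disk works):
root@scgc:/home/student/scgc# kvm -hda sda-vm1.qcow2 -m 256 -smp 2 -vnc :1 -daemonize -monitor telnet:127.0.0.1:4445,server,nowait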
Now connect to it using telnet:
root@scgc:~# telnet localhost 4445
Use help to view a list of possible commands. Some of the useful commands are:
(qemu) info registers
(qemu) info network
(qemu) info block
(qemu) info cpus
Stop the virtual machine and delete all virtual disks.
The libvirt library was created in order to help users interact with virtual machines and containers more easily. This library exposes a common interface for a multitude of technologies (KVM, LXC etc.) and is commonly used in open-source cloud projects such as OpenStack, oVirt etc.
For system administrators, a command line interface called virsh was developed as a front-end for libvirt.
In order to use libvirt we must first install the following packages: libvirt-bin, virtinst, virt-viewer and virt-top. Also, if we do not intend to use libvirt as root, the user should be added to the libvirtd group.
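A possible installation sequence, using the package names listed above (on newer Debian releases, libvirt-bin may be split into libvirt-clients and libvirt-daemon-system):
root@scgc:~# apt-get install libvirt-bin virtinst virt-viewer virt-top
root@scgc:~# adduser student libvirtd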
As a first step, we must enable the networking service provided by libvirt:
root@scgc:~# virsh -c qemu:///system net-start default
In order to create a KVM guest we can use the virt-install tool:
root@scgc:/home/student/scgc# virt-install --connect qemu:///system --name VM1 --hvm --ram 256 --disk path=base.qcow2,format=qcow2 --network network=default --vnc --import
The parameters have the following meaning:
* ''--connect qemu:///system'' - connect to the local system
* ''--name VM1'' - name of the virtual machine to be created
* ''--hvm'' - use the hardware virtualization support (otherwise the VM will be emulated entirely by QEMU)
* ''--ram 256'' - size of RAM
* ''--disk path=base.qcow2,format=qcow2'' - virtual disk name and its format
* ''--network network=default'' - add a network interface with default properties
* ''--vnc'' - export the VNC console
* ''--import'' - use the ''base.qcow2'' image and do not create a new one based on it
After running the above-mentioned command, a configuration file in XML format was created by libvirt at the following path: /etc/libvirt/qemu/VM1.xml.
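The same configuration can also be inspected through libvirt itself, for example with:
root@scgc:~# virsh -c qemu:///system dumpxml VM1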
In order to control the VM we will use the virsh console. Connect to the local daemon and list the running VMs:
root@scgc:~# virsh
Welcome to virsh, the virtualization interactive terminal.

Type: 'help' for help with commands
      'quit' to quit

virsh # list
 Id Name                 State
----------------------------------------------------
 3  VM1                  running
Notice the state of the virtual machine and its ID (in this case, 3). The following operations will be issued using this ID.
Display the VNC port of the VM1 guest using the vncdisplay command followed by the VM ID:
virsh # vncdisplay 3
127.0.0.1:0
Open another terminal on the host system and connect to the VNC display using vncviewer:
root@scgc:~# vncviewer :0
Note that the virsh utility may open a viewer for the virtual machine when starting it. You must close that viewer before being able to connect to the virtual machine.
Close the VNC console. The guest will continue to run in background.
Now shutdown the guest VM:
virsh # shutdown 3
Domain 3 is being shutdown

virsh # list
 Id Name                 State
----------------------------------------------------
 3  VM1                  running

virsh # list
 Id Name                 State
----------------------------------------------------
To show all guest VMs regardless of their state, issue the list command with the --all option:
virsh # list --all
 Id Name                 State
----------------------------------------------------
 -  VM1                  shut off
Start the VM again:
virsh # start VM1
Domain VM1 started

virsh # list
 Id Name                 State
----------------------------------------------------
 4  VM1                  running
Notice that the IDs are allocated when the guests are started.
Execute the destroy command on the guest ID. Notice that the VM has been shut down, but it has still not been deleted.
virsh # destroy 4
Domain 4 destroyed

virsh # list --all
 Id Name                 State
----------------------------------------------------
 -  VM1                  shut off
Delete the previously defined VM using the undefine command:
virsh # undefine VM1
Domain VM1 has been undefined

virsh # list --all
 Id Name                 State
----------------------------------
If the VM is still running when the undefine command is issued, it will not be destroyed immediately; it will disappear completely once it is stopped.
Create a new system group named kvm and add the user student to this newly created group. Configure the system so that all the users in this group can start and manage KVM machines; one possible approach is sketched below.
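One possible approach (only a sketch; the kvm group may already exist on some systems, in which case the groupadd step is not needed):
root@scgc:~# groupadd --system kvm
root@scgc:~# usermod -aG kvm student
root@scgc:~# chown root:kvm /dev/kvm
root@scgc:~# chmod 660 /dev/kvm
For the /dev/kvm permissions to persist across reboots, a udev rule can be added (for example under /etc/udev/rules.d/).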