In this lab, we will deploy a minimal OpenStack cloud, comprising the basic services. After installation, we will configure security by creating users, tenants and roles.
We will install the following services: Keystone (Identity), Glance (Image) and Nova (Compute), with Horizon (Dashboard) left as a final exercise.
Everything will be installed in an all-in-one VM. Also, this VM will serve as a compute node (hosting the hypervisor on which instances will run).
In the faculty's OpenStack cloud, launch an instance with the following parameters:
Connect to the VM using the username ubuntu.
In /etc/hosts, map the IP address of the instance to newton:
$ cat /etc/hosts
127.0.0.1 localhost
<IP ADDRESS> newton
[...]
First, do a package upgrade:
$ sudo apt update
$ sudo apt upgrade
Add the repository with the Ubuntu cloud packages for OpenStack version Newton. After that, upgrade the packages again:
$ sudo apt install software-properties-common
$ sudo add-apt-repository cloud-archive:newton
$ sudo apt update
$ sudo apt dist-upgrade
Do not forget the apt update command! If you do not enter it, an incorrect version of OpenStack will be installed!
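To confirm that the cloud archive is now the package source, an optional check is to look at a candidate package before installing it (the repository shown should be the Newton Ubuntu Cloud Archive; exact version strings will vary):
$ apt-cache policy python-openstackclient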
Install the OpenStack client package:
$ sudo apt install python-openstackclient
Reboot:
$ sudo reboot
For OpenStack to function, some additional services are required.
RabbitMQ is a message queue service that implements the AMQP protocol. It is used by all OpenStack services for asynchronous communication.
Install the package:
$ sudo apt install rabbitmq-server
Create the credentials for connecting to the service (e.g. username openstack, password student):
$ sudo rabbitmqctl add_user openstack student
Grant all permissions to the created user:
$ sudo rabbitmqctl set_permissions openstack ".*" ".*" ".*"
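Optionally, verify that the user exists and that the three permission patterns (configure, write, read) were applied on the default vhost:
$ sudo rabbitmqctl list_users
$ sudo rabbitmqctl list_permissions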
Memcached is a memory caching service, used by OpenStack for caching authentication tokens.
Install the packages:
$ sudo apt install memcached python-memcache
Edit the configuration file (/etc/memcached.conf) and modify the line with -l 127.0.0.1 so that the service will listen on all interfaces:
-l 0.0.0.0
Restart the service:
$ sudo service memcached restart
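As a quick sanity check, memcached should now be listening on 0.0.0.0:11211:
$ sudo ss -tlnp | grep 11211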
All OpenStack services store configuration and management data in an SQL database. For production environments, a MySQL or PostgreSQL server is recommended. To conserve resources, we will use SQLite, which requires no installation.
Install the Keystone package:
$ sudo apt install keystone
Edit /etc/keystone/keystone.conf, configuring the Fernet token provider in the [token] section:
[token]
...
provider = fernet
Populate the Keystone database:
$ sudo su -s /bin/sh -c "keystone-manage db_sync" keystone
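Optionally, you can inspect the freshly populated database; this check assumes the sqlite3 CLI is installed and that the Ubuntu keystone package keeps its SQLite database at the default path:
$ sudo apt install sqlite3
$ sudo sqlite3 /var/lib/keystone/keystone.db ".tables"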
Initialize Fernet key repositories:
$ sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
$ sudo keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service:
$ sudo keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://newton:35357/v3/ \
  --bootstrap-internal-url http://newton:5000/v3/ \
  --bootstrap-public-url http://newton:5000/v3/ \
  --bootstrap-region-id RegionOne
Restart the Apache2 service (Keystone runs inside Apache2):
$ sudo service apache2 restart
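Keystone should now answer HTTP requests; an unauthenticated fetch of the version document is a quick test:
$ curl http://newton:5000/v3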
This OpenStack RC file will be used for authenticating as admin:
$ cat admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://newton:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Source the RC file and create the service project:
$ source admin-openrc
$ openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | b30d814c79cc4991960cf5cca17fafcb |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
+-------------+----------------------------------+
Create the OpenStack internal glance user, under which the glance service will run. Choose a password (e.g. glance):
$ openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 7086eaefacf94b9fa1e2ee9d329a73c3 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Grant this user the admin role:
$ openstack role add --project service --user glance admin
The configuration files below assume that the chosen password was glance.
Register the service in the catalog:
$ openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 2faa251121e74a31b0e133e3250a8ace |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
Create the admin, internal and public endpoints:
$ openstack endpoint create --region RegionOne image public http://newton:9292
$ openstack endpoint create --region RegionOne image internal http://newton:9292
$ openstack endpoint create --region RegionOne image admin http://newton:9292
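Optionally, verify that the three endpoints were registered:
$ openstack endpoint list --service image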
Install the Glance package:
$ sudo apt install glance
Edit /etc/glance/glance-api.conf, adding or editing the following lines.
In the [database] section, point the connection at SQLite:
[database]
connection = sqlite:////var/lib/glance/glance.db
Configure the Keystone credentials to match the glance user created earlier:
[keystone_authtoken]
auth_uri = http://newton:5000
auth_url = http://newton:35357
memcached_servers = newton:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
...
flavor = keystone
The backend store is file:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit /etc/glance/glance-registry.conf in a similar way:
[database]
connection = sqlite:////var/lib/glance/glance.db

[keystone_authtoken]
auth_uri = http://newton:5000
auth_url = http://newton:35357
memcached_servers = newton:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
...
flavor = keystone
Populate the Glance database:
$ sudo su -s /bin/sh -c "glance-manage db_sync" glance
Restart the services:
$ sudo service glance-registry restart
$ sudo service glance-api restart
Let's download a CirrOS image (a minimal Linux distribution that can run with as little as 64 MB of RAM):
$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
Upload the image to Glance:
$ openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Verify it was uploaded correctly:
$ openstack image list
Create the OpenStack internal nova user, under which the nova service will run. Choose a password (e.g. nova):
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 7d517982cc19409790397c69e0066326 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Grant this user the admin role:
$ openstack role add --project service --user nova admin
The configuration file below assumes that the chosen password was nova.
Register the service in the catalog:
$ openstack service create --name nova --description "OpenStack Compute" compute
Create the admin, internal and public endpoints:
$ openstack endpoint create --region RegionOne compute public http://newton:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne compute internal http://newton:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne compute admin http://newton:8774/v2.1/%\(tenant_id\)s
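As before, you can optionally verify the endpoints:
$ openstack endpoint list --service compute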
Install the packages:
$ sudo apt install nova-api nova-conductor nova-scheduler nova-network
Multiple packages are needed because Nova is a complex service with multiple components.
Edit /etc/nova/nova.conf by adding or editing the following lines:
[DEFAULT]
transport_url = rabbit://openstack:student@127.0.0.1
auth_strategy = keystone
use_neutron = False
my_ip = <IP ADDRESS OF YOUR VM>

[database]
connection = sqlite:////var/lib/nova/nova.sqlite

[api_database]
connection = sqlite:////var/lib/nova/nova_api.sqlite

[keystone_authtoken]
auth_uri = http://newton:5000
auth_url = http://newton:35357
memcached_servers = newton:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[glance]
api_servers = http://newton:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Populate the Nova database:
$ sudo su -s /bin/sh -c "nova-manage api_db sync" nova
$ sudo su -s /bin/sh -c "nova-manage db sync" nova
Restart the services:
$ sudo service nova-api restart
$ sudo service nova-scheduler restart
$ sudo service nova-conductor restart
$ sudo service nova-network restart
Because we cannot create an additional VM, we will configure the Nova-Compute service and the hypervisor on the same VM (thus making a hybrid Controller + Compute node).
Install the package:
$ sudo apt install nova-compute
Edit /etc/nova/nova-compute.conf so that the QEMU hypervisor will be used:
[libvirt]
...
virt_type = qemu
Restart the service:
$ sudo service nova-compute restart
Make sure that the VM is recognized as a compute node:
$ openstack compute service list
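Optionally, also check that the local hypervisor was registered:
$ openstack hypervisor list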
First, create a flavor with 1 vCPU, 64 MB RAM and 1 GB of disk:
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
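Verify that the new flavor is visible:
$ openstack flavor list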
Boot an instance with the m1.nano flavor and the cirros image. The instance will not be connected to a network:
$ nova boot --flavor m1.nano --image cirros --nic none my-vm
Use ps to verify that the QEMU instance is actually running:
$ ps -ef | grep qemu
libvirt+  7741     1  6 21:37 ?        00:01:03 /usr/bin/qemu-system-x86_64 -name instance-00000006 -S -machine pc-i440fx-xenial,accel=tcg,usb=off -cpu Haswell-noTSX,+abm,+pdpe1gb,+hypervisor,+rdrand,+f16c,+osxsave,+vmx,+ss,+vme -m 64 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 1f8ad333-8d06-4609-b171-dfba1da6e790 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=14.1.0,serial=43736bb5-0fa4-48f4-86c3-3271c47f8b55,uuid=1f8ad333-8d06-4609-b171-dfba1da6e790,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-00000006/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/1f8ad333-8d06-4609-b171-dfba1da6e790/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev file,id=charserial0,path=/var/lib/nova/instances/1f8ad333-8d06-4609-b171-dfba1da6e790/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
Run basic lifecycle operations on the VM (e.g. query, stop, start, delete), as shown in the sketch below. See Lab 09.
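For example, a short sketch using standard nova subcommands (leave the instance running afterwards, since the next step expects one active instance):
$ nova show my-vm
$ nova stop my-vm
$ nova start my-vm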
Before moving on, use nova list to make sure you have 1 instance running. If you do not, use nova boot to bring it up.
$ nova list
+--------------------------------------+-------+--------+------------+-------------+----------+
| ID                                   | Name  | Status | Task State | Power State | Networks |
+--------------------------------------+-------+--------+------------+-------------+----------+
| 1f8ad333-8d06-4609-b171-dfba1da6e790 | my-vm | ACTIVE | -          | Running     |          |
+--------------------------------------+-------+--------+------------+-------------+----------+
List the existing projects, using openstack project list. You will see the built-in admin project and the internal service project.
$ openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| e766f5933dcd45d2bd022a6ef3e01133 | admin   |
| 206cdfa865ce449aade08f7d54611078 | service |
+----------------------------------+---------+
Let's create a new custom project, named students:
$ openstack project create --domain default --description "SCGC Students" students
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | SCGC Students                    |
| domain_id   | default                          |
| enabled     | True                             |
| id          | de60f57b87f14b0e98efcb762fa414a4 |
| is_domain   | False                            |
| name        | students                         |
| parent_id   | default                          |
+-------------+----------------------------------+
Let's also set a quota of at most one instance for this project:
$ openstack quota set --instances 1 students
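You can inspect the resulting limits for the project:
$ openstack quota show students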
List the current users, using openstack user list. You will see the built-in admin user and the internal users nova and glance.
$ openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| ead10d46034242b0a1b698756d4cce25 | admin  |
| c739a4c0f45541f2b598ed51d00640bd | glance |
| 9e877d7ba5d640ae9ca801cb94bd3543 | nova   |
+----------------------------------+--------+
Let's create two more users, named student1 and student2. The password will be student:
$ openstack user create --domain default --description "Student One" --password student student1
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| description         | Student One                      |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | db7da55870f34a418a581fa4d04c9e6f |
| name                | student1                         |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack user create --domain default --description "Student Two" --password student student2
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| description         | Student Two                      |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | d8e8dccfeb1b48428951743c147a50b5 |
| name                | student2                         |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Also, create OpenStack RC files for the two users:
$ cat student1-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=students
export OS_USERNAME=student1
export OS_PASSWORD=student
export OS_AUTH_URL=http://newton:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
$ cat student2-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=students
export OS_USERNAME=student2
export OS_PASSWORD=student
export OS_AUTH_URL=http://newton:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
List the existing roles using openstack role list. You will see the built-in roles admin and _member_:
$ openstack role list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 86527e5f7b304bcab4d2ff79abd9dfcb | admin    |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
+----------------------------------+----------+
Let's create another role, called student:
$ openstack role create student
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 8bed042dbbb2499e831136fb22533d79 |
| name      | student                          |
+-----------+----------------------------------+
To assign a user to a project, we grant the user a role on that project:
$ openstack role add --project students --user student1 student
$ openstack role add --project students --user student2 student
Verify the role assignments:
$ openstack role assignment list --names
+---------+------------------+-------+------------------+--------+-----------+
| Role    | User             | Group | Project          | Domain | Inherited |
+---------+------------------+-------+------------------+--------+-----------+
| admin   | admin@Default    |       | admin@Default    |        | False     |
| admin   | glance@Default   |       | service@Default  |        | False     |
| admin   | nova@Default     |       | service@Default  |        | False     |
| student | student1@Default |       | students@Default |        | False     |
| student | student2@Default |       | students@Default |        | False     |
+---------+------------------+-------+------------------+--------+-----------+
Let's see what happens when we do actions as other users.
First, authenticate as student1 and list the instances:
$ source student1-openrc
$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
You will not be able to see the previously created instance, because it belongs to another project. Listing all instances is also not possible, because you are not an admin:
$ nova list --all
ERROR (Forbidden): Policy doesn't allow os_compute_api:servers:detail:get_all_tenants to be performed. (HTTP 403) (Request-ID: req-5a445b40-a7cd-4693-b28d-5c416a14ea41)
Boot an instance:
$ nova boot --flavor m1.nano --image cirros --nic none student1-vm
$ nova list
+--------------------------------------+-------------+--------+------------+-------------+----------+
| ID                                   | Name        | Status | Task State | Power State | Networks |
+--------------------------------------+-------------+--------+------------+-------------+----------+
| 7198320d-60a8-4571-815b-7727bb65fd51 | student1-vm | ACTIVE | -          | Running     |          |
+--------------------------------------+-------------+--------+------------+-------------+----------+
Authenticate as student2 and list the instances:
$ source student2-openrc
$ nova list
+--------------------------------------+-------------+--------+------------+-------------+----------+
| ID                                   | Name        | Status | Task State | Power State | Networks |
+--------------------------------------+-------------+--------+------------+-------------+----------+
| 7198320d-60a8-4571-815b-7727bb65fd51 | student1-vm | ACTIVE | -          | Running     |          |
+--------------------------------------+-------------+--------+------------+-------------+----------+
You are able to see the instance because it is in the same project, even though it was started by another user.
Try to boot a new one:
$ nova boot --flavor m1.nano --image cirros --nic none student2-vm
ERROR (Forbidden): Quota exceeded for instances: Requested 1, but already used 1 of 1 instances (HTTP 403) (Request-ID: req-d9072d47-13da-4294-8b42-1cf8faddbbc7)
You are not allowed, because the project's one-instance quota is already used.
Delete the instance created by student1:
$ nova delete 7198320d-60a8-4571-815b-7727bb65fd51
Request to delete server 7198320d-60a8-4571-815b-7727bb65fd51 has been accepted.
Then, create your own instance:
$ nova boot --flavor m1.nano --image cirros --nic none student2-vm
$ nova list
+--------------------------------------+-------------+--------+------------+-------------+----------+
| ID                                   | Name        | Status | Task State | Power State | Networks |
+--------------------------------------+-------------+--------+------------+-------------+----------+
| 76d8580f-7c62-42ee-8798-d93716288bb2 | student2-vm | ACTIVE | -          | Running     |          |
+--------------------------------------+-------------+--------+------------+-------------+----------+
Now, log in as admin and list all the instances:
$ source admin-openrc
$ nova list --all
+--------------------------------------+-------------+----------------------------------+--------+------------+-------------+----------+
| ID                                   | Name        | Tenant ID                        | Status | Task State | Power State | Networks |
+--------------------------------------+-------------+----------------------------------+--------+------------+-------------+----------+
| 1f8ad333-8d06-4609-b171-dfba1da6e790 | my-vm       | e766f5933dcd45d2bd022a6ef3e01133 | ACTIVE | -          | Running     |          |
| 76d8580f-7c62-42ee-8798-d93716288bb2 | student2-vm | de60f57b87f14b0e98efcb762fa414a4 | ACTIVE | -          | Running     |          |
+--------------------------------------+-------------+----------------------------------+--------+------------+-------------+----------+
Delete all the instances:
$ nova delete 1f8ad333-8d06-4609-b171-dfba1da6e790
Request to delete server 1f8ad333-8d06-4609-b171-dfba1da6e790 has been accepted.
$ nova delete 76d8580f-7c62-42ee-8798-d93716288bb2
Request to delete server 76d8580f-7c62-42ee-8798-d93716288bb2 has been accepted.
$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
We want to define a special role, called glanceadmin, that will be able to add and delete images in Glance.
First, let's create the role:
$ openstack role create glanceadmin
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | ba3e153d237542b98504dd2e3c083a61 |
| name      | glanceadmin                      |
+-----------+----------------------------------+
Then, edit the Glance policy file /etc/glance/policy.json so that only this role will be allowed to add or delete images:
...
"add_image": "role:glanceadmin",
"delete_image": "role:glanceadmin",
...
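oslo.policy normally re-reads policy.json when the file changes, so no restart should be needed; if the new rules do not seem to apply, restarting the Glance API service is a safe fallback:
$ sudo service glance-api restart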
Now, try to delete the cirros image:
$ openstack image delete cirros
Failed to delete image with name or ID 'cirros': 403 Forbidden
You are not authorized to complete delete_image action. (HTTP 403)
Failed to delete 1 of 1 images.
You are not allowed, even though you are admin.
Create a new user, named glanceguru:
$ openstack user create --domain default --description "Glance Guru" --password gg glanceguru
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| description         | Glance Guru                      |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 0c572b48b242461285a5a1d74eec4244 |
| name                | glanceguru                       |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Assign the glanceadmin role to the glanceguru user:
$ openstack role add --project admin --user glanceguru glanceadmin
$ openstack role assignment list --names
+-------------+--------------------+-------+------------------+--------+-----------+
| Role        | User               | Group | Project          | Domain | Inherited |
+-------------+--------------------+-------+------------------+--------+-----------+
| admin       | admin@Default      |       | admin@Default    |        | False     |
| admin       | glance@Default     |       | service@Default  |        | False     |
| admin       | nova@Default       |       | service@Default  |        | False     |
| student     | student1@Default   |       | students@Default |        | False     |
| student     | student2@Default   |       | students@Default |        | False     |
| glanceadmin | glanceguru@Default |       | admin@Default    |        | False     |
+-------------+--------------------+-------+------------------+--------+-----------+
Create the OpenStack RC file:
$ cat gg-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=glanceguru
export OS_PASSWORD=gg
export OS_AUTH_URL=http://newton:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Log in as glanceguru and try to delete the image:
$ source gg-openrc
$ openstack image delete cirros
$ openstack image list
Now, try to upload it again:
$ openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
403 Forbidden
You are not authorized to complete publicize_image action. (HTTP 403)
Because you specified the public flag, you need to allow one more action in the policy:
...
"publicize_image": "role:glanceadmin",
...
Retry the upload:
$ openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2018-05-19T22:14:56Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/928e2f5a-3983-45d2-b1c1-1a8960fce7dc/file |
| id               | 928e2f5a-3983-45d2-b1c1-1a8960fce7dc                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | e766f5933dcd45d2bd022a6ef3e01133                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2018-05-19T22:14:56Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
Install Horizon (the OpenStack dashboard).
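As a hint, a possible starting point (this assumes the standard Ubuntu package name; the dashboard is then typically served at http://newton/horizon):
$ sudo apt install openstack-dashboard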
For testing, launch firefox from the command line on fep8.grid.pub.ro. Make sure to use compression (add the -C flag to the ssh command).
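For example (the -X flag enables the X11 forwarding needed to display firefox remotely; replace <username> with your own):
$ ssh -XC <username>@fep8.grid.pub.ro
$ firefox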