The OpenStack command-line clients (nova, neutron, keystone etc.) are already installed on fep.grid.pub.ro, so we'll use them from there.

Before using the clients, we must provide the necessary authentication parameters. This is done via an OpenStack RC file. To obtain your OpenStack RC file from Horizon (the OpenStack dashboard), go to Project → Compute → Access & Security → API Access and click on Download OpenStack RC file.

Upload the file in your home directory on fep.grid.pub.ro and source it in Bash:
$ source alexandru.carp_prj-openrc.sh
Please enter your OpenStack Password:
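The RC file simply exports the environment variables that the clients read. As a rough sketch (the auth URL and tenant name below are illustrative values from a generic Horizon-generated file, not necessarily this cloud's exact values), it looks like this:

export OS_AUTH_URL=http://cloud-controller.grid.pub.ro:5000/v2.0
export OS_TENANT_NAME="alexandru.carp_prj"
export OS_USERNAME="alexandru.carp"
# the password is prompted for interactively, not stored in the file
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT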
Enter your password. To verify that authentication works, run any OpenStack command; for example, list the catalog of installed services using openstack catalog list:
$ openstack catalog list
+-----------+---------+---------------------------------------------------------------------------------------------+
| Name      | Type    | Endpoints                                                                                   |
+-----------+---------+---------------------------------------------------------------------------------------------+
| designate | dns     | NCIT                                                                                        |
|           |         | publicURL: http://172.16.5.161:9001/v1                                                      |
|           |         | internalURL: http://172.16.5.161:9001/v1                                                    |
|           |         | adminURL: http://172.16.5.161:9001/v1                                                       |
|           |         |                                                                                             |
| nova      | compute | NCIT                                                                                        |
|           |         | publicURL: http://cloud-controller.grid.pub.ro:8774/v2/e81c0aa57f61461c8da8496157f7041e     |
|           |         | internalURL: http://cloud-controller.grid.pub.ro:8774/v2/e81c0aa57f61461c8da8496157f7041e   |
|           |         | adminURL: http://cloud-controller.grid.pub.ro:8774/v2/e81c0aa57f61461c8da8496157f7041e      |
|           |         |                                                                                             |
[...]
For booting an instance, we must know the following parameters (objects):
  * an image
  * a flavor
  * a keypair
  * a network
  * a security group
For each of the above objects, we will list what is available in our own OpenStack tenant.
Images are handled by the Glance client. We will list them using glance image-list:
$ glance image-list
+--------------------------------------+---------------------------------------------+
| ID                                   | Name                                        |
+--------------------------------------+---------------------------------------------+
| 53fec0b8-753e-4a4f-91a3-51624a8a270d | ABD Oracle Template v1                      |
| 6349c723-de9e-4d3a-900b-b07416e5e486 | ABD Template v3                             |
| 56f8b431-d7be-43d1-966e-9c974fe20c8f | ASCG/CCG Template v2                        |
| 1aa5c205-9dde-4382-b7d7-8b5d652e38b8 | Centos 6                                    |
| c3dd305d-84e1-4e72-9a15-8194a4aafef3 | Centos 7                                    |
| 43259067-c2f1-438f-a16a-c4ad86dc2ad2 | Cisco onePK-1.3.0.181                       |
| 6a9c3513-d5d6-48f8-9600-e6afe9ac6686 | Cloudera Hadoop 5.7.1                       |
| 692e7272-9961-48d2-b54f-9f7cf3a45262 | Debian 8.6.0                                |
| f9a48d01-9123-4fb1-b989-908ac414a339 | GSR Template (Debian 8.6.0)                 |
| 4a69c6e9-fe89-409f-931a-41c75c12de3c | IBM-SDP                                     |
| c667704b-d650-4f8c-b08f-ba05583d8428 | ISC Temaplate v2.1                          |
| aa935990-0751-40f6-b40a-c9e9017e939e | ISC Template v2                             |
| 9481ec27-6897-4c3e-87ec-48977cb66164 | ISRM Template v2                            |
| 0d82a5c3-1141-45f0-83c3-e6c8157fc511 | Openstack Juno                              |
| ad74e12d-e883-4338-a1e6-78d2e0eb3a24 | RL 2016                                     |
| c3e91d38-0e7e-4506-a922-c4911bc9fca9 | RL 2016 Tema2                               |
| 31b91232-d87c-44ba-8abe-d99580a6375f | SCGC Template v1                            |
| 7c01be41-0973-4cec-b00f-9c7116c9f885 | SO VM Linux v1                              |
| c724142c-75ee-45b1-9f47-b752011d9bbc | STR - Win7                                  |
| 3672370a-af54-47c2-b2c1-d9875952415f | Ubuntu 16.04 Xenial                         |
| 7aae1675-571f-487b-9526-14a1ae038bc3 | Ubuntu 16.04 Xenial (32bit)                 |
| 7f283e7e-b347-408b-8521-daeec831b456 | USO Practic Template (Ubuntu 16.04 - 32bit) |
| 2eeb7d33-fc5a-4f71-b3ae-bfef6b1ce9dd | USO Template (Ubuntu 16.04 - 32bit)         |
| 9885d828-78d2-4804-b816-b7072aa4e08a | WinXP SCPI                                  |
+--------------------------------------+---------------------------------------------+
For booting the instance, we will use the Ubuntu 16.04 Xenial image, which has the ID 3672370a-af54-47c2-b2c1-d9875952415f. Let's find some more information about this image using glance image-show:
$ glance image-show 3672370a-af54-47c2-b2c1-d9875952415f
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 02f5162d90e1a620177c3075266f734b     |
| container_format | bare                                 |
| created_at       | 2016-10-24T17:50:20Z                 |
| disk_format      | qcow2                                |
| id               | 3672370a-af54-47c2-b2c1-d9875952415f |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | Ubuntu 16.04 Xenial                  |
| owner            | 1836fc3aec3f4226a73bb5e249385fe0     |
| protected        | True                                 |
| size             | 313982976                            |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-10-29T13:26:14Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+
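When scripting, you can avoid copy-pasting the ID by parsing the table output into a shell variable. A minimal sketch (the awk pattern is tied to the exact image name and table layout, so treat it as illustrative):

$ IMAGE_ID=$(glance image-list | awk -F'|' '$3 ~ /^ Ubuntu 16\.04 Xenial *$/ {gsub(/ /, "", $2); print $2}')
$ echo $IMAGE_ID
3672370a-af54-47c2-b2c1-d9875952415f

Note the anchored pattern, so that Ubuntu 16.04 Xenial (32bit) does not also match.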
Flavors are handled in Nova (the compute service). We will list them using nova flavor-list:
$ nova flavor-list
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 3c183fea-cea6-489e-8b6e-d34c4bf073ec | m1.tiny   | 512       | 8    | 0         |      | 1     | 1.0         | True      |
| 3e708b91-53c8-436c-9f6f-653f9a403481 | m1.medium | 1536      | 16   | 0         |      | 1     | 1.0         | True      |
| 443d714c-f295-4c92-b75e-96ae99a64fc4 | m1.large  | 4096      | 10   | 0         |      | 2     | 1.0         | True      |
| 4d76ded7-fae0-4fd4-9191-600a466a5fea | c1.large  | 4096      | 16   | 0         |      | 4     | 1.0         | True      |
| 5b1624ef-30b3-4eba-bfe4-0a1dc5211594 | c1.small  | 1024      | 16   | 0         |      | 1     | 1.0         | True      |
| 77c51a45-3f34-45e7-b9d9-84ef1266be83 | c1.medium | 1536      | 16   | 0         |      | 2     | 1.0         | True      |
| a8578051-c828-4446-97d0-da34b8877348 | m1.xlarge | 4096      | 24   | 0         | 2048 | 4     | 1.0         | True      |
| d1ddad6d-3a87-4f4f-a460-ec2c324d42b7 | m1.small  | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Let's find more information about the m1.tiny flavor, which has the ID 3c183fea-cea6-489e-8b6e-d34c4bf073ec, using nova flavor-show:
$ nova flavor-show 3c183fea-cea6-489e-8b6e-d34c4bf073ec
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 8                                    |
| extra_specs                | {"type": "gp"}                       |
| id                         | 3c183fea-cea6-489e-8b6e-d34c4bf073ec |
| name                       | m1.tiny                              |
| os-flavor-access:is_public | True                                 |
| ram                        | 512                                  |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+
Keypairs are also handled by Nova. To list them, we use nova keypair-list. You should only see your own keypair(s):
$ nova keypair-list
+------+-------------------------------------------------+
| Name | Fingerprint                                     |
+------+-------------------------------------------------+
| fep  | ee:ed:db:ae:e3:09:f9:a0:f2:6f:4f:47:4a:15:14:e4 |
+------+-------------------------------------------------+
Using nova keypair-show, you can see the details, including the public key:
$ nova keypair-show fep
+-------------+-------------------------------------------------+
| Property    | Value                                           |
+-------------+-------------------------------------------------+
| created_at  | 2017-02-27T09:06:16.000000                      |
| deleted     | False                                           |
| deleted_at  | -                                               |
| fingerprint | ee:ed:db:ae:e3:09:f9:a0:f2:6f:4f:47:4a:15:14:e4 |
| id          | 2433                                            |
| name        | fep                                             |
| updated_at  | -                                               |
| user_id     | alexandru.carp                                  |
+-------------+-------------------------------------------------+
Public key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqnHjfEFfB6n6CbF5a4wQpnxZkavrJCuX1ivjNoGjUmEa9dn0GS+YB+bWs8Nny8cgNkgzRE1jFcIZ2ByNxahf884G2QNZm+9tufWl3V0GdqZ+sooi5Fry9BGv/DHyRw3/y+w9xSfOoS8pFl/lV3jOfZYEWLRTwVT63SOx1sjOOMJtBxr6IyjHzVWErKlJymuxa7R5u4YqqeCNpqNYCQZqvbbY6iM9Rd4WbmEQgTtpmM6TLE2mpaD9MTeFsoiQxPhCTGCr1EZsLtIdPkowMAxCvVEQ7GU7p4R/8WZtKpujNLboFyZkZm7Ku0JgNdNEc+sl5YzxS9E6BkxlIl1xefEFP alexandru.carp@fep7-1.grid.pub.ro
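If you do not have a keypair yet, you can generate one on fep and import its public key under a name of your choice (a short sketch; nova keypair-add with --pub-key uploads an existing public key):

$ ssh-keygen -t rsa                                  # only if you have no key yet
$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub fep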
Networks are handled by Neutron. We use neutron net-list to list all the networks:
$ neutron net-list
+--------------------------------------+--------+--------------------------------------------------+
| id                                   | name   | subnets                                          |
+--------------------------------------+--------+--------------------------------------------------+
| fc56b8a7-d6ea-4025-8fba-0f868499e20b | Net224 | ec56c500-2508-46d9-b2bd-a601cd7c1565             |
| 525c2933-8a11-4cfc-ad12-234cac7c9328 | Net240 | f42d03c0-d5e4-4769-b655-1a59144b01e5             |
| 424666ed-c0e8-4d1c-96fe-c22c56262a87 | vlan9  | 3f7ca0ff-7855-4f12-b6b4-4c4a763aa22f 10.9.0.0/16 |
+--------------------------------------+--------+--------------------------------------------------+
Let's show details about the vlan9 network and its subnet:
$ neutron net-show 424666ed-c0e8-4d1c-96fe-c22c56262a87
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 424666ed-c0e8-4d1c-96fe-c22c56262a87 |
| mtu             |                                      |
| name            | vlan9                                |
| router:external | True                                 |
| shared          | True                                 |
| status          | ACTIVE                               |
| subnets         | 3f7ca0ff-7855-4f12-b6b4-4c4a763aa22f |
| tenant_id       | 975459f77464498898a1b17c8f08c8d4     |
+-----------------+--------------------------------------+
$ neutron subnet-show 3f7ca0ff-7855-4f12-b6b4-4c4a763aa22f
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "10.9.0.100", "end": "10.9.255.254"} |
| cidr              | 10.9.0.0/16                                    |
| dns_nameservers   | 141.85.241.15                                  |
| enable_dhcp       | True                                           |
| gateway_ip        | 10.9.0.1                                       |
| host_routes       |                                                |
| id                | 3f7ca0ff-7855-4f12-b6b4-4c4a763aa22f           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | 10_9                                           |
| network_id        | 424666ed-c0e8-4d1c-96fe-c22c56262a87           |
| subnetpool_id     |                                                |
| tenant_id         | 975459f77464498898a1b17c8f08c8d4               |
+-------------------+------------------------------------------------+
Security groups are also handled by Neutron, so we'll use neutron security-group-list.
$ neutron security-group-list
+--------------------------------------+---------+----------------------------------------------------------------------+
| id                                   | name    | security_group_rules                                                 |
+--------------------------------------+---------+----------------------------------------------------------------------+
| 8ef0e4b7-4543-48da-b304-00f74c6e20c4 | default | egress, IPv4                                                         |
|                                      |         | egress, IPv6                                                         |
|                                      |         | ingress, IPv4, 10000-40000/tcp, remote_ip_prefix: 0.0.0.0/0          |
|                                      |         | ingress, IPv4, 22/tcp, remote_ip_prefix: 0.0.0.0/0                   |
|                                      |         | ingress, IPv4, 3389/tcp, remote_ip_prefix: 0.0.0.0/0                 |
|                                      |         | ingress, IPv4, 4000/tcp, remote_ip_prefix: 0.0.0.0/0                 |
|                                      |         | ingress, IPv4, 443/tcp, remote_ip_prefix: 0.0.0.0/0                  |
|                                      |         | ingress, IPv4, 5901/tcp, remote_ip_prefix: 0.0.0.0/0                 |
|                                      |         | ingress, IPv4, 80/tcp, remote_ip_prefix: 0.0.0.0/0                   |
|                                      |         | ingress, IPv4, 8080/tcp, remote_ip_prefix: 0.0.0.0/0                 |
|                                      |         | ingress, IPv4, remote_group_id: 8ef0e4b7-4543-48da-b304-00f74c6e20c4 |
|                                      |         | ingress, IPv6, remote_group_id: 8ef0e4b7-4543-48da-b304-00f74c6e20c4 |
+--------------------------------------+---------+----------------------------------------------------------------------+
For the full, verbose details, use neutron security-group-show with the ID of the security group:
$ neutron security-group-show 8ef0e4b7-4543-48da-b304-00f74c6e20c4
+----------------------+--------------------------------------------------------------------+
| Field                | Value                                                              |
+----------------------+--------------------------------------------------------------------+
| description          | Default security group                                             |
| id                   | 8ef0e4b7-4543-48da-b304-00f74c6e20c4                               |
| name                 | default                                                            |
| security_group_rules | {                                                                  |
|                      |      "remote_group_id": null,                                      |
|                      |      "direction": "ingress",                                       |
|                      |      "remote_ip_prefix": "0.0.0.0/0",                              |
|                      |      "protocol": "tcp",                                            |
|                      |      "tenant_id": "e81c0aa57f61461c8da8496157f7041e",              |
|                      |      "port_range_max": 4000,                                       |
|                      |      "security_group_id": "8ef0e4b7-4543-48da-b304-00f74c6e20c4",  |
|                      |      "port_range_min": 4000,                                       |
|                      |      "ethertype": "IPv4",                                          |
|                      |      "id": "013de936-4790-477c-98bf-631ae252e60a"                  |
|                      | }                                                                  |
[...]
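Rules can also be managed from the CLI. For example, a hypothetical rule allowing ingress TCP traffic on port 8000 from anywhere could be added to the default group like this (port 8000 is purely illustrative, not something this lab requires):

$ neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 8000 --port-range-max 8000 \
    --remote-ip-prefix 0.0.0.0/0 default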
Finally, after listing all the parameters, we can boot the instance. We will use:
  * image: Ubuntu 16.04 Xenial (3672370a-af54-47c2-b2c1-d9875952415f)
  * flavor: m1.tiny
  * network: vlan9 (424666ed-c0e8-4d1c-96fe-c22c56262a87)
  * security group: default
  * keypair: fep
For booting, we use nova boot:
$ nova boot --flavor m1.tiny --image 3672370a-af54-47c2-b2c1-d9875952415f \
    --nic net-id=424666ed-c0e8-4d1c-96fe-c22c56262a87 --security-group default \
    --key-name fep scgc
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          |                                                            |
| OS-EXT-SRV-ATTR:host                 | -                                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-00007e32                                          |
| OS-EXT-STS:power_state               | 0                                                          |
| OS-EXT-STS:task_state                | scheduling                                                 |
| OS-EXT-STS:vm_state                  | building                                                   |
| OS-SRV-USG:launched_at               | -                                                          |
| OS-SRV-USG:terminated_at             | -                                                          |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| adminPass                            | MSM9keuqK8zB                                               |
| config_drive                         |                                                            |
| created                              | 2018-05-14T20:37:28Z                                       |
| flavor                               | m1.tiny (3c183fea-cea6-489e-8b6e-d34c4bf073ec)             |
| hostId                               |                                                            |
| id                                   | acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7                       |
| image                                | Ubuntu 16.04 Xenial (3672370a-af54-47c2-b2c1-d9875952415f) |
| key_name                             | fep                                                        |
| metadata                             | {}                                                         |
| name                                 | scgc                                                       |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| security_groups                      | default                                                    |
| status                               | BUILD                                                      |
| tenant_id                            | e81c0aa57f61461c8da8496157f7041e                           |
| updated                              | 2018-05-14T20:37:29Z                                       |
| user_id                              | alexandru.carp                                             |
+--------------------------------------+------------------------------------------------------------+
In Horizon, follow the state of the booted instance.
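You can follow the same transition from the CLI by re-running nova list until the status becomes ACTIVE; alternatively, nova boot accepts a --poll flag that blocks and reports progress until the build completes:

$ watch -n 5 nova list          # press Ctrl+C once the status is ACTIVE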
In this section, we will perform various operations regarding the lifecycle of an instance.
We can use nova list to list all instances:
$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------+
| ID                                   | Name | Status | Task State | Power State | Networks           |
+--------------------------------------+------+--------+------------+-------------+--------------------+
| acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7 | scgc | ACTIVE | -          | Running     | vlan9=10.9.119.119 |
+--------------------------------------+------+--------+------------+-------------+--------------------+
With nova show, we can get details:
$ nova show acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          | GP                                                         |
| OS-EXT-SRV-ATTR:host                 | quad-wn20.grid.pub.ro                                      |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | quad-wn20.grid.pub.ro                                      |
| OS-EXT-SRV-ATTR:instance_name        | instance-00007e32                                          |
| OS-EXT-STS:power_state               | 1                                                          |
| OS-EXT-STS:task_state                | -                                                          |
| OS-EXT-STS:vm_state                  | active                                                     |
| OS-SRV-USG:launched_at               | 2018-05-14T20:37:54.000000                                 |
| OS-SRV-USG:terminated_at             | -                                                          |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| config_drive                         |                                                            |
| created                              | 2018-05-14T20:37:28Z                                       |
| flavor                               | m1.tiny (3c183fea-cea6-489e-8b6e-d34c4bf073ec)             |
| hostId                               | e1774fd65778cdf1c7aaeb0240bdf46d071197d067358e6fea3e09e8   |
| id                                   | acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7                       |
| image                                | Ubuntu 16.04 Xenial (3672370a-af54-47c2-b2c1-d9875952415f) |
| key_name                             | fep                                                        |
| metadata                             | {}                                                         |
| name                                 | scgc                                                       |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| security_groups                      | default                                                    |
| status                               | ACTIVE                                                     |
| tenant_id                            | e81c0aa57f61461c8da8496157f7041e                           |
| updated                              | 2018-05-14T20:37:54Z                                       |
| user_id                              | alexandru.carp                                             |
| vlan9 network                        | 10.9.119.119                                               |
+--------------------------------------+------------------------------------------------------------+
Test connectivity to the instance using ping and ssh (user ubuntu).
$ ping <INSTANCE IP>
$ ssh ubuntu@<INSTANCE IP>
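When scripting these checks, the IP address can be extracted from the nova show output instead of being copied by hand. A fragile but handy sketch, tied to the table layout above (the " network " pattern matches the "vlan9 network" row):

$ IP=$(nova show <INSTANCE ID> | awk -F'|' '/ network / {gsub(/ /, "", $3); print $3}')
$ ping -c 3 $IP
$ ssh ubuntu@$IP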
For stopping the instance without deleting it, we can use the nova stop command. This is the equivalent of shutting the instance down.
$ nova stop <INSTANCE ID>
Request to stop server acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7 has been accepted.
$ nova list
+--------------------------------------+------+---------+------------+-------------+--------------------+
| ID                                   | Name | Status  | Task State | Power State | Networks           |
+--------------------------------------+------+---------+------------+-------------+--------------------+
| acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7 | scgc | SHUTOFF | -          | Shutdown    | vlan9=10.9.119.119 |
+--------------------------------------+------+---------+------------+-------------+--------------------+
After being stopped, an instance can be started with the nova start command:
$ nova start <INSTANCE ID>
Request to start server acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7 has been accepted.
$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------+
| ID                                   | Name | Status | Task State | Power State | Networks           |
+--------------------------------------+------+--------+------------+-------------+--------------------+
| acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7 | scgc | ACTIVE | -          | Running     | vlan9=10.9.119.119 |
+--------------------------------------+------+--------+------------+-------------+--------------------+
After starting the instance, verify that it is reachable again using ping and ssh.
Terminate the instance with the nova delete command:
$ nova delete <INSTANCE ID>
Request to delete server acd0fcdd-ac58-49b5-ad04-bf34cc7af4a7 has been accepted.
After that, the instance should not appear in nova list any more:
$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
We want to create a topology of 2 VMs (a client and a server), connected through a private network. Each VM should also have a management connection to the vlan9 network:
+--------+        mynetwork        +--------+
| client |-------------------------| server |
+--------+      172.16.1.0/24      +--------+
    |                                  |
    |                                  |
  vlan9                              vlan9
Create the network using the neutron net-create command:
$ neutron net-create mynetwork
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 20a1cce9-9adc-48d3-bb55-3917dd3fbdea |
| mtu             | 0                                    |
| name            | mynetwork                            |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | e81c0aa57f61461c8da8496157f7041e     |
+-----------------+--------------------------------------+
Verify it was successfully created using neutron net-show:
$ neutron net-show <NETWORK ID>
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 20a1cce9-9adc-48d3-bb55-3917dd3fbdea |
| mtu             | 0                                    |
| name            | mynetwork                            |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | e81c0aa57f61461c8da8496157f7041e     |
+-----------------+--------------------------------------+
The next step is to create a subnet for mynetwork. We will use neutron subnet-create with the following parameters:
  * name: mysubnet
  * no gateway
  * CIDR: 172.16.1.0/24
$ neutron subnet-create --name mysubnet --no-gateway mynetwork 172.16.1.0/24
Created a new subnet:
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "172.16.1.1", "end": "172.16.1.254"} |
| cidr              | 172.16.1.0/24                                  |
| dns_nameservers   |                                                |
| enable_dhcp       | True                                           |
| gateway_ip        |                                                |
| host_routes       |                                                |
| id                | b5b278b8-5d80-4285-9efd-094c6481b6e1           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | mysubnet                                       |
| network_id        | 20a1cce9-9adc-48d3-bb55-3917dd3fbdea           |
| subnetpool_id     |                                                |
| tenant_id         | e81c0aa57f61461c8da8496157f7041e               |
+-------------------+------------------------------------------------+
Verify the subnet was successfully created using Horizon, in Project → Network → Networks → mynetwork → Subnets.
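Alternatively, the same check can be done from the CLI with the Neutron client:

$ neutron subnet-list
$ neutron subnet-show <SUBNET ID>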
We will boot the instances according to the topology. Note that each instance will have 2 vNICs:
$ nova boot --flavor m1.tiny --image 3672370a-af54-47c2-b2c1-d9875952415f \
    --nic net-id=424666ed-c0e8-4d1c-96fe-c22c56262a87 --nic net-id=<mynetwork ID> \
    --security-group default --key-name <keypair> client
$ nova boot --flavor m1.tiny --image 3672370a-af54-47c2-b2c1-d9875952415f \
    --nic net-id=424666ed-c0e8-4d1c-96fe-c22c56262a87 --nic net-id=<mynetwork ID> \
    --security-group default --key-name <keypair> server
We need to log in to each instance via SSH and trigger the DHCP client for the second NIC:
$ sudo dhclient ens4
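Note that dhclient only configures the interface for the current boot. If you want ens4 to come up automatically after a reboot, one option on Ubuntu 16.04 (which uses ifupdown) is to persist the configuration; a minimal sketch, assuming the image's /etc/network/interfaces includes the usual "source /etc/network/interfaces.d/*.cfg" line:

# /etc/network/interfaces.d/ens4.cfg
auto ens4
iface ens4 inet dhcp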
Verify that each instance gets the correct IP address and verify connectivity using ping.
Delete the resources in the reverse order:
$ nova delete <client instance ID>
$ nova delete <server instance ID>
$ neutron net-delete <mynetwork ID>
Verify that the resources were deleted using nova list and neutron net-list.
Using orchestration, we can create multiple cloud objects through a single operation. For this, we need an additional object, called stack. The service that handles orchestration in OpenStack is Heat.
We will define a new stack that deploys 3 Ubuntu VMs at the same time. For this, go to Project → Orchestration → Stacks and click on Launch Stack.
For Template source, upload a file with the following content (substitute <KEYPAIR NAME> with your own keypair name):
heat_template_version: 2013-05-23

resources:
  vm1:
    type: OS::Nova::Server
    properties:
      name: vm1
      image: 3672370a-af54-47c2-b2c1-d9875952415f
      flavor: m1.tiny
      key_name: <KEYPAIR NAME>
      networks:
        - network: vlan9
  vm2:
    type: OS::Nova::Server
    properties:
      name: vm2
      image: 3672370a-af54-47c2-b2c1-d9875952415f
      flavor: m1.tiny
      key_name: <KEYPAIR NAME>
      networks:
        - network: vlan9
  vm3:
    type: OS::Nova::Server
    properties:
      name: vm3
      image: 3672370a-af54-47c2-b2c1-d9875952415f
      flavor: m1.tiny
      key_name: <KEYPAIR NAME>
      networks:
        - network: vlan9
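If you prefer the CLI over Horizon, the same template can be launched with the Heat client (assuming it is installed on fep; stack.yaml stands for whatever file name you saved the template under):

$ heat stack-create -f stack.yaml mystack      # create the stack from the template
$ heat stack-list                              # status becomes CREATE_COMPLETE when done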
After the stack is created, verify in Horizon (or with nova list) that the three VMs are up and running. When you are done, delete the stack; deleting a stack also deletes all the resources it created.