Nova VNC flows under the hood

Most OpenStack deployments have a VNC console implemented with nova-novncproxy. This service gives end users the ability to log into their instances through a web browser.

In this post I'm going to show how a VNC console request works under the hood when using the following command or when launching a VNC session through Horizon.

# nova get-vnc-console INSTANCE novnc

First of all, a user connects to Nova and issues a VNC console request for an instance. Nova API needs to validate the user, so an authentication request is issued to Keystone.
The user receives a token whose catalog contains Nova's endpoint URL; with that endpoint and the token, the user makes a request against Nova asking for a VNC session.
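
For reference, the initial authentication that produces the token used in the requests below looks roughly like this (a sketch only; tenant, user and password are placeholders):

# Keystone v2.0 token request, similar to what python-novaclient issues on our behalf
curl -s -X POST http://192.168.200.208:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "SECRET"}}}'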

GET http://192.168.200.208:5000/v2.0 -H "Accept: application/json" -H \
"User-Agent: python-keystoneclient"

GET http://192.168.200.208:8774/v2/ -H "User-Agent: python-novaclient" -H \
"Accept: application/json" -H "X-Auth-Token: {SHA1}3b6262df9eaba5da33c1004805187806322201f1"

If a name is used in the request instead of an instance ID, Nova needs to check its database to match that name with its corresponding ID, as we can see in the following request.

GET http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers?name=test1 \
-H "User-Agent: python-novaclient" -H "Accept: application/json" \
-H "X-Auth-Token: {SHA1}3b6262df9eaba5da33c1004805187806322201f1"

RESP BODY: {"servers": [{"id": "9165dbda-f54e-4186-b2cb-e6ca05ac53ee", \
"links": [{"href": "http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee", "rel": "self"},\
 {"href": "http://192.168.200.208:8774/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee", \
"rel": "bookmark"}], "name": "test1"}]}

Once the ID is matched with the name, Nova retrieves information about the instance (I thought this was to validate that the instance is in ACTIVE status, but I realized the request is made even when it is STOPPED).

GET http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee\
 -H "User-Agent: python-novaclient" -H "Accept: application/json" \
 -H "X-Auth-Token: {SHA1}3b6262df9eaba5da33c1004805187806322201f1"

RESP BODY: {"server": {"status": "ACTIVE", "updated": "2016-03-02T17:28:45Z", "hostId": "ca3a874dcad9079fcc6a0b10b0e2efaa394bc66b5335197fdd9c2498", "OS-EXT-SRV-ATTR:host": "liberty", "addresses": {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:aa:1c:32", "version": 4, "addr": "10.0.0.6", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee", "rel": "self"}, {"href": "http://192.168.200.208:8774/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee", "rel": "bookmark"}], "key_name": null, "image": {"id": "bf31eadd-c5f4-40f8-9ddb-30f688ca5e5f", "links": [{"href": "http://192.168.200.208:8774/ee84411cdb8148d28674b129ef482f31/images/bf31eadd-c5f4-40f8-9ddb-30f688ca5e5f", "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-SRV-USG:launched_at": "2016-03-02T17:28:45.000000", "OS-EXT-SRV-ATTR:hypervisor_hostname": "liberty", "flavor": {"id": "1", "links": [{"href": "http://192.168.200.208:8774/ee84411cdb8148d28674b129ef482f31/flavors/1", "rel": "bookmark"}]}, "id": "9165dbda-f54e-4186-b2cb-e6ca05ac53ee", "security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at": null, "OS-EXT-AZ:availability_zone": "nova", "user_id": "d9164a323be649c0a8c5c80fdd5bd585", "name": "test1", "created": "2016-03-02T17:28:34Z", "tenant_id": "ee84411cdb8148d28674b129ef482f31", "OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached": [], "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "metadata": {}}}

With that information, the client POSTs an os-getVNCConsole action to nova-api, which in turn asks nova-consoleauth for a VNC console.

POST http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee/action \
-H "User-Agent: python-novaclient" -H "Content-Type: application/json" \
-H "Accept: application/json" -H "X-Auth-Token: {SHA1}3b6262df9eaba5da33c1004805187806322201f1"\
-d '{"os-getVNCConsole": {"type": "novnc"}}'


DEBUG nova.api.openstack.wsgi [req-2201b9d6-5711-46d3-ac4d-669094f07527 \
d9164a323be649c0a8c5c80fdd5bd585 ee84411cdb8148d28674b129ef482f31 - - -] \
Action: 'action', calling method: , body: {"os-getVNCConsole": {"type": "novnc"}} \
_process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:789

Nova-consoleauth receives the console request, generates a temporary token for the VNC console and builds an access URL.

INFO nova.consoleauth.manager [req-d4def6f9-1ab9-4626-b6a8-d81643ea5eb4 d9164a323be649c0a8c5c80fdd5bd585 ee84411cdb8148d28674b129ef482f31 - - -] \
Received Token: 3dfcd011-28f1-4cf3-8f5c-8cd18de4560e, \
{'instance_uuid': u'9165dbda-f54e-4186-b2cb-e6ca05ac53ee', \
'access_url': u'http://192.168.200.208:6080/vnc_auto.html?token=3dfcd011-28f1-4cf3-8f5c-8cd18de4560e',\
 'token': u'3dfcd011-28f1-4cf3-8f5c-8cd18de4560e', 'last_activity_at': 1456940028.356214, \
'internal_access_path': None, 'console_type': u'novnc', 'host': u'liberty', 'port': u'5900'}

Nova-consoleauth answers to nova-api, which in turn answers to the user with an access URL.
This URL contains the following information:

  • An HTTP or HTTPS connection to the nova-novncproxy IP
  • The nova-novncproxy port
  • A token to validate the VNC connection

RESP BODY: {"console": {"url": "http://192.168.200.208:6080/vnc_auto.html?token=3dfcd011-28f1-4cf3-8f5c-8cd18de4560e", "type": "novnc"}}

+-------+--------------------------------------------------------------------------------------+
| Type  | Url                                                                                  |
+-------+--------------------------------------------------------------------------------------+
| novnc | http://192.168.200.208:6080/vnc_auto.html?token=3dfcd011-28f1-4cf3-8f5c-8cd18de4560e |
+-------+--------------------------------------------------------------------------------------+
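
The base of this access URL, as well as the compute-side address used later for the mapping, comes from Nova's VNC configuration. A minimal sketch of the relevant options on a Liberty compute node might look like this (the values shown are illustrative for this environment):

# /etc/nova/nova.conf (compute node)
[DEFAULT]
vnc_enabled = True
# Address the instance's VNC server listens on (qemu/libvirt side)
vncserver_listen = 0.0.0.0
# Address nova-novncproxy uses to reach the compute node
vncserver_proxyclient_address = 192.168.122.73
# Public URL handed back to the user, served by nova-novncproxy on port 6080
novncproxy_base_url = http://192.168.200.208:6080/vnc_auto.html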

Up to this point, the nova-novncproxy service could have been stopped or simply not used at all; it is now that the proxy server comes into play.
The user connects through a web browser to the nova-novncproxy URL provided by Nova in the previous step.

DEBUG nova.console.websocketproxy [-] 192.168.200.1: \
new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:828

Nova-novncproxy validates the token included in the URL against nova-consoleauth.

nova.consoleauth.manager [req-399c7b58-700a-4779-b215-b12d10056813 - - - - -] \
Checking Token: 3dfcd011-28f1-4cf3-8f5c-8cd18de4560e, True

Once the token is validated, nova-novncproxy maps the compute node's private IP and port (in this case 5900) to the nova-novncproxy public IP and port (6080).

INFO nova.console.websocketproxy [req-399c7b58-700a-4779-b215-b12d10056813 - - - - -]\
   7: connect info: {u'instance_uuid': u'9165dbda-f54e-4186-b2cb-e6ca05ac53ee', u'\
internal_access_path': None, u'last_activity_at': 1456940028.356214, \
u'console_type': u'novnc', u'host': u'liberty', u'token': u'3dfcd011-28f1-4cf3-8f5c-8cd18de4560e', \
u'access_url': u'http://192.168.200.208:6080/vnc_auto.html?token=3dfcd011-28f1-4cf3-8f5c-8cd18de4560e'\
, u'port': u'5900'}

We can see how the nova-novncproxy Python process holds both connections: one from the user's browser (port 6080) and one towards the VNC server (port 5900).

# ps aux | grep vnc
nova     14840  1.2  0.7 362096 41000 ?        S    18:53   0:14 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/

# netstat -putona | grep 14840
tcp        0      0 192.168.200.208:6080    192.168.200.1:59918     ESTABLISHED 14840/python2        keepalive (3,13/0/0)
tcp        0      0 192.168.122.73:57764    192.168.122.73:5900     ESTABLISHED 14840/python2        keepalive (3,13/0/0)

Nova-novncproxy then establishes the connection between the instance's VNC server and the user's browser session.

INFO nova.console.websocketproxy [req-399c7b58-700a-4779-b215-b12d10056813 - - - - -]\
   7: connecting to: liberty:5900

Libvirt attaches a VNC console to the instance, as we can see in the XML shown by the virsh command.
Also, port 5900 is now bound by the qemu-kvm process.


# virsh dumpxml 2
...
<graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0' keymap='en-us'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
...
# netstat -putona | grep 5900
tcp        0      0 0.0.0.0:5900            0.0.0.0:*               LISTEN      5910/qemu-kvm        off (0.00/0/0)
tcp        0      0 192.168.122.73:5900     192.168.122.73:57702    ESTABLISHED 5910/qemu-kvm        off (0.00/0/0)
tcp        0      0 192.168.122.73:57702    192.168.122.73:5900     ESTABLISHED 11118/python2        keepalive (1,92/0/0)

Nova-novncproxy keeps the connection alive until the browser session ends.

DEBUG nova.console.websocketproxy [-] \
Reaping zombies, active child count is 1 vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:828

When the token is not valid or has expired, the check against nova-consoleauth fails and we see a message like the following.

INFO nova.console.websocketproxy [req-9164b32d-3ce1-441b-82c7-6c23c9a354d0 - - - - -] \
handler exception: The token '3dfcd011-28f1-4cf3-8f5c-8cd18de4560e' is invalid or has expired

Regards.
Eduardo Gonzalez

Working with affinity/anti-affinity groups in OpenStack

In a previous post, you learned how to segregate resources with Availability Zones and Host Aggregates; those methods allow the end user to specify where and on which types of resources their instances should run.

In this post, you will learn how to tell Nova where nova-scheduler should place your instances based on two policies. These policies define whether instances should share the same hypervisor (affinity rule) or run on different hypervisors (anti-affinity rule), depending on the user's needs.

First, you need to modify nova.conf so that nova-scheduler can filter based on affinity rules. Add the ServerGroupAntiAffinityFilter and ServerGroupAffinityFilter filters to the scheduler_default_filters option.

# vi /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Restart nova-scheduler to apply changes

systemctl restart openstack-nova-scheduler

Once nova-scheduler has been restarted, we can create a server group based on the affinity policy (all instances in this group will be launched on the same hypervisor).

nova server-group-create instancestogethergroup affinity
+--------------------------------------+------------------------+---------------+---------+----------+
| Id                                   | Name                   | Policies      | Members | Metadata |
+--------------------------------------+------------------------+---------------+---------+----------+
| 27abe662-c37e-431c-9715-0d2137fc5519 | instancestogethergroup | [u'affinity'] | []      | {}       |
+--------------------------------------+------------------------+---------------+---------+----------+

Now create two instances, adding the --hint group=GROUP-ID option to specify the group the instances will be members of.

nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=27abe662-c37e-431c-9715-0d2137fc5519 affinity1
nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=27abe662-c37e-431c-9715-0d2137fc5519 affinity2

Ensure the instances are properly mapped to the group.

nova server-group-get 27abe662-c37e-431c-9715-0d2137fc5519 
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+
| Id                                   | Name                   | Policies      | Members                                                                            | Metadata |
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+
| 27abe662-c37e-431c-9715-0d2137fc5519 | instancestogethergroup | [u'affinity'] | [u'b8b72a0a-c981-430e-a909-13d23d928655', u'8affefff-0072-47e3-8d11-2ddf26e48b82'] | {}       |
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+

Once the instances are running, verify that they share the same hypervisor, as specified by the affinity policy.

# nova show affinity1 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az
# nova show affinity2 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az  

Now we create a group based on the anti-affinity policy.

nova server-group-create farinstancesgroup anti-affinity
+--------------------------------------+-------------------+--------------------+---------+----------+
| Id                                   | Name              | Policies           | Members | Metadata |
+--------------------------------------+-------------------+--------------------+---------+----------+
| 988a9fd2-3a97-481e-b083-fee36b33009d | farinstancesgroup | [u'anti-affinity'] | []      | {}       |
+--------------------------------------+-------------------+--------------------+---------+----------+

Launch two instances and attach them to the anti-affinity group.

nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=988a9fd2-3a97-481e-b083-fee36b33009d anti-affinity1
nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=988a9fd2-3a97-481e-b083-fee36b33009d anti-affinity2

Ensure the instances are in the anti-affinity group

nova server-group-get 988a9fd2-3a97-481e-b083-fee36b33009d 
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+
| Id                                   | Name              | Policies           | Members                                                                            | Metadata |
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+
| 988a9fd2-3a97-481e-b083-fee36b33009d | farinstancesgroup | [u'anti-affinity'] | [u'cfb45193-9a7c-436f-ac2d-59a7a9a854ae', u'25dc8671-0c9a-4774-90cf-7394380f91ef'] | {}       |
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+

Once the instances are running, verify that they are on different hypervisors, as specified by the anti-affinity policy.

# nova show anti-affinity1 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az
# nova show anti-affinity2 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1az   
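
Keep in mind that the anti-affinity policy is a hard constraint: if no hypervisor is left that satisfies it (for example, booting a third member of this group when only two compute nodes are available), the instance will fail to schedule with a "No valid host was found" error. A quick way to check this (the instance name here is just an example):

nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=988a9fd2-3a97-481e-b083-fee36b33009d anti-affinity3
# The instance ends up in ERROR state; the fault field shows the scheduling failure
nova show anti-affinity3 | grep -E "status|fault"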

Regards, Eduardo Gonzalez

Migrate from Keystone v2.0 to Keystone v3 in OpenStack Liberty

Migrating from Keystone v2.0 to v3 isn't as easy as just changing the endpoints in the database; every service must be configured to authenticate against Keystone v3.

I've been working on this for the past few days looking for a method, with the purpose of making life easier for operators who need this kind of migration.
I have to thank Adam Young for his work; I followed his blog to get a first configuration idea, and after that I configured all the core services to make use of Keystone v3.
If you want to check Adam's blog, follow this link: http://adam.younglogic.com/2015/05/rdo-v3-only/

I used OpenStack Liberty installed with RDO Packstack on CentOS 7 servers.
The example IP used is 192.168.200.178; use your own according to your needs.
The password used for all services is PASSWD1234; use your own passwords, which you can find in the Packstack answer file.

Horizon

First we configure Horizon to use Keystone v3 as shown below:

vi /etc/openstack-dashboard/local_settings

OPENSTACK_API_VERSIONS = {
    "identity": 3
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

Keystone

Check your current identity endpoints

mysql  --user keystone_admin --password=PASSWD1234  keystone -e "select interface, url from endpoint where service_id =  (select id from service where service.type = 'identity');"

Change your public, admin and internal endpoints so they end in /v3 instead of /v2.0

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:5000/v3' where  interface ='internal' and  service_id =  (select id from service where service.type = 'identity');"

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:5000/v3' where  interface ='public' and  service_id =  (select id from service where service.type = 'identity');"

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:35357/v3' where  interface ='admin' and  service_id =  (select id from service where service.type = 'identity');"

Ensure the endpoints are properly created

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "select interface, url from endpoint where service_id =  (select id from service where service.type = 'identity');"

Create a source file or edit keystonerc_admin with the following data

vi v3_keystone

unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=PASSWD1234
export OS_AUTH_URL=http://192.168.200.178:5000/v3
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_IDENTITY_API_VERSION=3
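
After sourcing the new file you can verify that v3 authentication works, for example (assuming python-openstackclient is installed):

source v3_keystone
# Should return a token issued by the v3 API
openstack token issue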

Comment out both pipelines, in the public_api and admin_api sections

vi /usr/share/keystone/keystone-dist-paste.ini

[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
#pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service

[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
#pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension s3_extension crud_extension admin_service

Comment out the v2.0 entries in the composite:main and composite:admin sections.

[composite:main]
use = egg:Paste#urlmap
#/v2.0 = public_api
/v3 = api_v3
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
#/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api

Restart httpd to apply changes

systemctl restart httpd

Check whether Keystone and Horizon are working properly.
The command below should print a user list; if not, review the configuration from the previous steps.

openstack user list

Glance

Edit the following files, with the content below:

vi /etc/glance/glance-api.conf 
vi /etc/glance/glance-registry.conf 
vi /etc/glance/glance-cache.conf 

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = glance
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Comment out the following lines:

#auth_host=127.0.0.1
#auth_port=35357
#auth_protocol=http
#identity_uri=http://192.168.200.178:35357
#admin_user=glance
#admin_password=PASSWD1234
#admin_tenant_name=services

These lines should be commented out in the keystone_authtoken section of all the other OpenStack core services as well.
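
If editing every file by hand gets tedious, the same changes can be applied with openstack-config (the crudini wrapper shipped in the openstack-utils package); a sketch for glance-api.conf, assuming that tool is installed:

# Set the new v3-style auth options
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://192.168.200.178:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password PASSWD1234
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name services
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://192.168.200.178:5000
# Drop the old v2.0-style options instead of commenting them out
openstack-config --del /etc/glance/glance-api.conf keystone_authtoken admin_user
openstack-config --del /etc/glance/glance-api.conf keystone_authtoken admin_password
openstack-config --del /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name
openstack-config --del /etc/glance/glance-api.conf keystone_authtoken identity_uri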

Edit the files below and comment out the lines inside the keystone_authtoken section.

vi /usr/share/glance/glance-api-dist.conf 
vi /usr/share/glance/glance-registry-dist.conf 

[keystone_authtoken]
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http

Restart glance services

openstack-service restart glance

Ensure the Glance service is working

openstack image list

Nova

Edit the file below and comment out the lines inside keystone_authtoken

vi /usr/share/nova/nova-dist.conf

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http

Edit nova.conf and add the auth settings inside keystone_authtoken; don't forget to comment out the lines related to the old auth method, the same ones commented out in the Glance section.

vi /etc/nova/nova.conf

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = nova
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Configure Nova authentication against Neutron in the [neutron] section

[neutron]
          
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Restart nova services to apply changes

openstack-service restart nova

Check if nova works

openstack hypervisor list

Neutron

Comment out or remove the following entries in api-paste.ini and add the new auth lines

vi /etc/neutron/api-paste.ini 

[filter:authtoken]
#identity_uri=http://192.168.200.178:35357
#admin_user=neutron
#admin_password=PASSWD1234
#auth_uri=http://192.168.200.178:5000/v2.0
#admin_tenant_name=services

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Configure v3 authentication for the metadata service; remember to comment out the old auth lines

vi /etc/neutron/metadata_agent.ini

[DEFAULT]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Configure neutron server with v3 auth

vi /etc/neutron/neutron.conf

nova_admin_auth_url = http://192.168.200.178:5000
# nova_admin_tenant_id =1fb93c84c6474c5ea92c0ed5f7d4a6a7
nova_admin_tenant_name = services


[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#auth_uri = http://192.168.200.178:5000/v2.0
#identity_uri = http://192.168.200.178:35357
#admin_tenant_name = services
#admin_user = neutron
#admin_password = PASSWD1234

Configure Neutron authentication against the Nova service in the [nova] section

[nova]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = nova
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Restart neutron services to apply changes

openstack-service restart neutron

Test that Neutron is working correctly

openstack network list

Cinder

Edit api-paste.ini with the following content

vi /etc/cinder/api-paste.ini 

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = cinder
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000
#admin_tenant_name=services
#auth_uri=http://192.168.200.178:5000/v2.0
#admin_user=cinder
#identity_uri=http://192.168.200.178:35357
#admin_password=PASSWD1234

Restart cinder services to apply changes

openstack-service restart cinder

Ensure cinder is properly running

openstack volume create --size 1 testvolume
openstack volume list

Now you can check whether Nova is working fine: create an instance and ensure it reaches the ACTIVE state.

openstack server create --flavor m1.tiny --image cirros --nic net-id=a1aa6336-9ae2-4ffb-99f5-1b6d1130989c testinstance
openstack server list

If any errors occur, review the configuration files.

Swift

Configure the proxy server to authenticate against Keystone v3

vi /etc/swift/proxy-server.conf

[filter:authtoken]
log_name = swift
signing_dir = /var/cache/swift
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = swift
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#auth_uri = http://192.168.200.178:5000/v2.0
#identity_uri = http://192.168.200.178:35357
#admin_tenant_name = services
#admin_user = swift
#admin_password = PASSWD1234
delay_auth_decision = 1
cache = swift.cache
include_service_catalog = False

Restart swift services to apply changes

openstack-service restart swift

Swift commands should be issued with python-openstackclient instead of swiftclient.
If you use swiftclient, the -V 3 (auth version) option must be used in order to avoid issues.
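
For example, a swiftclient call with explicit v3 credentials would look roughly like this (a sketch; adjust the credentials to your environment):

swift -V 3 --os-auth-url http://192.168.200.178:5000/v3 \
  --os-username admin --os-password PASSWD1234 \
  --os-project-name admin --os-user-domain-name Default \
  --os-project-domain-name Default list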

Check if swift works fine

openstack container create testcontainer

Ceilometer

Configure the Ceilometer service to authenticate against Keystone v3

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = ceilometer
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

[service_credentials]

os_auth_url = http://controller:5000/v3
os_username = ceilometer
os_tenant_name = services
os_password = PASSWD1234
os_endpoint_type = internalURL
os_region_name = RegionOne

Restart ceilometer services

openstack-service restart ceilometer

Check Ceilometer functionality

ceilometer statistics -m memory

Heat

Configure Heat authentication; since trusts are not stable, use the password auth method

vi /etc/heat/heat.conf

# Allowed values: password, trusts
#deferred_auth_method = trusts
deferred_auth_method = password

Configure auth_uri and the keystone_authtoken section

# From heat.common.config
#
# Unversioned keystone url in format like http://0.0.0.0:5000. (string value)
#auth_uri =
auth_uri = http://192.168.200.178:5000

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = heat
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#admin_user=heat
#admin_password=PASSWD1234
#admin_tenant_name=services
#identity_uri=http://192.168.200.178:35357
#auth_uri=http://192.168.200.178:5000/v2.0

Comment out or remove the heat-dist auth entries in order to avoid conflicts with your config files

vi /usr/share/heat/heat-dist.conf 

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#auth_uri = http://127.0.0.1:5000/v2.0
#signing_dir = /tmp/keystone-signing-heat

Restart heat services to apply changes

openstack-service restart heat

Ensure Heat authentication is properly configured by creating a stack from a simple Heat template (a minimal sample.yaml is sketched below)

heat stack-create --template-file sample.yaml teststack
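
The sample.yaml referenced above can be any minimal template; a sketch that only needs Heat itself (no images or networks involved) could be:

heat_template_version: 2015-10-15

description: Minimal template to verify Heat authentication

resources:
  test_string:
    type: OS::Heat::RandomString
    properties:
      length: 8

outputs:
  random_value:
    description: Generated random string
    value: { get_attr: [test_string, value] }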

Most issues occur in the authentication between the Nova and Neutron services; if instances do not launch as expected, review the [nova] and [neutron] sections.

Best regards, Eduardo Gonzalez
