OpenStack Keystone Zero-Downtime upgrade process (N to O)

This blog post shows the Keystone upgrade procedure from the OpenStack Newton release to Ocata with zero downtime.

If doing this in production, please read the release notes, ensure a proper configuration, take database backups and test the upgrade a thousand times.

A Keystone upgrade normally needs to stop one node in order to use it as the upgrade server.
In a PoC this is not an issue, but in a production environment Keystone load may be intensive, and stopping a node for a while may degrade the performance of the other nodes more than expected.
For this reason I prefer to orchestrate the upgrade from an external Docker container. With this method all nodes remain fully running almost all the time.

  • The new container won’t start any service; it will just sync the database schema to the new Keystone version, avoiding stopping a node to orchestrate the upgrade.
  • The Docker image is provided by the OpenStack Kolla project. If you are already using Kolla, this upgrade is not needed, as kolla-ansible already provides an upgrade method.
  • At the time of writing, Ocata packages had not been released into the stable repositories, so I use the DLRN repositories.
  • Once Ocata is released, please do not use DLRN; use the stable packages instead.
  • If available, use a stable Ocata Docker image (tag 4.0.x); this avoids the repository configuration and package upgrades below.
  • NOTE: The upgrade may need more steps depending on your own configuration, e.g. if using Fernet tokens more steps are necessary during the upgrade.
  • All Keystone nodes are behind HAProxy.
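As a sketch, such an HAProxy backend could look like the following (the VIP, server names and IPs here are hypothetical; adapt them to your environment):

```
# haproxy.cfg (sketch) - Keystone public API load balanced across nodes.
# VIP, server names and IPs are hypothetical.
listen keystone_public
    bind 192.168.100.10:5000
    balance roundrobin
    option httpchk GET /v3
    server keystone-node1 192.168.100.215:5000 check inter 2000 rise 2 fall 5
    server keystone-node2 192.168.100.170:5000 check inter 2000 rise 2 fall 5
```

The health check lets HAProxy stop sending traffic to a node while its Keystone service is down.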

 

Prepare the upgrade

Start a Keystone Docker container with host networking (needed to communicate with the database nodes directly) and as the root user (needed to install packages).

(host)# docker run -ti --net host -u 0 kolla/centos-binary-keystone:3.0.2 bash

Download Delorean CentOS trunk repositories

(keystone-upgrade)# curl -Lo /etc/yum.repos.d/delorean.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tested/delorean.repo
(keystone-upgrade)# curl -Lo /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7/delorean-deps.repo

Disable Newton repository

(keystone-upgrade)# yum-config-manager --disable centos-openstack-newton

Ensure the Newton repository is no longer used by the system

(keystone-upgrade)# yum repolist | grep -i openstack
delorean                        delorean-openstack-glance-0bf9d805886c2  565+255

Update all packages in the Docker container to bump the Keystone version to Ocata.

(keystone-upgrade)# yum clean all && yum update -y

Configure the keystone.conf file; these are my settings. Review your configuration and ensure everything is correct, otherwise you may cause issues in the database.
An important option is default_domain_id; this value provides backward compatibility for users created under the default domain.

(keystone-upgrade)# egrep ^[^#] /etc/keystone/keystone.conf 
[DEFAULT]
debug = False
log_file = /var/log/keystone/keystone.log
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
[database]
connection = mysql+pymysql://keystone:ickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB@192.168.100.10:3306/keystone
max_retries = -1
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.100.215:11211,192.168.100.170:11211
[identity]
default_domain_id = default
[token]
provider = uuid

Check the migration versions in the database.
As you will notice, contract/data_migrate/expand are at the same version

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;" 
Warning: Using a password on the command line interface can be insecure.
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |       4 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |       4 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |       4 |
+-----------------------+--------------------------------------------------------------------------+---------+

Before starting to upgrade the database schema, you will need to grant SUPER privileges to the keystone database user or set log_bin_trust_function_creators to True.
In my opinion it is safer to set the value to True; I don’t want keystone running with SUPER privileges.

(mariadb)# mysql -uroot -pnkLMrBibfMTRqOGBAP3UAxdO4kOFfEaPptGM5UDL -h192.168.100.10 keystone -e "set global log_bin_trust_function_creators=1;"

Now use Rally, Tempest or some other tool to test/benchmark the Keystone service during the upgrade.
If you don’t want to use one of those tools, just use this for loop.

(host)# for i in {1000..6000} ; do openstack user create --password $i $i; done

 

Start Upgrade

Check the database status before the upgrade using keystone-manage doctor; this may raise issues with the configuration. Some of them may be ignored (please ensure it is really not an issue before ignoring it).
As an example, I’m not using Fernet tokens, so errors appear about a missing folder.

(keystone-upgrade)# keystone-manage doctor

Remove expired tokens

(keystone-upgrade)# keystone-manage token_flush

Now expand the database schema to the latest version; you can follow the status in keystone.log.
Check the logs for errors before jumping to the next step.

(keystone-upgrade)# keystone-manage db_sync --expand


2017-01-31 13:42:02.772 306 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:42:03.004 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.005 306 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:42:03.310 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.310 306 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:42:03.670 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.671 306 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:42:03.984 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.985 306 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:42:04.185 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:04.185 306 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:42:07.202 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:07.202 306 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:42:07.481 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:07.481 306 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:42:11.334 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:11.334 306 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:42:11.560 306 INFO migrate.versioning.api [-] done

After expanding the database, migrate it to the latest version.
Ensure there are no errors in the Keystone logs.

(keystone-upgrade)# keystone-manage db_sync --migrate

#keystone.log
2017-01-31 13:42:58.771 314 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:42:58.943 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:58.943 314 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:42:59.143 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.143 314 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:42:59.340 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.341 314 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:42:59.698 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.699 314 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:42:59.852 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.852 314 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:43:00.135 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.135 314 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:43:00.545 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.545 314 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:43:00.703 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.703 314 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:43:00.854 314 INFO migrate.versioning.api [-] done

Now look at the migrate_version table again; you will notice that expand and data_migrate are at the latest version, but contract is still at the previous version.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;"
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |       4 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |      13 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |      13 |
+-----------------------+--------------------------------------------------------------------------+---------+

 

Every Keystone node, one by one

Go to the Keystone nodes.
Stop the Keystone service; in my case it runs as a WSGI application inside Apache

(keystone_nodes)# systemctl stop httpd

Configure the Ocata repositories as done in the Docker container.
Then update the packages. If Keystone shares the node with other OpenStack services, do not update all packages, as that would break the other services.
Update only the required packages.

(keystone_nodes)# yum clean all && yum update -y
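On a shared node, a selective update along these lines limits the upgrade to Keystone (the exact package names are an assumption and may vary with your distribution and installed services):

```
(keystone_nodes)# yum clean all
(keystone_nodes)# yum update -y openstack-keystone python-keystone \
                  python-keystoneclient python-keystonemiddleware
```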

Edit the Keystone configuration file to the desired state. Your configuration may differ.

(keystone_nodes)# egrep ^[^#] /etc/keystone/keystone.conf 
[DEFAULT]
debug = False
log_file = /var/log/keystone/keystone.log
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
[database]
connection = mysql+pymysql://keystone:ickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB@192.168.100.10:3306/keystone
max_retries = -1
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.100.215:11211,192.168.100.170:11211
[identity]
default_domain_id = default
[token]
provider = uuid

Start Keystone service.

(keystone_nodes)# systemctl start httpd

 

Finish Upgrade

After all the nodes are updated to the latest version (please ensure all nodes are using the latest packages, otherwise this will fail), contract the Keystone database schema.
Look at keystone.log for errors.

(keystone-upgrade)# keystone-manage db_sync --contract

keystone.log

2017-01-31 13:57:52.164 322 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:57:52.379 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:52.379 322 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:57:52.969 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:52.969 322 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:57:53.462 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.462 322 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:57:53.793 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.793 322 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:57:53.957 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.957 322 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:57:54.111 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:54.112 322 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:57:54.853 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:54.853 322 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:57:56.727 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:56.728 322 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:57:59.529 322 INFO migrate.versioning.api [-] done

Now if we look at the migrate_version table, we will see that the contract version is the latest and matches the others (ensure all three are at the same version).
This means the database upgrade has completed successfully.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;"
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |      13 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |      13 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |      13 |
+-----------------------+--------------------------------------------------------------------------+---------+

Unset the log_bin_trust_function_creators value.

(mariadb)# mysql -uroot -pnkLMrBibfMTRqOGBAP3UAxdO4kOFfEaPptGM5UDL -h192.168.100.10 keystone -e "set global log_bin_trust_function_creators=0;"

After finishing the upgrade, the Rally tests should not show any errors. If using HAProxy to load balance the Keystone service, some errors may happen due to connections dropping while a Keystone service is stopped and traffic is re-balanced to another Keystone node. This can be avoided by putting the node being updated into maintenance mode in the HAProxy backend.
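As a sketch of that maintenance mode (assuming the HAProxy admin stats socket is enabled; the backend and server names here are hypothetical), the node can be drained before stopping httpd and re-enabled after the upgrade:

```
# Requires in haproxy.cfg:  stats socket /var/lib/haproxy/stats level admin
(haproxy)# echo "disable server keystone_public/keystone-node1" | socat stdio /var/lib/haproxy/stats
# ... upgrade the node, then bring it back:
(haproxy)# echo "enable server keystone_public/keystone-node1" | socat stdio /var/lib/haproxy/stats
```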

I have to thank the Keystone team in the #openstack-keystone IRC channel for the help provided with a couple of issues.

Regards, Eduardo Gonzalez

Migrate from Keystone v2.0 to Keystone v3 in OpenStack Liberty

Migrating from Keystone v2.0 to v3 isn’t as easy as just changing the endpoints in the database; every service must be configured to authenticate against Keystone v3.

I’ve been working on this for the past few days looking for a method, with the purpose of making life easier for operators who need this kind of migration.
I have to thank Adam Young for his work; I followed his blog to get a first configuration idea, and after that I configured all core services to make use of Keystone v3.
If you want to check Adam’s blog, follow this link: http://adam.younglogic.com/2015/05/rdo-v3-only/

I used OpenStack Liberty installed with RDO packstack on CentOS 7 servers.
The example IP used is 192.168.200.178; use your own according to your needs.
The password used for all services is PASSWD1234; use your own passwords, which you can find in the packstack answer file.

Horizon

First we configure Horizon with keystone v3 as below:

vi /etc/openstack-dashboard/local_settings

OPENSTACK_API_VERSIONS = {
    "identity": 3
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

keystone

Check your current identity endpoints

mysql  --user keystone_admin --password=PASSWD1234  keystone -e "select interface, url from endpoint where service_id =  (select id from service where service.type = 'identity');"

Change your public, admin and internal endpoints to end in /v3 instead of /v2.0

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:5000/v3' where  interface ='internal' and  service_id =  (select id from service where service.type = 'identity');"

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:5000/v3' where  interface ='public' and  service_id =  (select id from service where service.type = 'identity');"

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:35357/v3' where  interface ='admin' and  service_id =  (select id from service where service.type = 'identity');"

Ensure the endpoints are properly created

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "select interface, url from endpoint where service_id =  (select id from service where service.type = 'identity');"

Create a source file or edit keystonerc_admin with the following data

vi v3_keystone

unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=PASSWD1234
export OS_AUTH_URL=http://192.168.200.178:5000/v3
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_IDENTITY_API_VERSION=3

Comment out both pipelines, in public_api and admin_api

vi /usr/share/keystone/keystone-dist-paste.ini

[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
#pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service

[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
#pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension s3_extension crud_extension admin_service

Comment out the v2.0 entries in the composite:main and composite:admin sections.

[composite:main]
use = egg:Paste#urlmap
#/v2.0 = public_api
/v3 = api_v3
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
#/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api

Restart httpd to apply changes

systemctl restart httpd

Check whether Keystone and Horizon are working properly.
The command below should print a user list; if not, review the configuration from the previous steps

openstack user list

Glance

Edit the following files, with the content below:

vi /etc/glance/glance-api.conf 
vi /etc/glance/glance-registry.conf 
vi /etc/glance/glance-cache.conf 

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = glance
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Comment out the following lines:

#auth_host=127.0.0.1
#auth_port=35357
#auth_protocol=http
#identity_uri=http://192.168.200.178:35357
#admin_user=glance
#admin_password=PASSWD1234
#admin_tenant_name=services

These lines should also be commented out in the keystone_authtoken section of all the other OpenStack core services.

Edit the files below and comment out the lines inside the keystone_authtoken section.

vi /usr/share/glance/glance-api-dist.conf 
vi /usr/share/glance/glance-registry-dist.conf 

[keystone_authtoken]
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http

Restart glance services

openstack-service restart glance

Ensure glance service is working

openstack image list

Nova

Edit the file below and comment out the lines inside keystone_authtoken

vi /usr/share/nova/nova-dist.conf

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http

Edit nova.conf and add the auth content inside keystone_authtoken; don’t forget to comment out the lines from the old auth method, as was done in the Glance section.

vi /etc/nova/nova.conf

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = nova
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Configure nova authentication against neutron

[neutron]
          
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Restart nova services to apply changes

openstack-service restart nova

Check if nova works

openstack hypervisor list

Neutron

Comment out or remove the following entries in api-paste.ini and add the new auth lines

vi /etc/neutron/api-paste.ini 

[filter:authtoken]
#identity_uri=http://192.168.200.178:35357
#admin_user=neutron
#admin_password=PASSWD1234
#auth_uri=http://192.168.200.178:5000/v2.0
#admin_tenant_name=services

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Configure v3 authentication for the metadata service; remember to comment out the old auth lines

vi /etc/neutron/metadata_agent.ini

[DEFAULT]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Configure neutron server with v3 auth

vi /etc/neutron/neutron.conf

nova_admin_auth_url = http://192.168.200.178:5000
# nova_admin_tenant_id =1fb93c84c6474c5ea92c0ed5f7d4a6a7
nova_admin_tenant_name = services


[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#auth_uri = http://192.168.200.178:5000/v2.0
#identity_uri = http://192.168.200.178:35357
#admin_tenant_name = services
#admin_user = neutron
#admin_password = PASSWD1234

Configure neutron auth against nova services

[nova]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = nova
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Restart neutron services to apply changes

openstack-service restart neutron

Test correct Neutron functionality

openstack network list

Cinder

Edit api-paste.ini with the following content

vi /etc/cinder/api-paste.ini 

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = cinder
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000
#admin_tenant_name=services
#auth_uri=http://192.168.200.178:5000/v2.0
#admin_user=cinder
#identity_uri=http://192.168.200.178:35357
#admin_password=PASSWD1234

Restart cinder services to apply changes

openstack-service restart cinder

Ensure cinder is properly running

openstack volume create --size 1 testvolume
openstack volume list

Now you can check whether Nova is working fine: create an instance and ensure it reaches ACTIVE state.

openstack server create --flavor m1.tiny --image cirros --nic net-id=a1aa6336-9ae2-4ffb-99f5-1b6d1130989c testinstance
openstack server list

If any error occurs, review configuration files

Swift

Configure the proxy server to authenticate against Keystone v3

vi /etc/swift/proxy-server.conf

[filter:authtoken]
log_name = swift
signing_dir = /var/cache/swift
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = swift
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#auth_uri = http://192.168.200.178:5000/v2.0
#identity_uri = http://192.168.200.178:35357
#admin_tenant_name = services
#admin_user = swift
#admin_password = PASSWD1234
delay_auth_decision = 1
cache = swift.cache
include_service_catalog = False

Restart swift services to apply changes

openstack-service restart swift

Swift commands must be issued with python-openstackclient instead of swiftclient.
If done with swiftclient, the -V 3 option must be used in order to avoid issues.
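For example, a container listing with swiftclient could look like this (the flags simply mirror the v3 auth settings used above):

```
(host)# swift -V 3 --os-auth-url http://192.168.200.178:5000/v3 \
        --os-username admin --os-password PASSWD1234 \
        --os-project-name admin --os-user-domain-name Default \
        --os-project-domain-name Default list
```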

Check if swift works fine

openstack container create testcontainer

Ceilometer

Configure the Ceilometer service to authenticate against Keystone v3

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = ceilometer
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

[service_credentials]

os_auth_url = http://192.168.200.178:5000/v3
os_username = ceilometer
os_tenant_name = services
os_password = PASSWD1234
os_endpoint_type = internalURL
os_region_name = RegionOne

Restart ceilometer services

openstack-service restart ceilometer

Check Ceilometer functionality

ceilometer statistics -m memory

Heat

Configure Heat authentication; since trusts are not stable, use the password auth method

vi /etc/heat/heat.conf

# Allowed values: password, trusts
#deferred_auth_method = trusts
deferred_auth_method = password

Configure auth_uri and keystone_authtoken section

# From heat.common.config
#
# Unversioned keystone url in format like http://0.0.0.0:5000. (string value)
#auth_uri =
auth_uri = http://192.168.200.178:5000

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = heat
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#admin_user=heat
#admin_password=PASSWD1234
#admin_tenant_name=services
#identity_uri=http://192.168.200.178:35357
#auth_uri=http://192.168.200.178:5000/v2.0

Comment out or remove the heat-dist auth entries to avoid conflicts with your config files

vi /usr/share/heat/heat-dist.conf 

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#auth_uri = http://127.0.0.1:5000/v2.0
#signing_dir = /tmp/keystone-signing-heat

Restart heat services to apply changes

openstack-service restart heat

Ensure heat authentication is properly configured with a simple heat template

heat stack-create --template-file sample.yaml teststack
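The sample.yaml referenced here can be any minimal template. For example, a hypothetical do-nothing template just to exercise authentication:

```
# sample.yaml - minimal HOT template used only to verify Heat auth
# (hypothetical example, not from the original post)
heat_template_version: 2015-04-30

description: Minimal stack to test Heat authentication

resources:
  test_string:
    type: OS::Heat::RandomString
    properties:
      length: 8
```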

Most issues occur in the authentication between the Nova and Neutron services; if instances do not launch as expected, review the [nova] and [neutron] sections.

Best regards, Eduardo Gonzalez
