OpenStack Keystone Zero-Downtime upgrade process (N to O)

This blog post shows the Keystone upgrade procedure from the OpenStack Newton release to Ocata with zero downtime.

If you are doing this in production, please read the release notes, ensure a proper configuration, make database backups, and test the upgrade a thousand times.

A standard Keystone upgrade needs one node to be stopped so it can be used as the upgrade server.
In a PoC this is not an issue, but in a production environment Keystone load may be intensive, and stopping a node for a while may degrade the performance of the other nodes more than expected.
For this reason I prefer to orchestrate the upgrade from an external Docker container. With this method all nodes stay fully running almost all the time.

  • The new container won’t start any service; it will only sync the database schema to the new Keystone version, avoiding having to stop a node to orchestrate the upgrade.
  • The Docker image is provided by the OpenStack Kolla project. If you are already using Kolla, this upgrade procedure is not needed, as kolla-ansible already provides an upgrade method.
  • At the time of writing, Ocata packages had not been released into the stable repositories. For this reason I use DLRN repositories.
  • Once Ocata is released, please do not use DLRN; use the stable packages instead.
  • Use a stable Ocata Docker image with tag 4.0.x if available; this avoids the repository configuration and package upgrades below.
  • NOTE: The upgrade may need more steps depending on your configuration, e.g. if using fernet tokens more steps are necessary during the upgrade (see the sketch after this list).
  • All Keystone nodes are behind HAProxy.
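
As a rough illustration of those extra fernet steps (not needed in this walkthrough, since the deployment uses UUID tokens): fernet keys live on disk and must be rotated and synchronized across all Keystone nodes, along these lines (the target host is a placeholder):

(keystone_nodes)# keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
(keystone_nodes)# rsync -a /etc/keystone/fernet-keys/ <other-keystone-node>:/etc/keystone/fernet-keys/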

 

Prepare the upgrade

Start a Keystone Docker container with host networking (needed to communicate directly with the database nodes) and as the root user (needed to install packages).

(host)# docker run -ti --net host -u 0 kolla/centos-binary-keystone:3.0.2 bash

Download the Delorean CentOS trunk repository files

(keystone-upgrade)# curl -Lo /etc/yum.repos.d/delorean.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tested/delorean.repo
(keystone-upgrade)# curl -Lo /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7/delorean-deps.repo

Disable the Newton repository

(keystone-upgrade)# yum-config-manager --disable centos-openstack-newton

Ensure the Newton repository is no longer used by the system

(keystone-upgrade)# yum repolist | grep -i openstack
delorean                        delorean-openstack-glance-0bf9d805886c2  565+255

Update all packages in the Docker container to bump the Keystone version to Ocata.

(keystone-upgrade)# yum clean all && yum update -y

Configure the keystone.conf file; these are my settings. Review your configuration and ensure everything is correct, otherwise you may cause issues in the database.
An important option is default_domain_id; this value keeps backward compatibility with users created under the default domain.

(keystone-upgrade)# egrep ^[^#] /etc/keystone/keystone.conf 
[DEFAULT]
debug = False
log_file = /var/log/keystone/keystone.log
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
[database]
connection = mysql+pymysql://keystone:ickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB@192.168.100.10:3306/keystone
max_retries = -1
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.100.215:11211,192.168.100.170:11211
[identity]
default_domain_id = default
[token]
provider = uuid

Check the migration versions in the database.
As you will notice, contract/data_migrate/expand are all at the same version.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;" 
Warning: Using a password on the command line interface can be insecure.
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |       4 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |       4 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |       4 |
+-----------------------+--------------------------------------------------------------------------+---------+

Before starting to upgrade the database schema, you will need to grant SUPER privileges to the keystone database user or set log_bin_trust_function_creators to True.
In my opinion it is safer to set the value to True; I don’t want Keystone running with SUPER privileges.

(mariadb)# mysql -uroot -pnkLMrBibfMTRqOGBAP3UAxdO4kOFfEaPptGM5UDL -h192.168.100.10 keystone -e "set global log_bin_trust_function_creators=1;"

Now use Rally, Tempest or some other tool to test/benchmark the Keystone service during the upgrade.
If you don’t want to use one of those tools, just use this for loop.

(host)# for i in {1000..6000} ; do openstack user create --password $i $i; done

 

Start Upgrade

Check the database status before the upgrade using Doctor; this may surface issues in your configuration. Some of them may be ignored (please ensure it is not a real issue before ignoring it).
For example, I’m not using fernet tokens, so errors appear about a missing fernet keys folder.

(keystone-upgrade)# keystone-manage doctor

Remove expired tokens

(keystone-upgrade)# keystone-manage token_flush
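
In production this flush is usually scheduled rather than run by hand; a typical cron entry for the keystone user would be something like the following (the path and schedule are assumptions, check your packaging):

(keystone_nodes)# crontab -u keystone -l
1 0 * * * /usr/bin/keystone-manage token_flush > /dev/null 2>&1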

Now expand the database schema to the latest version; you can follow the status in keystone.log.
Check the logs for errors before jumping to the next step.

(keystone-upgrade)# keystone-manage db_sync --expand

#keystone.log
2017-01-31 13:42:02.772 306 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:42:03.004 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.005 306 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:42:03.310 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.310 306 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:42:03.670 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.671 306 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:42:03.984 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.985 306 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:42:04.185 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:04.185 306 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:42:07.202 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:07.202 306 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:42:07.481 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:07.481 306 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:42:11.334 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:11.334 306 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:42:11.560 306 INFO migrate.versioning.api [-] done

After expanding the schema, run the data migrations to the latest version.
Ensure there are no errors in the Keystone logs.

(keystone-upgrade)# keystone-manage db_sync --migrate

#keystone.log
2017-01-31 13:42:58.771 314 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:42:58.943 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:58.943 314 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:42:59.143 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.143 314 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:42:59.340 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.341 314 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:42:59.698 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.699 314 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:42:59.852 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.852 314 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:43:00.135 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.135 314 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:43:00.545 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.545 314 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:43:00.703 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.703 314 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:43:00.854 314 INFO migrate.versioning.api [-] done

Now look at the migrate_version table; you will notice that expand and data_migrate are at the latest version, but contract is still at the previous version.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;"
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |       4 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |      13 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |      13 |
+-----------------------+--------------------------------------------------------------------------+---------+

 

Every Keystone node, one by one

Go to the Keystone nodes.
Stop the Keystone service; in my case it runs as a WSGI process inside Apache.

(keystone_nodes)# systemctl stop httpd

Configure the Ocata repositories as done in the Docker container.
Then update the packages. If Keystone shares the node with other OpenStack services, do not update all packages, as that would break the other services; update only the required ones (see the sketch after the command below).
On a dedicated Keystone node you can simply update everything:

(keystone_nodes)# yum clean all && yum update -y
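
On a shared node, a more targeted update could look roughly like this; the package names are illustrative, check what your distribution actually ships:

(keystone_nodes)# yum clean all
(keystone_nodes)# yum update -y openstack-keystone python-keystone python-keystonemiddleware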

Set the Keystone configuration file to the desired state. Your configuration may differ.

(keystone_nodes)# egrep ^[^#] /etc/keystone/keystone.conf 
[DEFAULT]
debug = False
log_file = /var/log/keystone/keystone.log
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
[database]
connection = mysql+pymysql://keystone:ickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB@192.168.100.10:3306/keystone
max_retries = -1
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.100.215:11211,192.168.100.170:11211
[identity]
default_domain_id = default
[token]
provider = uuid

Start the Keystone service.

(keystone_nodes)# systemctl start httpd

 

Finish Upgrade

After all the nodes are updated to the latest version (please ensure all nodes are using the latest packages, otherwise this step will fail), contract the Keystone database schema.
Look at keystone.log for errors.

(keystone-upgrade)# keystone-manage db_sync --contract

#keystone.log

2017-01-31 13:57:52.164 322 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:57:52.379 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:52.379 322 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:57:52.969 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:52.969 322 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:57:53.462 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.462 322 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:57:53.793 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.793 322 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:57:53.957 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.957 322 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:57:54.111 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:54.112 322 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:57:54.853 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:54.853 322 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:57:56.727 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:56.728 322 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:57:59.529 322 INFO migrate.versioning.api [-] done

Now if we look at the migrate_version table, we will see that the contract version is the latest and matches the other versions (ensure all are at the same version).
This means the database upgrade has completed successfully.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;"
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |      13 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |      13 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |      13 |
+-----------------------+--------------------------------------------------------------------------+---------+

Reset the log_bin_trust_function_creators value.

(mariadb)# mysql -uroot -pnkLMrBibfMTRqOGBAP3UAxdO4kOFfEaPptGM5UDL -h192.168.100.10 keystone -e "set global log_bin_trust_function_creators=0;"

After finishing the upgrade, the Rally tests should not report any errors. If using HAProxy to load balance the Keystone service, some errors may happen due to connections dropping while a Keystone service is stopped and traffic is re-balanced to another Keystone node. This can be avoided by putting the node being updated into maintenance mode in the HAProxy backend.
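
A minimal sketch of that maintenance-mode step, assuming the HAProxy stats socket is enabled (e.g. "stats socket /var/run/haproxy.sock level admin" in haproxy.cfg) and that the backend is named keystone_api with a server keystone1 (both names are placeholders for your configuration):

(haproxy)# echo "disable server keystone_api/keystone1" | socat stdio /var/run/haproxy.sock
# ... upgrade the node, then put it back in rotation ...
(haproxy)# echo "enable server keystone_api/keystone1" | socat stdio /var/run/haproxy.sock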

I have to thank the Keystone team in the #openstack-keystone IRC channel for the help provided with a couple of issues.

Regards, Eduardo Gonzalez

Spacewalk (Red Hat Satellite v5) in a Docker container PoC

Spacewalk is the upstream project providing the Linux systems management layer on which Red Hat Satellite was based, at least until Satellite version 5. Newer versions are no longer based on Spacewalk; instead, Satellite is a federation of several upstream open source projects, including Katello, Foreman, Pulp, and Candlepin.

Some weeks ago, a friend asked me if I knew of a Docker container image for Satellite.
I did not find any. What I found were some Spacewalk images, but sadly none of them worked for me.
So I decided to create an image for this purpose.

While developing the image, I ran into serious trouble making it run with systemd (I’m a fan of systemd, but not inside containers yet).
The result is a semi-functional image. I say semi-functional because some Spacewalk features do not work (probably a systemd issue again).
The main problem is that the spacewalk-setup script uses systemd to configure and start the database and the other needed services; that’s fine in a VM but not in a container.
So I needed to hack into the postgres setup and start the services with the typical command --config-file file.conf, executed from supervisord as the Docker entrypoint.
Currently there is an issue with osa-dispatcher for which I can’t find a fix.

This image is primarily created to test the Spacewalk interface and get more comfortable with it, i.e. for testing/development purposes, or just to have fun hacking with Docker containers.

Now I’m going to give a short description of what the Dockerfile does and then start the container.
Have fun.

I used CentOS as the base image for this PoC

FROM centos:7 

Typical Maintainer line

MAINTAINER Eduardo Gonzalez Gutierrez 

Add the JPackage repo, which provides Java packages for Linux

COPY jpackage-generic.repo /etc/yum.repos.d/jpackage-generic.repo

Install the EPEL and Spacewalk repositories; after installing, clean all cached data to minimize the image size

RUN yum install -y http://yum.spacewalkproject.org/2.5/RHEL/7/x86_64/spacewalk-repo-2.5-3.el7.noarch.rpm \
        epel-release && \
        yum clean all

Import the GPG keys to allow installation from these repositories

RUN rpm --import http://www.jpackage.org/jpackage.asc && \
    rpm --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7 && \
    rpm --import http://yum.spacewalkproject.org/RPM-GPG-KEY-spacewalk-2015 && \
    yum clean all

Install spacewalk and supervisord packages

RUN yum -y install \
        spacewalk-setup-postgresql \
        spacewalk-postgresql \
        supervisor && \
        yum clean all

Copy the answer file used to configure Spacewalk in a later step

COPY answerfile.txt /tmp/answerfile.txt

Expose the necessary service ports

EXPOSE 80 443 5222 68 69

Change to postgres user

USER postgres

Initialize the database

RUN /usr/bin/pg_ctl initdb  -D /var/lib/pgsql/data/

Create the Spacewalk database and user, grant the superuser role, and create the pltclu language

RUN /usr/bin/pg_ctl start -D /var/lib/pgsql/data/  -w -t 300 && \
     psql -c 'CREATE DATABASE spaceschema' && \
     psql -c "CREATE USER spaceuser WITH PASSWORD 'spacepw'" && \
     psql -c 'ALTER ROLE spaceuser SUPERUSER' && \
     createlang pltclu spaceschema

Change to root user

USER root

Start the database and execute the spacewalk-setup configuration script. The trailing `; exit 0` lets the build continue even though spacewalk-setup cannot fully complete inside a container.

RUN su -c "/usr/bin/pg_ctl start -D /var/lib/pgsql/data/  -w -t 300" postgres && \
    su -c "spacewalk-setup --answer-file=/tmp/answerfile.txt --skip-db-diskspace-check --skip-db-install" root ; exit 0

Copy supervisord configuration

ADD supervisord.conf /etc/supervisord.d/supervisord.conf
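
For reference, a minimal supervisord.conf for this kind of setup might look like the following; the program commands and paths are assumptions based on the services described above, the real file is in the GitHub repository:

[supervisord]
nodaemon=true

[program:postgresql]
command=/usr/bin/postgres -D /var/lib/pgsql/data
user=postgres

[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND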

Use the supervisord command to start all services at container launch time

ENTRYPOINT supervisord -c /etc/supervisord.d/supervisord.conf

You can check or download the source code on GitHub: https://github.com/egonzalez90/docker-spacewalk

I uploaded the image to Docker Hub, where it is auto-built from my GitHub repository; you can find it with the following command.

[egonzalez@localhost ~]$ docker search spacewalk
INDEX       NAME                                       DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
docker.io   docker.io/ruo91/spacewalk                  Spacewalk is an open source Linux systems ...   3                    [OK]
docker.io   docker.io/jamesnetherton/spacewalk         Spacewalk running under Docker                  1                    
docker.io   docker.io/coffmant/spacewalk-docker        Spacewalk                                       0                    [OK]
docker.io   docker.io/csabakollar/spacewalk            Spacewalk 2.4 in a CentOS6 container            0                    
docker.io   docker.io/egonzalez90/spacewalk            Spacewalk docker image                          0                    [OK]
docker.io   docker.io/jdostal/spacewalk-clients        Repository containing spacewalk-clients         0                    
docker.io   docker.io/jhutar/spacewalk-client                                                          0                    
docker.io   docker.io/norus/spacewalk-reposync                                                         0                    
docker.io   docker.io/pajinek/spacewalk-client                                                         0                    [OK]
docker.io   docker.io/perfectweb/spacewalk             spacewalk                                       0                    [OK]
docker.io   docker.io/researchiteng/docker-spacewalk   spacewalk is the open source version of Re...   0                    [OK]
docker.io   docker.io/varhoo/spacewalk-proxy                                                           0                    [OK]

To start the container, use the following command. If you don’t have the image locally, it will be downloaded from Docker Hub.

[egonzalez@localhost ~]$ docker run -d --privileged=True egonzalez90/spacewalk
Unable to find image 'egonzalez90/spacewalk:latest' locally
Trying to pull repository docker.io/egonzalez90/spacewalk ... 
latest: Pulling from docker.io/egonzalez90/spacewalk
a3ed95caeb02: Already exists 
da71393503ec: Already exists 
519093688e2c: Pull complete 
97bbffaa9fc9: Pull complete 
63bfb115f62d: Pull complete 
929bbb68aff9: Pull complete 
532bc4af8e1a: Pull complete 
3eb667dda9ee: Pull complete 
275894897aa4: Pull complete 
93bcddf9cedb: Pull complete 
266c3b70754f: Pull complete 
Digest: sha256:a4dd98548f9dbb405fb4c6bb4a2a07b83d5f2bf730f29f71913b72876b1a61ab
Status: Downloaded newer image for docker.io/egonzalez90/spacewalk:latest
ded4a8b7eb1ee61fecc8ddc2eb1b092917a361bc36f7f752b32d76e79501d70a

Now that the container is running, check that all the ports are properly exposed

[egonzalez@localhost ~]$ docker ps --latest --format 'table {{.ID}}\t{{.Image}}\t{{.Ports}}'
CONTAINER ID        IMAGE                   PORTS
ded4a8b7eb1e        egonzalez90/spacewalk   68-69/tcp, 80/tcp, 443/tcp, 5222/tcp

Get the container IP address in order to access it from a web browser

[egonzalez@localhost ~]$ docker inspect ded4a8b7eb1e | egrep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.3",
                    "IPAddress": "172.17.0.3",

Open a browser and go to the container IP address; if you use HTTP, by default it will redirect you to HTTPS.
The container uses a self-signed SSL certificate, so you have to add an exception in your browser to allow connections to Spacewalk.
Once on the welcome page, create an organization.
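
If your browser does not run on the Docker host (so the 172.17.x.x address is not reachable), you could instead publish the ports on the host when starting the container, for example:

[egonzalez@localhost ~]$ docker run -d --privileged=True -p 80:80 -p 443:443 egonzalez90/spacewalk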

Now you are in Spacewalk and can play/test some features.

There is an issue I was not able to fix, so osa-dispatcher and some other features will not work with this image.
If someone can give me a pointer to fix the issue, it would be appreciated.

[egonzalez@localhost ~]$ docker logs ded4a8b7eb1e | egrep FATAL
2016-07-12 18:13:32,220 INFO gave up: osa-dispatcher entered FATAL state, too many start retries too quickly

Thanks for your time; I hope this image at least helps you learn and play with the interface.

Regards, Eduardo Gonzalez

Rally OpenStack benchmarking from Docker containers

OpenStack Rally is a project under the Big Tent umbrella with the mission of validating OpenStack environments, ensuring SLAs under high load or failover scenarios, and verifying cloud services. Rally can also be used for continuous integration and delivery tasks.

Why use Rally inside a Docker container? Rally is not a service that runs continuously in most environments; it is a tool used when infrastructure changes are made or when an SLA review must be done, so it does not make sense to have a service consuming infrastructure resources or blocking a server for use only in specific situations. Also, if your OpenStack infrastructure is automated, a container gives you nice integration with CI/CD tools like Jenkins.



Main reasons to use Rally inside Docker containers:

  • Quick tests/deployments of Rally tasks
  • Automated testing
  • Cost savings
  • Operators can execute tasks with their own computers, freeing infrastructure resources
  • Re-utilization of resources


Here are my suggestions on how to use Rally inside Docker (a wrapper putting these steps together is sketched after this list):

  • Create a new container (automated or not by another tool)
  • Always use an external volume to store Rally report data
  • Execute Rally tasks
  • Export the reports to the volume shared with the Docker host
  • Kill the container
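
Putting those suggestions together, a disposable wrapper could look roughly like this sketch; the image name and paths follow this post, and deploy.json/execution.json are assumed to already exist in /opt/rally-data on the host:

#!/bin/bash
# Run a Rally task in a throwaway container; only the HTML report survives on the host volume.
docker run --rm -v /opt/rally-data:/rally-data:Z egonzalez90/rally-mitaka bash -c \
    "rally deployment create --file=/rally-data/deploy.json --name=ci && \
     rally task start /rally-data/execution.json && \
     rally task report --out /rally-data/output.html"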


Let’s start with this quick guide:



Clone the repo I created with the Dockerfile

[egonzalez@localhost ~]$ git clone https://github.com/egonzalez90/docker-rally.git

Move to the docker-rally directory

[egonzalez@localhost ~]$ cd docker-rally/

Build the Docker image

[egonzalez@localhost docker-rally]$ docker build -t egonzalez90/rally-mitaka .
Sending build context to Docker daemon  76.8 kB
Step 1 : FROM centos:7
 ---> 904d6c400333
Step 2 : MAINTAINER Eduardo Gonzalez Gutierrez <dabarren@gmail.com>
 ---> Using cache
 ---> ee93bc7747e1
Step 3 : RUN yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-3.noarch.rpm
 ---> Using cache
 ---> 8492ab9ee261
Step 4 : RUN yum update -y
 ---> Using cache
 ---> 1374340eb39a
Step 5 : RUN yum -y install         openstack-rally         gcc         libffi-devel         python-devel         openssl-devel         gmp-devel         libxml2-devel         libxslt-devel         postgresql-devel         redhat-rpm-config         wget         openstack-selinux         openstack-utils &&         yum clean all
 ---> Using cache
 ---> 9b65e4a281be
Step 6 : RUN rally-manage --config-file /etc/rally/rally.conf db recreate
 ---> Using cache
 ---> dc4f3dbc1505
Successfully built dc4f3dbc1505

Start the Rally container with a pseudo-TTY and a volume to store Rally execution data

[egonzalez@localhost docker-rally]$ docker run -ti -v /opt/rally-data/:/rally-data:Z egonzalez90/rally-mitaka
[root@07766ba700e8 /]# 

Create a file called deploy.json with the admin info of your OpenStack environment

[root@07766ba700e8 /]# vi deploy.json

{
    "type": "ExistingCloud",
    "auth_url": "http://controller:5000/v2.0",
    "region_name": "RegionOne",
    "admin": {
        "username": "admin",
        "password": "my_password",
        "tenant_name": "admin"
    }
}
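
Note this file targets the Keystone v2.0 API. If your cloud only exposes v3, an equivalent ExistingCloud deployment file would look roughly like this (the Default domain names are an assumption):

{
    "type": "ExistingCloud",
    "auth_url": "http://controller:5000/v3",
    "region_name": "RegionOne",
    "admin": {
        "username": "admin",
        "password": "my_password",
        "project_name": "admin",
        "user_domain_name": "Default",
        "project_domain_name": "Default"
    }
}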

Create a deployment with the JSON file we previously created

[root@07766ba700e8 /]# rally deployment create --file=deploy.json --name=existing
2016-06-15 09:42:25.428 25 INFO rally.deployment.engine [-] Deployment a5162111-02a5-458f-bb59-f822cab1aa93 | Starting:  OpenStack cloud deployment.
2016-06-15 09:42:25.478 25 INFO rally.deployment.engine [-] Deployment a5162111-02a5-458f-bb59-f822cab1aa93 | Completed: OpenStack cloud deployment.
+--------------------------------------+----------------------------+----------+------------------+--------+
| uuid                                 | created_at                 | name     | status           | active |
+--------------------------------------+----------------------------+----------+------------------+--------+
| a5162111-02a5-458f-bb59-f822cab1aa93 | 2016-06-15 09:42:25.391691 | existing | deploy->finished |        |
+--------------------------------------+----------------------------+----------+------------------+--------+
Using deployment: a5162111-02a5-458f-bb59-f822cab1aa93
~/.rally/openrc was updated

HINTS:
* To get your cloud resources, run:
        rally show [flavors|images|keypairs|networks|secgroups]

* To use standard OpenStack clients, set up your env by running:
        source ~/.rally/openrc
  OpenStack clients are now configured, e.g run:
        glance image-list

Source the openrc file Rally has created with your user info and test whether you can connect with Glance

[root@07766ba700e8 /]# source ~/.rally/openrc

[root@07766ba700e8 /]# glance  image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 1c4fc8a6-3ea7-433c-8ece-a14bbaf861e2 | cirros |
+--------------------------------------+--------+

Check deployment status

[root@07766ba700e8 /]# rally deployment check
keystone endpoints are valid and following services are available:
+-------------+----------------+-----------+
| services    | type           | status    |
+-------------+----------------+-----------+
| __unknown__ | volumev2       | Available |
| ceilometer  | metering       | Available |
| cinder      | volume         | Available |
| cloud       | cloudformation | Available |
| glance      | image          | Available |
| heat        | orchestration  | Available |
| keystone    | identity       | Available |
| neutron     | network        | Available |
| nova        | compute        | Available |
+-------------+----------------+-----------+
NOTE: '__unknown__' service name means that Keystone service catalog doesn't return name for this service and Rally can not identify service by its type. BUT you still can use such services with api_versions context, specifying type of service (execute `rally plugin show api_versions` for more details).

Create a task execution file; this test will check whether Nova can boot and then delete some instances

[root@07766ba700e8 /]# vi execution.json

{
  "NovaServers.boot_and_delete_server": [
    {
      "runner": {
        "type": "constant", 
        "concurrency": 2, 
        "times": 10
      }, 
      "args": {
        "force_delete": false, 
        "flavor": {
          "name": "m1.tiny"
        }, 
        "image": {
          "name": "cirros"
        }
      }, 
      "context": {
        "users": {
          "project_domain": "default", 
          "users_per_tenant": 2, 
          "tenants": 3, 
          "resource_management_workers": 30, 
          "user_domain": "default"
        }
      }
    }
  ]
}
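
The constant runner above keeps 2 iterations in flight until 10 have completed. Rally ships other runner types as well; for example, to issue a fixed number of iterations per second instead, the runner section would become roughly the following (an rps runner, values illustrative):

"runner": {
  "type": "rps",
  "rps": 2,
  "times": 10
}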

Run the task with the following command

[root@07766ba700e8 /]# rally task start execution.json
--------------------------------------------------------------------------------
 Preparing input task
--------------------------------------------------------------------------------

Input task is:
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "m1.tiny"
                },
                "image": {
                    "name": "cirros"
                },
                "force_delete": false
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}

Task syntax is correct :)
2016-06-15 09:48:11.556 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation.
2016-06-15 09:48:11.579 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of scenarios names.
2016-06-15 09:48:11.581 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of scenarios names.
2016-06-15 09:48:11.581 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of syntax.
2016-06-15 09:48:11.587 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of syntax.
2016-06-15 09:48:11.588 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of semantic.
2016-06-15 09:48:11.588 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation check cloud.
2016-06-15 09:48:11.694 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation check cloud.
2016-06-15 09:48:11.700 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Enter context: `users`
2016-06-15 09:48:12.004 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Enter context: `users`
2016-06-15 09:48:12.106 101 WARNING rally.task.types [-] FlavorResourceType is deprecated in Rally v0.3.2; use the equivalent resource plugin name instead
2016-06-15 09:48:12.207 101 WARNING rally.task.types [-] ImageResourceType is deprecated in Rally v0.3.2; use the equivalent resource plugin name instead
2016-06-15 09:48:12.395 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Exit context: `users`
2016-06-15 09:48:13.546 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Exit context: `users`
2016-06-15 09:48:13.546 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of semantic.
2016-06-15 09:48:13.547 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation.
Task config is valid :)
--------------------------------------------------------------------------------
 Task  137eb997-d1f8-4d3f-918a-8aec3db7500f: started
--------------------------------------------------------------------------------

Benchmarking... This can take a while...

To track task status use:

        rally task status
        or
        rally task detailed

Using task: 137eb997-d1f8-4d3f-918a-8aec3db7500f
2016-06-15 09:48:13.555 101 INFO rally.api [-] Benchmark Task 137eb997-d1f8-4d3f-918a-8aec3db7500f on Deployment a5162111-02a5-458f-bb59-f822cab1aa93
2016-06-15 09:48:13.558 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Benchmarking.
2016-06-15 09:48:13.586 101 INFO rally.task.engine [-] Running benchmark with key:
{
  "kw": {
    "runner": {
      "type": "constant",
      "concurrency": 2,
      "times": 10
    },
    "args": {
      "force_delete": false,
      "flavor": {
        "name": "m1.tiny"
      },
      "image": {
        "name": "cirros"
      }
    },
    "context": {
      "users": {
        "users_per_tenant": 2,
        "tenants": 3
      }
    }
  },
  "name": "NovaServers.boot_and_delete_server",
  "pos": 0
}
2016-06-15 09:48:13.592 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Enter context: `users`
2016-06-15 09:48:14.994 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Enter context: `users`
2016-06-15 09:48:15.244 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 0 START
2016-06-15 09:48:15.245 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 1 START
2016-06-15 09:48:16.975 292 WARNING rally.common.logging [-] 'wait_for' is deprecated in Rally v0.1.2: Use wait_for_status instead.
2016-06-15 09:48:17.095 293 WARNING rally.common.logging [-] 'wait_for' is deprecated in Rally v0.1.2: Use wait_for_status instead.
2016-06-15 09:49:21.024 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 0 END: OK
2016-06-15 09:49:21.028 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 2 START
2016-06-15 09:49:32.109 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 1 END: OK
2016-06-15 09:49:32.112 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 3 START
2016-06-15 09:49:41.504 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 2 END: OK
2016-06-15 09:49:41.508 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 4 START
2016-06-15 09:49:52.455 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 3 END: OK
2016-06-15 09:49:52.462 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 5 START
2016-06-15 09:50:01.907 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 4 END: OK
2016-06-15 09:50:01.918 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 6 START
2016-06-15 09:50:12.692 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 5 END: OK
2016-06-15 09:50:12.694 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 7 START
2016-06-15 09:50:23.122 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 6 END: OK
2016-06-15 09:50:23.131 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 8 START
2016-06-15 09:50:33.322 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 7 END: OK
2016-06-15 09:50:33.332 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 9 START
2016-06-15 09:50:43.285 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 8 END: OK
2016-06-15 09:50:53.422 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 9 END: OK
2016-06-15 09:50:53.436 101 INFO rally.plugins.openstack.context.cleanup.user [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  user resources cleanup
2016-06-15 09:50:55.244 101 INFO rally.plugins.openstack.context.cleanup.user [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: user resources cleanup
2016-06-15 09:50:55.245 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Exit context: `users`
2016-06-15 09:50:57.438 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Exit context: `users`
2016-06-15 09:50:58.023 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Benchmarking.

--------------------------------------------------------------------------------
Task 137eb997-d1f8-4d3f-918a-8aec3db7500f: finished
--------------------------------------------------------------------------------

test scenario NovaServers.boot_and_delete_server
args position 0
args values:
{
  "runner": {
    "type": "constant",
    "concurrency": 2,
    "times": 10
  },
  "args": {
    "force_delete": false,
    "flavor": {
      "name": "m1.tiny"
    },
    "image": {
      "name": "cirros"
    }
  },
  "context": {
    "users": {
      "project_domain": "default",
      "users_per_tenant": 2,
      "tenants": 3,
      "resource_management_workers": 30,
      "user_domain": "default"
    }
  }
}

+-----------------------------------------------------------------------------------------------------------------------+
|                                                 Response Times (sec)                                                  |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
| Action             | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
| nova.boot_server   | 17.84     | 18.158       | 64.433       | 69.419       | 74.405    | 28.299    | 100.0%  | 10    |
| nova.delete_server | 2.24      | 2.275        | 2.454        | 2.456        | 2.458     | 2.317     | 100.0%  | 10    |
| total              | 20.09     | 20.437       | 66.888       | 71.875       | 76.863    | 30.616    | 100.0%  | 10    |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+

Load duration: 158.199862003
Full duration: 163.846753836

HINTS:
* To plot HTML graphics with this data, run:
        rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --out output.html

* To generate a JUnit report, run:
        rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --junit --out output.xml

* To get raw JSON output of task results, run:
        rally task results 137eb997-d1f8-4d3f-918a-8aec3db7500f

After a while, you will receive an execution summary, which you can export to a nicely styled report file with the following command.
Use the volume we shared with the Docker host to save the report files.

[root@07766ba700e8 /]# rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --html-static --out /rally-data/output.html

Open the output file from a web browser and review the report.
Regards
