Spacewalk (Red Hat Satellite v5) in a Docker container PoC

Spacewalk is the upstream Linux systems management project on which Red Hat Satellite was based, up to Satellite version 5. Newer versions are no longer based on Spacewalk; instead, Satellite is a federation of several upstream open source projects, including Katello, Foreman, Pulp, and Candlepin.

Some weeks ago, a friend asked me if I knew of a Docker container image for Satellite.
I could not find any. What I found were some Spacewalk images, but sadly none of them worked for me.
So I decided to create an image for this purpose.

While developing the image, I ran into serious trouble making it run with systemd (I’m a fan of systemd, but not inside containers yet).
The result was a semi-functional image. I say semi-functional because some Spacewalk features do not work (probably a systemd issue again).
The main problem was that the spacewalk-setup script uses systemd to start and configure the database and the other required services, which is fine in a VM but not in a container.
So I had to hack the PostgreSQL setup and start each service directly with the typical --config-file file.conf invocation, executed from supervisord as the Docker entrypoint.
Currently there is an issue with osa-dispatcher for which I can’t find a fix.

This image is primarily meant for testing the Spacewalk interface and getting comfortable with it (i.e. testing/development purposes), or just for having fun hacking with Docker containers.

Now, I’m going to give a short description of what the Dockerfile does and then start the container.
Have fun.

I used CentOS as the base image for this PoC

FROM centos:7 

Typical Maintainer line

MAINTAINER Eduardo Gonzalez Gutierrez 

Add the jpackage repo, which provides Java packages for Linux

COPY jpackage-generic.repo /etc/yum.repos.d/jpackage-generic.repo

Install the EPEL and Spacewalk repositories; after the install, clean all cached data to minimize the image size

RUN yum install -y http://yum.spacewalkproject.org/2.5/RHEL/7/x86_64/spacewalk-repo-2.5-3.el7.noarch.rpm \
        epel-release && \
        yum clean all

Import the GPG keys to allow installation from these repositories

RUN rpm --import http://www.jpackage.org/jpackage.asc && \
    rpm --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7 && \
    rpm --import http://yum.spacewalkproject.org/RPM-GPG-KEY-spacewalk-2015 && \
    yum clean all

Install the Spacewalk and supervisor packages

RUN yum -y install \
        spacewalk-setup-postgresql \
        spacewalk-postgresql \
        supervisor && \
        yum clean all

Copy the example answer file used by spacewalk-setup in a later step

COPY answerfile.txt /tmp/answerfile.txt
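
The answer file is what drives spacewalk-setup non-interactively. A minimal sketch of what it might contain, matching the database created later in this Dockerfile (the values are illustrative; the real file is in the GitHub repo linked at the end):

admin-email = root@localhost
db-backend = postgresql
db-host = localhost
db-port = 5432
db-name = spaceschema
db-user = spaceuser
db-password = spacepw
enable-tftp = Y
ssl-config-sslvhost = Y
ssl-password = spacewalk
ssl-set-org = Spacewalk
ssl-set-city = Madrid
ssl-set-state = Madrid
ssl-set-country = ES
ssl-set-email = root@localhost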

Expose the necessary ports (80/443 for the web UI, 5222 for osad's XMPP push, 68/69 for DHCP/TFTP provisioning)

EXPOSE 80 443 5222 68 69

Change to postgres user

USER postgres

Initialize the database

RUN /usr/bin/pg_ctl initdb  -D /var/lib/pgsql/data/

Create the Spacewalk database and user, grant the superuser role, and create the pltclu language

RUN /usr/bin/pg_ctl start -D /var/lib/pgsql/data/  -w -t 300 && \
     psql -c 'CREATE DATABASE spaceschema' && \
     psql -c "CREATE USER spaceuser WITH PASSWORD 'spacepw'" && \
     psql -c 'ALTER ROLE spaceuser SUPERUSER' && \
     createlang pltclu spaceschema

Change to root user

USER root

Start the database and execute the Spacewalk configuration script (the trailing exit 0 keeps the build going even though spacewalk-setup does not complete cleanly inside the container)

RUN su -c "/usr/bin/pg_ctl start -D /var/lib/pgsql/data/  -w -t 300" postgres && \
    su -c "spacewalk-setup --answer-file=/tmp/answerfile.txt --skip-db-diskspace-check --skip-db-install" root ; exit 0

Copy supervisord configuration

ADD supervisord.conf /etc/supervisord.d/supervisord.conf

Use supervisord to start all services at container launch time

ENTRYPOINT supervisord -c /etc/supervisord.d/supervisord.conf
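
The supervisord.conf is what replaces systemd here: every service Spacewalk needs must run in the foreground under supervisord. A minimal sketch of such a file (the program list and paths are illustrative; the real file is in the GitHub repo below):

[supervisord]
nodaemon=true

; PostgreSQL in the foreground, using the data directory initialized during the build
[program:postgresql]
command=/usr/bin/postgres -D /var/lib/pgsql/data
user=postgres

; Apache must not daemonize or supervisord will lose track of it
[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND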

You can review or download the source code on GitHub: https://github.com/egonzalez90/docker-spacewalk

I uploaded the image to Docker Hub, where it is auto-built from my GitHub repository. You can find it with the following command.

[egonzalez@localhost ~]$ docker search spacewalk
INDEX       NAME                                       DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
docker.io   docker.io/ruo91/spacewalk                  Spacewalk is an open source Linux systems ...   3                    [OK]
docker.io   docker.io/jamesnetherton/spacewalk         Spacewalk running under Docker                  1                    
docker.io   docker.io/coffmant/spacewalk-docker        Spacewalk                                       0                    [OK]
docker.io   docker.io/csabakollar/spacewalk            Spacewalk 2.4 in a CentOS6 container            0                    
docker.io   docker.io/egonzalez90/spacewalk            Spacewalk docker image                          0                    [OK]
docker.io   docker.io/jdostal/spacewalk-clients        Repository containing spacewalk-clients         0                    
docker.io   docker.io/jhutar/spacewalk-client                                                          0                    
docker.io   docker.io/norus/spacewalk-reposync                                                         0                    
docker.io   docker.io/pajinek/spacewalk-client                                                         0                    [OK]
docker.io   docker.io/perfectweb/spacewalk             spacewalk                                       0                    [OK]
docker.io   docker.io/researchiteng/docker-spacewalk   spacewalk is the open source version of Re...   0                    [OK]
docker.io   docker.io/varhoo/spacewalk-proxy                                                           0                    [OK]

To start the container, use the following command. If you don’t have the image locally, it will be downloaded from Docker Hub

[egonzalez@localhost ~]$ docker run -d --privileged=True egonzalez90/spacewalk
Unable to find image 'egonzalez90/spacewalk:latest' locally
Trying to pull repository docker.io/egonzalez90/spacewalk ... 
latest: Pulling from docker.io/egonzalez90/spacewalk
a3ed95caeb02: Already exists 
da71393503ec: Already exists 
519093688e2c: Pull complete 
97bbffaa9fc9: Pull complete 
63bfb115f62d: Pull complete 
929bbb68aff9: Pull complete 
532bc4af8e1a: Pull complete 
3eb667dda9ee: Pull complete 
275894897aa4: Pull complete 
93bcddf9cedb: Pull complete 
266c3b70754f: Pull complete 
Digest: sha256:a4dd98548f9dbb405fb4c6bb4a2a07b83d5f2bf730f29f71913b72876b1a61ab
Status: Downloaded newer image for docker.io/egonzalez90/spacewalk:latest
ded4a8b7eb1ee61fecc8ddc2eb1b092917a361bc36f7f752b32d76e79501d70a

Now you have the container running, check if all the ports are properly exposed

[egonzalez@localhost ~]$ docker ps --latest --format 'table {{.ID}}\t{{.Image}}\t{{.Ports}}'
CONTAINER ID        IMAGE                   PORTS
ded4a8b7eb1e        egonzalez90/spacewalk   68-69/tcp, 80/tcp, 443/tcp, 5222/tcp
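
If you want to reach Spacewalk from outside the Docker host, you can also publish the ports on the host (the host-side ports are an example; pick free ones):

[egonzalez@localhost ~]$ docker run -d --privileged=true -p 80:80 -p 443:443 -p 5222:5222 egonzalez90/spacewalk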

Get the container IP address in order to access it from a web browser

[egonzalez@localhost ~]$ docker inspect ded4a8b7eb1e | egrep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.3",
                    "IPAddress": "172.17.0.3",

Open a browser and go to the container IP address; if you use HTTP, it will redirect you to HTTPS by default.
The container uses a self-signed SSL certificate, so you have to add an exception in your browser to allow connections to Spacewalk.
Once on the Welcome page, create an Organization.

Now you are in Spacewalk and can play/test some features.

There is an issue I was not able to fix, so osa-dispatcher and some related features will not work with this image.
If someone can give me input on how to fix it, it will be appreciated.

[egonzalez@localhost ~]$ docker logs ded4a8b7eb1e | egrep FATAL
2016-07-12 18:13:32,220 INFO gave up: osa-dispatcher entered FATAL state, too many start retries too quickly

Thanks for your time, and I hope this image at least helps you learn and play with the interface.

Regards, Eduardo Gonzalez

MidoNet Integration with OpenStack Mitaka

MidoNet is an Open Source network virtualization software for IaaS infrastructure.
It decouples your IaaS cloud from your network hardware, creating an intelligent software abstraction layer between your end hosts and your physical network.
This network abstraction layer allows the cloud operator to move what has traditionally been hardware-based network appliances into a software-based multi-tenant virtual domain.

This definition from MidoNet documentation explains what MidoNet is and what MidoNet does.

In this post I will cover my experience integrating MidoNet with OpenStack.
I used the following configuration:

All servers have CentOS 7.2 installed
OpenStack has been previously installed from RDO packages with multinode Packstack

  • x3 NSDB nodes (Cassandra and Zookeeper services)
  • x2 Gateway Nodes (Midolman Agent)
  • x1 OpenStack Controller (MidoNet Cluster)
  • x1 OpenStack compute node (Midolman Agent)


NSDB NODES

Disable SElinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Install OpenStack Mitaka release repository

sudo yum install -y centos-release-openstack-mitaka

Add Cassandra repository

cat <<EOF>/etc/yum.repos.d/datastax.repo
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/community
enabled = 1
gpgcheck = 1
gpgkey = https://rpm.datastax.com/rpm/repo_key
EOF

Add Midonet repository

cat <<EOF>/etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-openstack-integration]
name=MidoNet OpenStack Integration
baseurl=http://builds.midonet.org/openstack-mitaka/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
EOF

Clean repo cache and update packages

yum clean all
yum update

Zookeeper Configuration
Install Zookeeper, java and dependencies

yum install -y java-1.7.0-openjdk-headless zookeeper zkdump nmap-ncat

Edit zookeeper configuration file

vi /etc/zookeeper/zoo.cfg

Add all NSDB nodes to the configuration file

server.1=nsdb1:2888:3888
server.2=nsdb2:2888:3888
server.3=nsdb3:2888:3888
autopurge.snapRetainCount=10
autopurge.purgeInterval=12

Create the folder in which Zookeeper will store its data and change its owner to the zookeeper user

mkdir /var/lib/zookeeper/data
chown zookeeper:zookeeper /var/lib/zookeeper/data

Create a myid file in the Zookeeper data folder; the ID must match the NSDB node number. Insert the number as follows:

#NSDB1
echo 1 > /var/lib/zookeeper/data/myid
#NSDB2
echo 2 > /var/lib/zookeeper/data/myid
#NSDB3
echo 3 > /var/lib/zookeeper/data/myid

Create the java folder and add a symlink to the JRE binary

mkdir -p /usr/java/default/bin/
ln -s /usr/lib/jvm/jre-1.7.0-openjdk/bin/java /usr/java/default/bin/java

Start and enable Zookeeper service

systemctl enable zookeeper.service
systemctl start zookeeper.service

Test if zookeeper is working locally

echo ruok | nc 127.0.0.1 2181
imok

Test if Zookeeper is working on the remote NSDB nodes

echo stat | nc nsdb3 2181

Zookeeper version: 3.4.5--1, built on 02/08/2013 12:25 GMT
Clients:
 /192.168.100.172:35306[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000000
Mode: follower
Node count: 4

Cassandra configuration
Install Java and Cassandra dependencies

yum install -y java-1.8.0-openjdk-headless dsc22

Edit cassandra yaml file

vi /etc/cassandra/conf/cassandra.yaml

Change cluster_name to midonet
Configure seed_provider seeds to match all NSDB nodes
Configure listen_address and rpc_address to match the hostname of the node itself

cluster_name: 'midonet'
....
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "nsdb1,nsdb2,nsdb3"

listen_address: nsdb1
rpc_address: nsdb1
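
If you prefer to script these edits instead of opening an editor on each node, something like the following could work on nsdb1 (a sketch; double-check the resulting YAML, since the seeds entry is nested under seed_provider):

sed -i "s/^cluster_name:.*/cluster_name: 'midonet'/" /etc/cassandra/conf/cassandra.yaml
sed -i 's/- seeds:.*/- seeds: "nsdb1,nsdb2,nsdb3"/' /etc/cassandra/conf/cassandra.yaml
sed -i 's/^listen_address:.*/listen_address: nsdb1/' /etc/cassandra/conf/cassandra.yaml
sed -i 's/^rpc_address:.*/rpc_address: nsdb1/' /etc/cassandra/conf/cassandra.yaml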

Edit Cassandra’s init script in order to fix a bug in it

vi /etc/init.d/cassandra

Add the two lines marked below after # Cassandra startup

case "$1" in
    start)
        # Cassandra startup
        echo -n "Starting Cassandra: "
        mkdir -p /var/run/cassandra #Add this line
        chown cassandra:cassandra /var/run/cassandra #Add this line
        su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
        retval=$?
        [ $retval -eq 0 ] && touch $lock_file
        echo "OK"
        ;;
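
As an alternative to patching the init script, the same fix can be expressed as a systemd tmpfiles snippet, so /var/run/cassandra is recreated with the right owner on every boot:

cat <<EOF>/etc/tmpfiles.d/cassandra.conf
d /var/run/cassandra 0755 cassandra cassandra -
EOF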

Start and enable Cassandra service

systemctl enable cassandra.service
systemctl start cassandra.service

Check that all NSDB nodes have joined the cluster

nodetool --host 127.0.0.1 status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address          Load       Tokens       Owns (effective)  Host ID                               Rack
UN  192.168.100.172  89.1 KB    256          70.8%             3f1ecedd-8caf-4938-84ad-8614d2134557  rack1
UN  192.168.100.224  67.64 KB   256          60.7%             cb36c999-a6e1-4d98-a4dd-d4230b41df08  rack1
UN  192.168.100.195  25.78 KB   256          68.6%             4758bae8-9300-4e57-9a61-5b1107082964  rack1

Configure OpenStack Controller Nodes (on which the Neutron server is running)

Disable SElinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Install OpenStack Mitaka release repository

sudo yum install -y centos-release-openstack-mitaka

Add Midonet Repository

cat <<EOF>/etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-openstack-integration]
name=MidoNet OpenStack Integration
baseurl=http://builds.midonet.org/openstack-mitaka/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
EOF

Clean repos cache and update the system

yum clean all
yum update

Create an OpenStack user for MidoNet; change the password to your own

# openstack user create --password temporal midonet
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | ac25c5a77e7c4e4598ccadea89e09969 |
| name     | midonet                          |
| username | midonet                          |
+----------+----------------------------------+

Grant the admin role on the services tenant to the midonet user

# openstack role add --project services --user midonet admin
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | bca2c6e1f3da42b0ba82aee401398a8a |
| name      | admin                            |
+-----------+----------------------------------+

Create the MidoNet service in Keystone

# openstack service create --name midonet --description "MidoNet API Service" midonet
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | MidoNet API Service              |
| enabled     | True                             |
| id          | 499059c4a3a040cfb632411408a2be4c |
| name        | midonet                          |
| type        | midonet                          |
+-------------+----------------------------------+

Clean up neutron server
Stop neutron services

openstack-service stop neutron

Drop the Neutron database and recreate it

mysql -u root -p
MariaDB [(none)]> DROP DATABASE neutron;
Query OK, 157 rows affected (11.50 sec)

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ab4f81b1040a495e';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ab4f81b1040a495e';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye

Remove plugin.ini symbolic link to ml2_conf.ini

#rm /etc/neutron/plugin.ini 
rm: remove symbolic link ‘/etc/neutron/plugin.ini’? y

Remove the br-tun tunnel bridge used by Neutron on all the nodes

ovs-vsctl del-br br-tun

Install MidoNet packages and remove ml2 package

yum install -y openstack-neutron python-networking-midonet python-neutronclient
yum remove openstack-neutron-ml2

Make a backup of neutron configuration file

cp /etc/neutron/neutron.conf neutron.conf.bak

Edit neutron configuration file

vi /etc/neutron/neutron.conf

Most of the options are already set by our previous Neutron configuration; change the ones that apply to match this configuration

[DEFAULT]
core_plugin = midonet.neutron.plugin_v2.MidonetPluginV2
service_plugins = midonet.neutron.services.l3.l3_midonet.MidonetL3ServicePlugin
dhcp_agent_notification = False
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
nova_url = http://controller:8774/v2

[database]
connection = mysql+pymysql://neutron:ab4f81b1040a495e@controller/neutron

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = guest

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
admin_user=neutron
admin_tenant_name=services
identity_uri=http://controller:35357
admin_password=d88f0bd060d64c33

[nova]
region_name = RegionOne
auth_url = http://controller:35357
auth_type = password
password = 9ca36d15e4824d93
project_domain_id = default
project_name = services
tenant_name = services
user_domain_id = default
username = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

In my deployment, these are the options I had to change to configure MidoNet

diff /etc/neutron/neutron.conf neutron.conf.bak 
33c33
< core_plugin = midonet.neutron.plugin_v2.MidonetPluginV2
---
> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
37c37
< service_plugins = midonet.neutron.services.l3.l3_midonet.MidonetL3ServicePlugin
---
> service_plugins =router
120c120
< dhcp_agent_notification = False
---
> #dhcp_agent_notification = true
1087c1087,1088
< lock_path = /var/lib/neutron/tmp
---
> lock_path = $state_path/lock
> 

Create midonet plugins folder

mkdir /etc/neutron/plugins/midonet

Create a file called midonet.ini

vi /etc/neutron/plugins/midonet/midonet.ini

Configure midonet.ini file to match your own configuration options

[MIDONET]
# MidoNet API URL
midonet_uri = http://controller:8181/midonet-api
# MidoNet administrative user in Keystone
username = midonet
password = temporal
# MidoNet administrative user's tenant
project_id = services

Create a symbolic link from midonet.ini to plugin.ini

ln -s /etc/neutron/plugins/midonet/midonet.ini /etc/neutron/plugin.ini

Sync and populate database tables with Midonet plugin

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/midonet/midonet.ini upgrade head" neutron
su -s /bin/sh -c "neutron-db-manage --subproject networking-midonet upgrade head" neutron

Restart nova api and neutron server services

systemctl restart openstack-nova-api.service
systemctl restart neutron-server

Install midonet cluster package

yum install -y midonet-cluster

Configure midonet.conf file

vi /etc/midonet/midonet.conf

Add all NSDB nodes to zookeeper_hosts

[zookeeper]
zookeeper_hosts = nsdb1:2181,nsdb2:2181,nsdb3:2181

Configure MidoNet to use the NSDB nodes as Zookeeper and Cassandra hosts

cat << EOF | mn-conf set -t default
zookeeper {
    zookeeper_hosts = "nsdb1:2181,nsdb2:2181,nsdb3:2181"
}

cassandra {
    servers = "nsdb1,nsdb2,nsdb3"
}
EOF

Set the Cassandra replication factor to 3

echo "cassandra.replication_factor : 3" | mn-conf set -t default

Grab your admin token

#egrep ^admin_token /etc/keystone/keystone.conf 
admin_token = 7b84d89b32c34b71a697eb1a270807ab

Configure MidoNet to authenticate against Keystone

cat << EOF | mn-conf set -t default
cluster.auth {
    provider_class = "org.midonet.cluster.auth.keystone.KeystoneService"
    admin_role = "admin"
    keystone.tenant_name = "admin"
    keystone.admin_token = "7b84d89b32c34b71a697eb1a270807ab"
    keystone.host = controller
    keystone.port = 35357
}
EOF

Start and enable midonet cluster service

systemctl enable midonet-cluster.service
systemctl start midonet-cluster.service

Install midonet CLI

yum install -y python-midonetclient

Create a file in your home directory with the MidoNet auth info

#vi ~/.midonetrc

[cli]
api_url = http://controller:8181/midonet-api
username = admin
password = temporal
project_id = admin
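
To verify the credentials, open the CLI and run any read-only command; the host list will stay empty until the Midolman agents are registered later in this guide:

# midonet-cli
midonet> list host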

Configure Compute nodes

Disable SElinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Install OpenStack Mitaka release repository

sudo yum install -y centos-release-openstack-mitaka

Add Midonet repository

cat <<EOF>/etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-openstack-integration]
name=MidoNet OpenStack Integration
baseurl=http://builds.midonet.org/openstack-mitaka/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
EOF

Clean repos cache and update the system

yum clean all
yum update

Edit qemu.conf

vi /etc/libvirt/qemu.conf

Configure it with the following options. By default all these options are commented out; you can uncomment them or paste the block anywhere in the file

user = "root"
group = "root"

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
    "/dev/net/tun"
]

Restart libvirtd service

systemctl restart libvirtd.service

Install nova-network package

yum install -y openstack-nova-network

Disable Nova Network service and restart Nova compute service

systemctl disable openstack-nova-network.service
systemctl restart openstack-nova-compute.service

Install Midolman agent and java packages

yum install -y java-1.8.0-openjdk-headless midolman

Configure midolman.conf

vi /etc/midolman/midolman.conf

Add all NSDB nodes as Zookeeper hosts

[zookeeper]
zookeeper_hosts = nsdb1:2181,nsdb2:2181,nsdb3:2181

Configure each compute node with an appropriate flavor from the /etc/midolman/ folder; they have different hardware resource settings, so use the one that best matches your compute host's capabilities

mn-conf template-set -h local -t agent-compute-medium
cp /etc/midolman/midolman-env.sh.compute.medium /etc/midolman/midolman-env.sh

Configure metadata. Issue the following commands only once; the configuration is automatically propagated to all MidoNet agents

echo "agent.openstack.metadata.nova_metadata_url : \"http://controller:8775\"" | mn-conf set -t default
echo "agent.openstack.metadata.shared_secret : 2bfeb930a90d435d" | mn-conf set -t default
echo "agent.openstack.metadata.enabled : true" | mn-conf set -t default

Allow metadata traffic in iptables

iptables -I INPUT 1 -i metadata -j ACCEPT
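
Note that this rule does not survive a reboot; one way to persist it on CentOS 7 is the iptables-services package (shown here as a suggestion):

yum install -y iptables-services
systemctl enable iptables
service iptables save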

Remove br-tun bridge

ovs-vsctl del-br br-tun

Start and enable midolman agent service

systemctl enable midolman.service
systemctl start midolman.service

Gateway nodes configuration

Disable SElinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Install OpenStack Mitaka release repository

sudo yum install -y centos-release-openstack-mitaka

Add Midonet repository

cat <<EOF>/etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-openstack-integration]
name=MidoNet OpenStack Integration
baseurl=http://builds.midonet.org/openstack-mitaka/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
EOF

Clean repos cache and update the system

yum clean all
yum update

Install Midolman agent and java packages

yum install -y java-1.8.0-openjdk-headless midolman

Configure midolman.conf

vi /etc/midolman/midolman.conf

Add all NSDB nodes as Zookeeper hosts

[zookeeper]
zookeeper_hosts = nsdb1:2181,nsdb2:2181,nsdb3:2181

Configure each gateway node with an appropriate flavor from the /etc/midolman/ folder; they have different hardware resource settings, so use the one that best matches your gateway host's capabilities

mn-conf template-set -h local -t agent-gateway-medium
cp /etc/midolman/midolman-env.sh.gateway.medium /etc/midolman/midolman-env.sh

Grab the metadata shared secret from nova.conf on any of your Nova nodes

# egrep ^metadata_proxy_shared_secret /etc/nova/nova.conf 
metadata_proxy_shared_secret =2bfeb930a90d435d

Allow metadata traffic in iptables

iptables -I INPUT 1 -i metadata -j ACCEPT

Start and enable midolman agent service

systemctl enable midolman.service
systemctl start midolman.service

Configure encapsulation and register nodes
Enter the MidoNet CLI from a controller node

midonet-cli

Create the tunnel zone with VXLAN encapsulation

midonet> tunnel-zone create name tz type vxlan
tzone0
midonet> list tunnel-zone
tzone tzone0 name tz type vxlan

List the hosts discovered by MidoNet; these should be all the nodes where you configured the MidoNet agent (Midolman)

midonet> list host
host host0 name gateway2 alive true addresses fe80:0:0:0:0:11ff:fe00:1102,169.254.123.1,fe80:0:0:0:0:11ff:fe00:1101,127.0.0.1,0:0:0:0:0:0:0:1,192.168.200.176,fe80:0:0:0:5054:ff:fef9:b2a0,169.254.169.254,fe80:0:0:0:7874:d6ff:fe5b:dea8,192.168.100.227,fe80:0:0:0:5054:ff:fed9:9cc0,fe80:0:0:0:5054:ff:fe4a:e39b,192.168.1.86 flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false
host host1 name gateway1 alive true addresses 169.254.169.254,fe80:0:0:0:3cd1:23ff:feac:a3c2,192.168.1.87,fe80:0:0:0:5054:ff:fea8:da91,127.0.0.1,0:0:0:0:0:0:0:1,fe80:0:0:0:5054:ff:feec:92c1,192.168.200.232,fe80:0:0:0:0:11ff:fe00:1102,169.254.123.1,fe80:0:0:0:0:11ff:fe00:1101,192.168.100.141,fe80:0:0:0:5054:ff:fe20:30fb flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false
host host2 name compute1 alive true addresses fe80:0:0:0:0:11ff:fe00:1101,169.254.123.1,127.0.0.1,0:0:0:0:0:0:0:1,fe80:0:0:0:0:11ff:fe00:1102,192.168.100.173,fe80:0:0:0:5054:ff:fe06:161,fe80:0:0:0:5054:ff:fee3:eb48,192.168.200.251,fe80:0:0:0:5054:ff:fe8d:d22,192.168.1.93,169.254.169.254,fe80:0:0:0:48cb:adff:fe69:f07b flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false

Register each of the nodes in the VXLAN tunnel zone we created before

midonet> tunnel-zone tzone0 add member host host0 address 192.168.100.227
zone tzone0 host host0 address 192.168.100.227
midonet> tunnel-zone tzone0 add member host host1 address 192.168.100.141
zone tzone0 host host1 address 192.168.100.141
midonet> tunnel-zone tzone0 add member host host2 address 192.168.100.173
zone tzone0 host host2 address 192.168.100.173
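
You can list the members to confirm the three hosts were registered (the list member syntax is my assumption, following the CLI's other verbs):

midonet> tunnel-zone tzone0 list member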

Create networks in Neutron
Create an external network

# neutron net-create ext-net --router:external
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| created_at            | 2016-07-03T14:47:30                  |
| description           |                                      |
| id                    | dc15245e-4391-4514-b489-8976373046a3 |
| is_default            | False                                |
| name                  | ext-net                              |
| port_security_enabled | True                                 |
| provider:network_type | midonet                              |
| router:external       | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tags                  |                                      |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3     |
| updated_at            | 2016-07-03T14:47:30                  |
+-----------------------+--------------------------------------+

Create an external subnet in the network we created before; use your own IP ranges to match your environment

# neutron subnet-create ext-net 192.168.200.0/24 --name ext-subnet \
  --allocation-pool start=192.168.200.225,end=192.168.200.240 \
  --disable-dhcp --gateway 192.168.200.1
Created a new subnet:
+-------------------+--------------------------------------------------------+
| Field             | Value                                                  |
+-------------------+--------------------------------------------------------+
| allocation_pools  | {"start": "192.168.200.225", "end": "192.168.200.240"} |
| cidr              | 192.168.200.0/24                                       |
| created_at        | 2016-07-03T14:50:46                                    |
| description       |                                                        |
| dns_nameservers   |                                                        |
| enable_dhcp       | False                                                  |
| gateway_ip        | 192.168.200.1                                          |
| host_routes       |                                                        |
| id                | 234dcc9a-2878-4799-b564-bf3a1bd52cad                   |
| ip_version        | 4                                                      |
| ipv6_address_mode |                                                        |
| ipv6_ra_mode      |                                                        |
| name              | ext-subnet                                             |
| network_id        | dc15245e-4391-4514-b489-8976373046a3                   |
| subnetpool_id     |                                                        |
| tenant_id         | 2f7ee2716b3b4140be57b4a5b26401e3                       |
| updated_at        | 2016-07-03T14:50:46                                    |
+-------------------+--------------------------------------------------------+

Create a tenant network and a subnet on it

# neutron net-create demo-net
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| created_at            | 2016-07-03T14:51:39                  |
| description           |                                      |
| id                    | 075ba699-dc4c-4625-8e0d-0a258a9aeb7d |
| name                  | demo-net                             |
| port_security_enabled | True                                 |
| provider:network_type | midonet                              |
| router:external       | False                                |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tags                  |                                      |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3     |
| updated_at            | 2016-07-03T14:51:39                  |
+-----------------------+--------------------------------------+
# neutron subnet-create demo-net 10.0.20.0/24 --name demo-subnet 
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.0.20.2", "end": "10.0.20.254"} |
| cidr              | 10.0.20.0/24                                 |
| created_at        | 2016-07-03T14:52:32                          |
| description       |                                              |
| dns_nameservers   |                                              |
| enable_dhcp       | True                                         |
| gateway_ip        | 10.0.20.1                                    |
| host_routes       |                                              |
| id                | b299d899-33a3-4bfa-aff4-fda071545bdf         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | demo-subnet                                  |
| network_id        | 075ba699-dc4c-4625-8e0d-0a258a9aeb7d         |
| subnetpool_id     |                                              |
| tenant_id         | 2f7ee2716b3b4140be57b4a5b26401e3             |
| updated_at        | 2016-07-03T14:52:32                          |
+-------------------+----------------------------------------------+

Create a tenant router

# neutron router-create router1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| description           |                                      |
| external_gateway_info |                                      |
| id                    | 258942d8-9d82-4ebd-b829-c7bdfcc973f5 |
| name                  | router1                              |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3     |
+-----------------------+--------------------------------------+

Attach the tenant subnet interface we created before to the router

# neutron router-interface-add router1 demo-subnet
Added interface 06c85a56-368c-4d79-bbf0-4bb077f163e5 to router router1.

Set the external network as the router gateway

# neutron router-gateway-set router1 ext-net
Set gateway for router router1

Now you can create an instance on the tenant network

# nova boot --flavor m1.tiny --image 80871834-29dd-4100-b038-f5f83f126204 --nic net-id=075ba699-dc4c-4625-8e0d-0a258a9aeb7d test1
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          |                                                     |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000a                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | q2Cq4kxePSLL                                        |
| config_drive                         |                                                     |
| created                              | 2016-07-03T15:46:19Z                                |
| flavor                               | m1.tiny (1)                                         |
| hostId                               |                                                     |
| id                                   | b8aa46f9-186c-4594-8428-f8dbb16a5e16                |
| image                                | cirros image (80871834-29dd-4100-b038-f5f83f126204) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | test1                                               |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | 2f7ee2716b3b4140be57b4a5b26401e3                    |
| updated                              | 2016-07-03T15:46:20Z                                |
| user_id                              | a2482a91a1f14750b372445d28b07c75                    |
+--------------------------------------+-----------------------------------------------------+
# nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks            |
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| b8aa46f9-186c-4594-8428-f8dbb16a5e16 | test1 | ACTIVE | -          | Running     | demo-net=10.0.20.11 |
+--------------------------------------+-------+--------+------------+-------------+---------------------+

Ensure the instance gets an IP address and the metadata service is running properly

# nova console-log test1
... # snippet from the output
Sending discover...
Sending select for 10.0.20.11...
Lease of 10.0.20.11 obtained, lease time 86400
cirros-ds 'net' up at 7.92
checking http://169.254.169.254/2009-04-04/instance-id
successful after 1/20 tries: up 8.22. iid=i-0000000a
...

If you log in to the instance through VNC, you should be able to ping other instances
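
If you also want to reach the instance from the external network, you could assign it a floating IP; a sketch with the same CLIs used in this post (grab the instance's port ID from nova interface-list; the uppercase IDs are placeholders):

# neutron floatingip-create ext-net
# nova interface-list test1
# neutron floatingip-associate FLOATINGIP_ID PORT_ID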

Edge router configuration
Create a new router

# neutron router-create edge-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| description           |                                      |
| external_gateway_info |                                      |
| id                    | 5ecadb64-cb0d-4f95-a00e-aa1dd20a2012 |
| name                  | edge-router                          |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3     |
+-----------------------+--------------------------------------+

Attach the external subnet interface to the router

# neutron router-interface-add edge-router ext-subnet
Added interface e37f1986-c6b1-47f4-8268-02b837ceac17 to router edge-router.

Create an uplink network

# neutron net-create uplink-network --tenant_id admin --provider:network_type uplink
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| created_at            | 2016-07-03T14:57:15                  |
| description           |                                      |
| id                    | 77173ed4-6106-4515-af1c-3683897955f9 |
| name                  | uplink-network                       |
| port_security_enabled | True                                 |
| provider:network_type | uplink                               |
| router:external       | False                                |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tags                  |                                      |
| tenant_id             | admin                                |
| updated_at            | 2016-07-03T14:57:15                  |
+-----------------------+--------------------------------------+

Create a subnet in the uplink network

# neutron subnet-create --tenant_id admin --disable-dhcp --name uplink-subnet uplink-network 192.168.1.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr              | 192.168.1.0/24                                   |
| created_at        | 2016-07-03T15:06:28                              |
| description       |                                                  |
| dns_nameservers   |                                                  |
| enable_dhcp       | False                                            |
| gateway_ip        | 192.168.1.1                                      |
| host_routes       |                                                  |
| id                | 4e98e789-20d3-45fd-a3b5-9bcf02d8a832             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | uplink-subnet                                    |
| network_id        | 77173ed4-6106-4515-af1c-3683897955f9             |
| subnetpool_id     |                                                  |
| tenant_id         | admin                                            |
| updated_at        | 2016-07-03T15:06:28                              |
+-------------------+--------------------------------------------------+

Create a port for each of the gateway nodes; interface_name should match the NIC you want to bind on the gateway node, and the fixed IP is the address that uplink port will use

# neutron port-create uplink-network --binding:host_id gateway1 --binding:profile type=dict interface_name=eth1 --fixed-ip ip_address=192.168.1.199
Created a new port:
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:host_id       | gateway1                                                                             |
| binding:profile       | {"interface_name": "eth1"}                                                           |
| binding:vif_details   | {"port_filter": true}                                                                |
| binding:vif_type      | midonet                                                                              |
| binding:vnic_type     | normal                                                                               |
| created_at            | 2016-07-03T15:10:06                                                                  |
| description           |                                                                                      |
| device_id             |                                                                                      |
| device_owner          |                                                                                      |
| extra_dhcp_opts       |                                                                                      |
| fixed_ips             | {"subnet_id": "4e98e789-20d3-45fd-a3b5-9bcf02d8a832", "ip_address": "192.168.1.199"} |
| id                    | 7b4f54dd-2b41-42ba-9c5c-cda4640dc550                                                 |
| mac_address           | fa:16:3e:44:a8:c9                                                                    |
| name                  |                                                                                      |
| network_id            | 77173ed4-6106-4515-af1c-3683897955f9                                                 |
| port_security_enabled | True                                                                                 |
| security_groups       | 0cf3e33e-dbd6-4b42-a0bd-6679b5eed4e1                                                 |
| status                | ACTIVE                                                                               |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3                                                     |
| updated_at            | 2016-07-03T15:10:06                                                                  |
+-----------------------+--------------------------------------------------------------------------------------+

Attach each of the ports to the edge router

# neutron router-interface-add edge-router port=7b4f54dd-2b41-42ba-9c5c-cda4640dc550
Added interface 7b4f54dd-2b41-42ba-9c5c-cda4640dc550 to router edge-router.

At this point you have to decide whether to use border routers with BGP enabled or static routes.
Use one of the following links to configure your use case:
https://docs.midonet.org/docs/latest/operations-guide/content/bgp_uplink_configuration.html
https://docs.midonet.org/docs/latest/operations-guide/content/static_setup.html

Issues I faced during the configuration of MidoNet

Midolman agent doesn't start:
This was caused by the midolman-env.sh file having more RAM configured than my server actually has.
Edit the file to match your server resources:

# egrep ^MAX_HEAP_SIZE /etc/midolman/midolman-env.sh
MAX_HEAP_SIZE="2048M"



Instances don't boot, with the following error:

could not open /dev/net/tun: Permission denied

I had to remove the br-tun bridges in OVS; otherwise OVS locks the device and Midolman cannot create the tunnel between compute nodes and gateway nodes.

ovs-vsctl del-br br-tun



This post describes my experience integrating MidoNet with OpenStack; some things may not be correct, so if you find any issue, please let me know and I will fix it.
Regards, Eduardo Gonzalez

Rally OpenStack benchmarking from Docker containers

OpenStack Rally is a project under the Big Tent umbrella with the mission of verifying OpenStack environments to ensure SLAs under high load or failover scenarios, as well as verifying cloud services. Rally can also be used for continuous integration and delivery tasks.

Why use Rally inside a Docker container? Rally is a service that is not commonly used in most environments; it is a tool used when new infrastructure changes are made or when an SLA review must be done, so it makes no sense to have a service permanently consuming infrastructure resources, or to block a server for something used only in specific situations. Also, if your OpenStack infrastructure is automated, with a container you can have a nice integration with CI/CD tools like Jenkins.



Main reasons to use Rally inside Docker containers:

  • Quick tests/deployments of Rally tasks
  • Automated testing
  • Cost savings
  • Operators can execute tasks with their own computers, freeing infrastructure resources
  • Re-utilization of resources


Here are my suggestions on how to use Rally inside Docker:

  • Create a new container (automated or not by another tool)
  • Always use an external volume to store Rally report data
  • Execute Rally tasks
  • Export the reports to the volume shared with the Docker host
  • Kill the container (a sketch of this flow follows right after this list)

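A minimal sketch of that one-shot flow, assuming deploy.json and execution.json are kept on the shared volume (names and paths are illustrative):

docker run --rm -ti -v /opt/rally-data:/rally-data:Z egonzalez90/rally-mitaka bash -c '
    rally deployment create --file=/rally-data/deploy.json --name=ci &&
    rally task start /rally-data/execution.json &&
    rally task report --out /rally-data/report.html'

rally task report defaults to the last executed task when no UUID is given, so the HTML report ends up on the Docker host under /opt/rally-data.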

Let’s start with this quick guide:



Clone the repo I created with the Dockerfile

[egonzalez@localhost ~]$ git clone https://github.com/egonzalez90/docker-rally.git

Move to the docker-rally directory

[egonzalez@localhost ~]$ cd docker-rally/

Create the Docker image

[egonzalez@localhost docker-rally]$ docker build -t egonzalez90/rally-mitaka .
Sending build context to Docker daemon  76.8 kB
Step 1 : FROM centos:7
 ---> 904d6c400333
Step 2 : MAINTAINER Eduardo Gonzalez Gutierrez <dabarren@gmail.com>
 ---> Using cache
 ---> ee93bc7747e1
Step 3 : RUN yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-3.noarch.rpm
 ---> Using cache
 ---> 8492ab9ee261
Step 4 : RUN yum update -y
 ---> Using cache
 ---> 1374340eb39a
Step 5 : RUN yum -y install         openstack-rally         gcc         libffi-devel         python-devel         openssl-devel         gmp-devel         libxml2-devel         libxslt-devel         postgresql-devel         redhat-rpm-config         wget         openstack-selinux         openstack-utils &&         yum clean all
 ---> Using cache
 ---> 9b65e4a281be
Step 6 : RUN rally-manage --config-file /etc/rally/rally.conf db recreate
 ---> Using cache
 ---> dc4f3dbc1505
Successfully built dc4f3dbc1505

Start the Rally container with a pseudo-TTY and a volume to store Rally execution data

[egonzalez@localhost docker-rally]$ docker run -ti -v /opt/rally-data/:/rally-data:Z egonzalez90/rally-mitaka
[root@07766ba700e8 /]# 

Create a file called deploy.json with the admin info of your OpenStack environment

[root@07766ba700e8 /]# vi deploy.json

{
    "type": "ExistingCloud",
    "auth_url": "http://controller:5000/v2.0",
    "region_name": "RegionOne",
    "admin": {
        "username": "admin",
        "password": "my_password",
        "tenant_name": "admin"
    }
}

Create a deployment with the JSON file we previously created

[root@07766ba700e8 /]# rally deployment create --file=deploy.json --name=existing
2016-06-15 09:42:25.428 25 INFO rally.deployment.engine [-] Deployment a5162111-02a5-458f-bb59-f822cab1aa93 | Starting:  OpenStack cloud deployment.
2016-06-15 09:42:25.478 25 INFO rally.deployment.engine [-] Deployment a5162111-02a5-458f-bb59-f822cab1aa93 | Completed: OpenStack cloud deployment.
+--------------------------------------+----------------------------+----------+------------------+--------+
| uuid                                 | created_at                 | name     | status           | active |
+--------------------------------------+----------------------------+----------+------------------+--------+
| a5162111-02a5-458f-bb59-f822cab1aa93 | 2016-06-15 09:42:25.391691 | existing | deploy->finished |        |
+--------------------------------------+----------------------------+----------+------------------+--------+
Using deployment: a5162111-02a5-458f-bb59-f822cab1aa93
~/.rally/openrc was updated

HINTS:
* To get your cloud resources, run:
        rally show [flavors|images|keypairs|networks|secgroups]

* To use standard OpenStack clients, set up your env by running:
        source ~/.rally/openrc
  OpenStack clients are now configured, e.g run:
        glance image-list

Source the openrc file Rally created with your user info and test that you can connect to Glance

[root@07766ba700e8 /]# source ~/.rally/openrc

[root@07766ba700e8 /]# glance  image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 1c4fc8a6-3ea7-433c-8ece-a14bbaf861e2 | cirros |
+--------------------------------------+--------+

Check deployment status

[root@07766ba700e8 /]# rally deployment check
keystone endpoints are valid and following services are available:
+-------------+----------------+-----------+
| services    | type           | status    |
+-------------+----------------+-----------+
| __unknown__ | volumev2       | Available |
| ceilometer  | metering       | Available |
| cinder      | volume         | Available |
| cloud       | cloudformation | Available |
| glance      | image          | Available |
| heat        | orchestration  | Available |
| keystone    | identity       | Available |
| neutron     | network        | Available |
| nova        | compute        | Available |
+-------------+----------------+-----------+
NOTE: '__unknown__' service name means that Keystone service catalog doesn't return name for this service and Rally can not identify service by its type. BUT you still can use such services with api_versions context, specifying type of service (execute `rally plugin show api_versions` for more details).

Create a task execution file; this test will check whether Nova can boot and then delete some instances

[root@07766ba700e8 /]# vi execution.json

{
  "NovaServers.boot_and_delete_server": [
    {
      "runner": {
        "type": "constant", 
        "concurrency": 2, 
        "times": 10
      }, 
      "args": {
        "force_delete": false, 
        "flavor": {
          "name": "m1.tiny"
        }, 
        "image": {
          "name": "cirros"
        }
      }, 
      "context": {
        "users": {
          "project_domain": "default", 
          "users_per_tenant": 2, 
          "tenants": 3, 
          "resource_management_workers": 30, 
          "user_domain": "default"
        }
      }
    }
  ]
}

Run the task with the following command

[root@07766ba700e8 /]# rally task start execution.json
--------------------------------------------------------------------------------
 Preparing input task
--------------------------------------------------------------------------------

Input task is:
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "m1.tiny"
                },
                "image": {
                    "name": "cirros"
                },
                "force_delete": false
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}

Task syntax is correct :)
2016-06-15 09:48:11.556 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation.
2016-06-15 09:48:11.579 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of scenarios names.
2016-06-15 09:48:11.581 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of scenarios names.
2016-06-15 09:48:11.581 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of syntax.
2016-06-15 09:48:11.587 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of syntax.
2016-06-15 09:48:11.588 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of semantic.
2016-06-15 09:48:11.588 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation check cloud.
2016-06-15 09:48:11.694 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation check cloud.
2016-06-15 09:48:11.700 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Enter context: `users`
2016-06-15 09:48:12.004 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Enter context: `users`
2016-06-15 09:48:12.106 101 WARNING rally.task.types [-] FlavorResourceType is deprecated in Rally v0.3.2; use the equivalent resource plugin name instead
2016-06-15 09:48:12.207 101 WARNING rally.task.types [-] ImageResourceType is deprecated in Rally v0.3.2; use the equivalent resource plugin name instead
2016-06-15 09:48:12.395 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Exit context: `users`
2016-06-15 09:48:13.546 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Exit context: `users`
2016-06-15 09:48:13.546 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of semantic.
2016-06-15 09:48:13.547 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation.
Task config is valid :)
--------------------------------------------------------------------------------
 Task  137eb997-d1f8-4d3f-918a-8aec3db7500f: started
--------------------------------------------------------------------------------

Benchmarking... This can take a while...

To track task status use:

        rally task status
        or
        rally task detailed

Using task: 137eb997-d1f8-4d3f-918a-8aec3db7500f
2016-06-15 09:48:13.555 101 INFO rally.api [-] Benchmark Task 137eb997-d1f8-4d3f-918a-8aec3db7500f on Deployment a5162111-02a5-458f-bb59-f822cab1aa93
2016-06-15 09:48:13.558 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Benchmarking.
2016-06-15 09:48:13.586 101 INFO rally.task.engine [-] Running benchmark with key:
{
  "kw": {
    "runner": {
      "type": "constant",
      "concurrency": 2,
      "times": 10
    },
    "args": {
      "force_delete": false,
      "flavor": {
        "name": "m1.tiny"
      },
      "image": {
        "name": "cirros"
      }
    },
    "context": {
      "users": {
        "users_per_tenant": 2,
        "tenants": 3
      }
    }
  },
  "name": "NovaServers.boot_and_delete_server",
  "pos": 0
}
2016-06-15 09:48:13.592 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Enter context: `users`
2016-06-15 09:48:14.994 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Enter context: `users`
2016-06-15 09:48:15.244 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 0 START
2016-06-15 09:48:15.245 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 1 START
2016-06-15 09:48:16.975 292 WARNING rally.common.logging [-] 'wait_for' is deprecated in Rally v0.1.2: Use wait_for_status instead.
2016-06-15 09:48:17.095 293 WARNING rally.common.logging [-] 'wait_for' is deprecated in Rally v0.1.2: Use wait_for_status instead.
2016-06-15 09:49:21.024 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 0 END: OK
2016-06-15 09:49:21.028 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 2 START
2016-06-15 09:49:32.109 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 1 END: OK
2016-06-15 09:49:32.112 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 3 START
2016-06-15 09:49:41.504 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 2 END: OK
2016-06-15 09:49:41.508 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 4 START
2016-06-15 09:49:52.455 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 3 END: OK
2016-06-15 09:49:52.462 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 5 START
2016-06-15 09:50:01.907 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 4 END: OK
2016-06-15 09:50:01.918 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 6 START
2016-06-15 09:50:12.692 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 5 END: OK
2016-06-15 09:50:12.694 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 7 START
2016-06-15 09:50:23.122 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 6 END: OK
2016-06-15 09:50:23.131 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 8 START
2016-06-15 09:50:33.322 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 7 END: OK
2016-06-15 09:50:33.332 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 9 START
2016-06-15 09:50:43.285 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 8 END: OK
2016-06-15 09:50:53.422 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 9 END: OK
2016-06-15 09:50:53.436 101 INFO rally.plugins.openstack.context.cleanup.user [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  user resources cleanup
2016-06-15 09:50:55.244 101 INFO rally.plugins.openstack.context.cleanup.user [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: user resources cleanup
2016-06-15 09:50:55.245 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Exit context: `users`
2016-06-15 09:50:57.438 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Exit context: `users`
2016-06-15 09:50:58.023 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Benchmarking.

--------------------------------------------------------------------------------
Task 137eb997-d1f8-4d3f-918a-8aec3db7500f: finished
--------------------------------------------------------------------------------

test scenario NovaServers.boot_and_delete_server
args position 0
args values:
{
  "runner": {
    "type": "constant",
    "concurrency": 2,
    "times": 10
  },
  "args": {
    "force_delete": false,
    "flavor": {
      "name": "m1.tiny"
    },
    "image": {
      "name": "cirros"
    }
  },
  "context": {
    "users": {
      "project_domain": "default",
      "users_per_tenant": 2,
      "tenants": 3,
      "resource_management_workers": 30,
      "user_domain": "default"
    }
  }
}

+-----------------------------------------------------------------------------------------------------------------------+
|                                                 Response Times (sec)                                                  |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
| Action             | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
| nova.boot_server   | 17.84     | 18.158       | 64.433       | 69.419       | 74.405    | 28.299    | 100.0%  | 10    |
| nova.delete_server | 2.24      | 2.275        | 2.454        | 2.456        | 2.458     | 2.317     | 100.0%  | 10    |
| total              | 20.09     | 20.437       | 66.888       | 71.875       | 76.863    | 30.616    | 100.0%  | 10    |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+

Load duration: 158.199862003
Full duration: 163.846753836

HINTS:
* To plot HTML graphics with this data, run:
        rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --out output.html

* To generate a JUnit report, run:
        rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --junit --out output.xml

* To get raw JSON output of task results, run:
        rally task results 137eb997-d1f8-4d3f-918a-8aec3db7500f

After a while, you will receive an execution summary, which you can export to a nicely styled HTML report with the following command.
Use the volume we created with the Docker host to save the report files.

[root@07766ba700e8 /]# rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --html-static --out /rally-data/output.html
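
The same task results can be exported to the shared volume in other formats as well, for example the JUnit report mentioned in the HINTS above:

[root@07766ba700e8 /]# rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --junit --out /rally-data/output.xml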

Open the output file in a Web browser and review the report.
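
Because /rally-data is the volume shared with the Docker host, you can open the report directly from the host side; a sketch, assuming the volume was mounted from a rally-data directory on the host and that xdg-open (or any browser) is available there:

$ xdg-open rally-data/output.html  # or point your browser at the file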
Regards
