Rally OpenStack benchmarking from Docker containers

OpenStack Rally is a project under the Big Tent umbrella whose mission is to verify OpenStack environments, ensuring SLAs are met under high load or failover scenarios, and to validate cloud services. Rally can also be used for continuous integration and delivery tasks.

Why use Rally inside a Docker container? Rally is not a service that runs constantly in most environments; it is a tool used when infrastructure changes are made or when an SLA review must be done, so it makes no sense to have a service consuming infrastructure resources, or to block a server, for something used only in specific situations. Also, if your OpenStack infrastructure is automated, a container gives you nice integration with CI/CD tools like Jenkins.



Main reasons to use Rally inside Docker containers:

  • Quick tests/deployments of Rally tasks
  • Automated testing
  • Cost savings
  • Operators can execute tasks with their own computers, freeing infrastructure resources
  • Reuse of resources


Here are my suggestions for how to use Rally inside Docker (a runnable sketch follows the list):

  • Create a new container (automated or not by another tool)
  • Always use an external volume to store Rally report data
  • Execute Rally tasks
  • Export the reports to the volume shared with the Docker host
  • Kill the container
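
A minimal sketch of that flow, assuming the egonzalez90/rally-mitaka image built below, a /opt/rally-data directory on the Docker host, and deploy.json/execution.json files (shown later in this guide) already copied into that directory:

#!/bin/bash
# Disposable Rally run: --rm removes the container once the task ends,
# so only the reports written to the shared volume survive.
# Note: `rally task report` without a UUID should default to the task
# just launched; pass the UUID explicitly if your Rally version differs.
docker run --rm -v /opt/rally-data:/rally-data:Z egonzalez90/rally-mitaka \
    bash -c "rally deployment create --file=/rally-data/deploy.json --name=ci && \
             rally task start /rally-data/execution.json && \
             rally task report --out /rally-data/output.html"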


Let’s start with this quick guide:



Clone the repo I created with the Dockerfile

[egonzalez@localhost ~]$ git clone https://github.com/egonzalez90/docker-rally.git

Move to the docker-rally directory

[egonzalez@localhost ~]$ cd docker-rally/

Create the Docker image

[egonzalez@localhost docker-rally]$ docker build -t egonzalez90/rally-mitaka .
Sending build context to Docker daemon  76.8 kB
Step 1 : FROM centos:7
 ---> 904d6c400333
Step 2 : MAINTAINER Eduardo Gonzalez Gutierrez <dabarren@gmail.com>
 ---> Using cache
 ---> ee93bc7747e1
Step 3 : RUN yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-3.noarch.rpm
 ---> Using cache
 ---> 8492ab9ee261
Step 4 : RUN yum update -y
 ---> Using cache
 ---> 1374340eb39a
Step 5 : RUN yum -y install         openstack-rally         gcc         libffi-devel         python-devel         openssl-devel         gmp-devel         libxml2-devel         libxslt-devel         postgresql-devel         redhat-rpm-config         wget         openstack-selinux         openstack-utils &&         yum clean all
 ---> Using cache
 ---> 9b65e4a281be
Step 6 : RUN rally-manage --config-file /etc/rally/rally.conf db recreate
 ---> Using cache
 ---> dc4f3dbc1505
Successfully built dc4f3dbc1505

Start the Rally container with a pseudo-TTY and a volume to store Rally execution data

[egonzalez@localhost docker-rally]$ docker run -ti -v /opt/rally-data/:/rally-data:Z egonzalez90/rally-mitaka
[root@07766ba700e8 /]# 

Create a file called deploy.json with the admin info of your OpenStack environment

[root@07766ba700e8 /]# vi deploy.json

{
    "type": "ExistingCloud",
    "auth_url": "http://controller:5000/v2.0",
    "region_name": "RegionOne",
    "admin": {
        "username": "admin",
        "password": "my_password",
        "tenant_name": "admin"
    }
}

Create a deployment with the JSON file we previously created

[root@07766ba700e8 /]# rally deployment create --file=deploy.json --name=existing
2016-06-15 09:42:25.428 25 INFO rally.deployment.engine [-] Deployment a5162111-02a5-458f-bb59-f822cab1aa93 | Starting:  OpenStack cloud deployment.
2016-06-15 09:42:25.478 25 INFO rally.deployment.engine [-] Deployment a5162111-02a5-458f-bb59-f822cab1aa93 | Completed: OpenStack cloud deployment.
+--------------------------------------+----------------------------+----------+------------------+--------+
| uuid                                 | created_at                 | name     | status           | active |
+--------------------------------------+----------------------------+----------+------------------+--------+
| a5162111-02a5-458f-bb59-f822cab1aa93 | 2016-06-15 09:42:25.391691 | existing | deploy->finished |        |
+--------------------------------------+----------------------------+----------+------------------+--------+
Using deployment: a5162111-02a5-458f-bb59-f822cab1aa93
~/.rally/openrc was updated

HINTS:
* To get your cloud resources, run:
        rally show [flavors|images|keypairs|networks|secgroups]

* To use standard OpenStack clients, set up your env by running:
        source ~/.rally/openrc
  OpenStack clients are now configured, e.g run:
        glance image-list

Source the openrc file Rally created with your user info and check that you can connect to glance

[root@07766ba700e8 /]# source ~/.rally/openrc

[root@07766ba700e8 /]# glance  image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 1c4fc8a6-3ea7-433c-8ece-a14bbaf861e2 | cirros |
+--------------------------------------+--------+

Check deployment status

[root@07766ba700e8 /]# rally deployment check
keystone endpoints are valid and following services are available:
+-------------+----------------+-----------+
| services    | type           | status    |
+-------------+----------------+-----------+
| __unknown__ | volumev2       | Available |
| ceilometer  | metering       | Available |
| cinder      | volume         | Available |
| cloud       | cloudformation | Available |
| glance      | image          | Available |
| heat        | orchestration  | Available |
| keystone    | identity       | Available |
| neutron     | network        | Available |
| nova        | compute        | Available |
+-------------+----------------+-----------+
NOTE: '__unknown__' service name means that Keystone service catalog doesn't return name for this service and Rally can not identify service by its type. BUT you still can use such services with api_versions context, specifying type of service (execute `rally plugin show api_versions` for more details).
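
If you need to benchmark one of those __unknown__ services anyway, the api_versions context can pin the version and service type inside a task file; a sketch from my reading of that plugin (verify the field names with `rally plugin show api_versions`):

"context": {
  "api_versions": {
    "cinder": {
      "version": 2,
      "service_type": "volumev2"
    }
  }
}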

Create a task execution file. This test checks whether Nova can boot and delete servers: with the constant runner below, Rally runs 10 iterations in total (times), keeping 2 of them in flight at once (concurrency)

[root@07766ba700e8 /]# vi execution.json

{
  "NovaServers.boot_and_delete_server": [
    {
      "runner": {
        "type": "constant", 
        "concurrency": 2, 
        "times": 10
      }, 
      "args": {
        "force_delete": false, 
        "flavor": {
          "name": "m1.tiny"
        }, 
        "image": {
          "name": "cirros"
        }
      }, 
      "context": {
        "users": {
          "project_domain": "default", 
          "users_per_tenant": 2, 
          "tenants": 3, 
          "resource_management_workers": 30, 
          "user_domain": "default"
        }
      }
    }
  ]
}

Run the task with the following command

[root@07766ba700e8 /]# rally task start execution.json
--------------------------------------------------------------------------------
 Preparing input task
--------------------------------------------------------------------------------

Input task is:
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "m1.tiny"
                },
                "image": {
                    "name": "cirros"
                },
                "force_delete": false
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}

Task syntax is correct :)
2016-06-15 09:48:11.556 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation.
2016-06-15 09:48:11.579 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of scenarios names.
2016-06-15 09:48:11.581 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of scenarios names.
2016-06-15 09:48:11.581 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of syntax.
2016-06-15 09:48:11.587 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of syntax.
2016-06-15 09:48:11.588 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation of semantic.
2016-06-15 09:48:11.588 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Task validation check cloud.
2016-06-15 09:48:11.694 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation check cloud.
2016-06-15 09:48:11.700 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Enter context: `users`
2016-06-15 09:48:12.004 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Enter context: `users`
2016-06-15 09:48:12.106 101 WARNING rally.task.types [-] FlavorResourceType is deprecated in Rally v0.3.2; use the equivalent resource plugin name instead
2016-06-15 09:48:12.207 101 WARNING rally.task.types [-] ImageResourceType is deprecated in Rally v0.3.2; use the equivalent resource plugin name instead
2016-06-15 09:48:12.395 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Exit context: `users`
2016-06-15 09:48:13.546 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Exit context: `users`
2016-06-15 09:48:13.546 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation of semantic.
2016-06-15 09:48:13.547 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Task validation.
Task config is valid :)
--------------------------------------------------------------------------------
 Task  137eb997-d1f8-4d3f-918a-8aec3db7500f: started
--------------------------------------------------------------------------------

Benchmarking... This can take a while...

To track task status use:

        rally task status
        or
        rally task detailed

Using task: 137eb997-d1f8-4d3f-918a-8aec3db7500f
2016-06-15 09:48:13.555 101 INFO rally.api [-] Benchmark Task 137eb997-d1f8-4d3f-918a-8aec3db7500f on Deployment a5162111-02a5-458f-bb59-f822cab1aa93
2016-06-15 09:48:13.558 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Benchmarking.
2016-06-15 09:48:13.586 101 INFO rally.task.engine [-] Running benchmark with key:
{
  "kw": {
    "runner": {
      "type": "constant",
      "concurrency": 2,
      "times": 10
    },
    "args": {
      "force_delete": false,
      "flavor": {
        "name": "m1.tiny"
      },
      "image": {
        "name": "cirros"
      }
    },
    "context": {
      "users": {
        "users_per_tenant": 2,
        "tenants": 3
      }
    }
  },
  "name": "NovaServers.boot_and_delete_server",
  "pos": 0
}
2016-06-15 09:48:13.592 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Enter context: `users`
2016-06-15 09:48:14.994 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Enter context: `users`
2016-06-15 09:48:15.244 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 0 START
2016-06-15 09:48:15.245 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 1 START
2016-06-15 09:48:16.975 292 WARNING rally.common.logging [-] 'wait_for' is deprecated in Rally v0.1.2: Use wait_for_status instead.
2016-06-15 09:48:17.095 293 WARNING rally.common.logging [-] 'wait_for' is deprecated in Rally v0.1.2: Use wait_for_status instead.
2016-06-15 09:49:21.024 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 0 END: OK
2016-06-15 09:49:21.028 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 2 START
2016-06-15 09:49:32.109 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 1 END: OK
2016-06-15 09:49:32.112 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 3 START
2016-06-15 09:49:41.504 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 2 END: OK
2016-06-15 09:49:41.508 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 4 START
2016-06-15 09:49:52.455 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 3 END: OK
2016-06-15 09:49:52.462 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 5 START
2016-06-15 09:50:01.907 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 4 END: OK
2016-06-15 09:50:01.918 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 6 START
2016-06-15 09:50:12.692 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 5 END: OK
2016-06-15 09:50:12.694 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 7 START
2016-06-15 09:50:23.122 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 6 END: OK
2016-06-15 09:50:23.131 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 8 START
2016-06-15 09:50:33.322 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 7 END: OK
2016-06-15 09:50:33.332 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 9 START
2016-06-15 09:50:43.285 292 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 8 END: OK
2016-06-15 09:50:53.422 293 INFO rally.task.runner [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | ITER: 9 END: OK
2016-06-15 09:50:53.436 101 INFO rally.plugins.openstack.context.cleanup.user [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  user resources cleanup
2016-06-15 09:50:55.244 101 INFO rally.plugins.openstack.context.cleanup.user [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: user resources cleanup
2016-06-15 09:50:55.245 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Starting:  Exit context: `users`
2016-06-15 09:50:57.438 101 INFO rally.plugins.openstack.context.keystone.users [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Exit context: `users`
2016-06-15 09:50:58.023 101 INFO rally.task.engine [-] Task 137eb997-d1f8-4d3f-918a-8aec3db7500f | Completed: Benchmarking.

--------------------------------------------------------------------------------
Task 137eb997-d1f8-4d3f-918a-8aec3db7500f: finished
--------------------------------------------------------------------------------

test scenario NovaServers.boot_and_delete_server
args position 0
args values:
{
  "runner": {
    "type": "constant",
    "concurrency": 2,
    "times": 10
  },
  "args": {
    "force_delete": false,
    "flavor": {
      "name": "m1.tiny"
    },
    "image": {
      "name": "cirros"
    }
  },
  "context": {
    "users": {
      "project_domain": "default",
      "users_per_tenant": 2,
      "tenants": 3,
      "resource_management_workers": 30,
      "user_domain": "default"
    }
  }
}

+-----------------------------------------------------------------------------------------------------------------------+
|                                                 Response Times (sec)                                                  |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
| Action             | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
| nova.boot_server   | 17.84     | 18.158       | 64.433       | 69.419       | 74.405    | 28.299    | 100.0%  | 10    |
| nova.delete_server | 2.24      | 2.275        | 2.454        | 2.456        | 2.458     | 2.317     | 100.0%  | 10    |
| total              | 20.09     | 20.437       | 66.888       | 71.875       | 76.863    | 30.616    | 100.0%  | 10    |
+--------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+

Load duration: 158.199862003
Full duration: 163.846753836

HINTS:
* To plot HTML graphics with this data, run:
        rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --out output.html

* To generate a JUnit report, run:
        rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --junit --out output.xml

* To get raw JSON output of task results, run:
        rally task results 137eb997-d1f8-4d3f-918a-8aec3db7500f

After a while, you will get an execution summary, which you can export to a nicely formatted report file with the following command.
Use the volume shared with the Docker host to save the report files.

[root@07766ba700e8 /]# rally task report 137eb997-d1f8-4d3f-918a-8aec3db7500f --html-static --out /rally-data/output.html

Open the output file from a web browser and review the report.
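
If the Docker host is headless, one quick way to reach the report is to serve the shared volume over HTTP, a sketch assuming Python 2 on the host:

cd /opt/rally-data
python -m SimpleHTTPServer 8000

Then browse to http://<docker-host>:8000/output.html.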
Regards

OpenStack Kolla deployment from RDO packages

OpenStack, Ansible, Docker, production ready, HA, etc. Few things are as interesting as Kolla.
Kolla includes everything you need to create, maintain, and operate an OpenStack environment.
All the services are installed inside Docker containers across the nodes you specify, with high availability and load balancing between services by default, so you don't need an external tool for those purposes.
In future posts I will talk in more detail about Kolla and how it works, plus more tips and deployment types. For now, go to the official documentation.
For this demo, I will use:

  • 1x deployment node: a laptop with 12 GB of RAM and a single CPU
  • 3x target nodes: VMs with 24 GB of RAM and 2 vCPUs each
  • All nodes connected to a shared 300 Mb/s connection

ALL NODES

Before deploying OpenStack with Kolla, we need to ensure all the nodes have their time synchronized.

yum -y install ntp
systemctl enable ntpd.service
systemctl start ntpd.service

Next, stop and disable the libvirt service to avoid conflicts with the libvirt containers.

systemctl stop libvirtd
systemctl disable libvirtd

Install docker

curl -sSL https://get.docker.io | bash

Add the user you are using to the docker group so this user can issue Docker commands without sudo. Log off and log in again to apply the change.

sudo usermod -aG docker root

Create a systemd drop-in file called kolla.conf with the following content.

vi /etc/systemd/system/docker.service.d/kolla.conf
[Service]
MountFlags=shared
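
On a fresh host the drop-in directory may not exist yet, and systemd must re-read its unit files for the drop-in to take effect on the restart below:

mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload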

Restart and enable docker service

systemctl restart docker
systemctl enable docker

Install some packages that are needed by the next steps.

yum install -y python-devel libffi-devel openssl-devel gcc git python-pip python-openstackclient

DEPLOY NODE

Install EPEL repository

yum install -y epel-release

Install ansible

yum install -y ansible

Clone the Kolla stable/mitaka code.

git clone https://git.openstack.org/openstack/kolla -b stable/mitaka

Install kolla and dependencies.

pip install kolla/

Copy kolla configuration files to /etc/

cd kolla
cp -r etc/kolla /etc/

Generate the kolla-build configuration file

pip install tox
tox -e genconfig

Edit kolla-build file with the following content

vi /etc/kolla/kolla-build.conf 

base = centos
base_tag = mitaka
push = true
install_type = rdo
registry = docker.io

Log in with your Docker Hub account. Sometimes login doesn't work as expected; review the auth URL in the authentication file in the ~/.docker/ directory. After the Austin Summit I will post the exact changes I made to the URL.

docker login
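
For reference, `docker login` stores credentials in ~/.docker/config.json; the registry key is the part to review, and the file looks roughly like this (layout only, not my exact contents):

{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "<base64-encoded user:password>",
            "email": "you@example.com"
        }
    }
}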

Create and push the images to your Docker Hub account.
If the images are not automatically pushed to the remote repository, push them manually once image creation has finished.
Building the images can take several hours; in my experience they were sometimes built in 3 hours and other times in 9, and much longer if you push them to Docker Hub instead of a private registry.

kolla-build -n egonzalez90 --push
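
If some images are left unpushed, a rough one-liner to push everything built with this guide's namespace and tag by hand:

docker images | grep 'egonzalez90.*mitaka' | awk '{print $1":"$2}' | xargs -n1 docker push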

Review all the Docker images Kolla has created.

[egonzalez@localhost kolla]$ docker images | grep mitaka
egonzalez90/centos-binary-cinder-api                  mitaka              ba2cca4b09fa        16 hours ago        814.5 MB
egonzalez90/centos-binary-cinder-volume               mitaka              1d31a049f327        16 hours ago        802.4 MB
egonzalez90/centos-binary-cinder-rpcbind              mitaka              5f7bc909f41b        16 hours ago        804.2 MB
egonzalez90/centos-binary-mesos-slave                 mitaka              57a0e00d1901        16 hours ago        651.6 MB
egonzalez90/centos-binary-swift-rsyncd                mitaka              36f5b9c9d4c5        16 hours ago        565.3 MB
egonzalez90/centos-binary-cinder-backup               mitaka              a7a8161398fe        16 hours ago        775.3 MB
egonzalez90/centos-binary-cinder-scheduler            mitaka              a5c5b79a25f6        16 hours ago        775.3 MB
egonzalez90/centos-binary-marathon                    mitaka              704ce8261a7f        16 hours ago        770.4 MB
egonzalez90/centos-binary-chronos                     mitaka              974525562cea        16 hours ago        732.8 MB
egonzalez90/centos-binary-swift-object                mitaka              e09b529bad32        16 hours ago        582.9 MB
egonzalez90/centos-binary-swift-account               mitaka              573b8e5bd3c7        16 hours ago        582.9 MB
egonzalez90/centos-binary-swift-container             mitaka              c63d9a5be014        16 hours ago        583.2 MB
egonzalez90/centos-binary-mesos-master                mitaka              2610881df9c0        16 hours ago        536.8 MB
egonzalez90/centos-binary-swift-proxy-server          mitaka              3632ee65ace9        16 hours ago        584.7 MB
egonzalez90/centos-binary-ceilometer-api              mitaka              808cd12e9287        16 hours ago        598.6 MB
egonzalez90/centos-binary-ceilometer-compute          mitaka              59e7a5e3bd79        16 hours ago        612.6 MB
egonzalez90/centos-binary-ceilometer-central          mitaka              de094dabf9fd        16 hours ago        612.6 MB
egonzalez90/centos-binary-magnum-api                  mitaka              6ce41a1856f8        16 hours ago        690 MB
egonzalez90/centos-binary-glance-api                  mitaka              2a1c8702341a        16 hours ago        688.5 MB
egonzalez90/centos-binary-ceilometer-notification     mitaka              7ccb484383ae        16 hours ago        594 MB
egonzalez90/centos-binary-ceilometer-collector        mitaka              c2e043f6e2b1        16 hours ago        595.4 MB
egonzalez90/centos-binary-magnum-conductor            mitaka              19674f37dc9b        16 hours ago        790.8 MB
egonzalez90/centos-binary-aodh-api                    mitaka              c35c48dee3c4        16 hours ago        593.2 MB
egonzalez90/centos-binary-glance-registry             mitaka              a72949aaaf45        16 hours ago        688.5 MB
egonzalez90/centos-binary-aodh-expirer                mitaka              ffa9bc296a02        16 hours ago        593.2 MB
egonzalez90/centos-binary-aodh-evaluator              mitaka              c214eac9bbd9        16 hours ago        593.2 MB
egonzalez90/centos-binary-neutron-metadata-agent      mitaka              0cea7ba50b8e        16 hours ago        817.9 MB
egonzalez90/centos-binary-aodh-listener               mitaka              c5d255b20d4e        16 hours ago        593.2 MB
egonzalez90/centos-binary-aodh-notifier               mitaka              dbd4c8d5515d        16 hours ago        593.2 MB
egonzalez90/centos-binary-neutron-server              mitaka              688d6800684b        16 hours ago        817.9 MB
egonzalez90/centos-binary-gnocchi-api                 mitaka              5f8daeb7a511        17 hours ago        840.8 MB
egonzalez90/centos-binary-neutron-openvswitch-agent   mitaka              3c2f03d388fa        17 hours ago        843.4 MB
egonzalez90/centos-binary-nova-compute                mitaka              aef19eb18b41        17 hours ago        1.076 GB
egonzalez90/centos-binary-neutron-linuxbridge-agent   mitaka              672550e296af        17 hours ago        843.1 MB
egonzalez90/centos-binary-nova-libvirt                mitaka              46cd6d68a29d        17 hours ago        1.127 GB
egonzalez90/centos-binary-gnocchi-statsd              mitaka              8369b97d0fb7        17 hours ago        840.7 MB
egonzalez90/centos-binary-neutron-dhcp-agent          mitaka              b6a6de5c4d3f        17 hours ago        817.9 MB
egonzalez90/centos-binary-neutron-l3-agent            mitaka              6d4956cd63e6        17 hours ago        817.9 MB
egonzalez90/centos-binary-nova-spicehtml5proxy        mitaka              6db500ef18b0        17 hours ago        629.5 MB
egonzalez90/centos-binary-nova-compute-ironic         mitaka              89f4f8ba32b9        17 hours ago        1.04 GB
egonzalez90/centos-binary-nova-conductor              mitaka              71e00696b65a        17 hours ago        629.4 MB
egonzalez90/centos-binary-nova-novncproxy             mitaka              4153ed5cdfa5        17 hours ago        630 MB
egonzalez90/centos-binary-nova-api                    mitaka              7bf702527a50        17 hours ago        629.4 MB
egonzalez90/centos-binary-nova-ssh                    mitaka              0c71e10ba8bb        17 hours ago        630.4 MB
egonzalez90/centos-binary-nova-network                mitaka              ff2ed3dc65ab        17 hours ago        630.4 MB
egonzalez90/centos-binary-heat-api                    mitaka              3f3bac2b91b4        17 hours ago        592.2 MB
egonzalez90/centos-binary-nova-consoleauth            mitaka              f7f558ed3061        17 hours ago        629.5 MB
egonzalez90/centos-binary-nova-scheduler              mitaka              f9b8750d4812        17 hours ago        629.4 MB
egonzalez90/centos-binary-heat-engine                 mitaka              69b416b2481c        17 hours ago        592.2 MB
egonzalez90/centos-binary-heat-api-cfn                mitaka              220acaf5f692        18 hours ago        592.2 MB
egonzalez90/centos-binary-manila-api                  mitaka              3e21270b4e91        18 hours ago        588.4 MB
egonzalez90/centos-binary-trove-api                   mitaka              68868b718307        18 hours ago        585.8 MB
egonzalez90/centos-binary-manila-share                mitaka              45e069ec5233        18 hours ago        637.8 MB
egonzalez90/centos-binary-trove-guestagent            mitaka              484a9b5b5631        18 hours ago        586.1 MB
egonzalez90/centos-binary-trove-conductor             mitaka              2817941fed43        18 hours ago        585.8 MB
egonzalez90/centos-binary-trove-taskmanager           mitaka              16fc85e299a1        18 hours ago        585.8 MB
egonzalez90/centos-binary-manila-scheduler            mitaka              075beb4c058e        18 hours ago        588.4 MB
egonzalez90/centos-binary-designate-api               mitaka              0dfb2e4b971d        18 hours ago        589.8 MB
egonzalez90/centos-binary-designate-central           mitaka              d4ab5d846989        18 hours ago        589.8 MB
egonzalez90/centos-binary-designate-poolmanager       mitaka              17570055aa01        18 hours ago        594.3 MB
egonzalez90/centos-binary-designate-sink              mitaka              16e1113010dd        18 hours ago        589.8 MB
egonzalez90/centos-binary-designate-backend-bind9     mitaka              a83d15642a07        18 hours ago        594.3 MB
egonzalez90/centos-binary-cinder-base                 mitaka              ebc196468197        18 hours ago        775.3 MB
egonzalez90/centos-binary-ironic-pxe                  mitaka              3b825ca5e758        18 hours ago        595.2 MB
egonzalez90/centos-binary-ironic-api                  mitaka              53b3a144266a        18 hours ago        591.6 MB
egonzalez90/centos-binary-zookeeper                   mitaka              91270c923346        18 hours ago        544.8 MB
egonzalez90/centos-binary-designate-mdns              mitaka              2de6dfb55068        18 hours ago        589.8 MB
egonzalez90/centos-binary-ironic-inspector            mitaka              631d5c362116        18 hours ago        597.4 MB
egonzalez90/centos-binary-ironic-conductor            mitaka              aceccff4bef0        18 hours ago        620.3 MB
egonzalez90/centos-binary-horizon                     mitaka              b8a5f7db8daf        18 hours ago        690.6 MB
egonzalez90/centos-binary-swift-base                  mitaka              c98164063b84        18 hours ago        563.7 MB
egonzalez90/centos-binary-mesos-base                  mitaka              a50e0e1e8edc        18 hours ago        536.5 MB
egonzalez90/centos-binary-ceilometer-base             mitaka              07164b2054b8        18 hours ago        574.2 MB
egonzalez90/centos-binary-glance-base                 mitaka              b40e34f047d7        18 hours ago        688.5 MB
egonzalez90/centos-binary-magnum-base                 mitaka              bad9157e57ba        18 hours ago        668.3 MB
egonzalez90/centos-binary-aodh-base                   mitaka              9a919ceb1213        19 hours ago        573.5 MB
egonzalez90/centos-binary-neutron-base                mitaka              7669e9646a22        19 hours ago        817.9 MB
egonzalez90/centos-binary-gnocchi-base                mitaka              509a5c7395fb        19 hours ago        817.5 MB
egonzalez90/centos-binary-keystone                    mitaka              231990ed7b4d        19 hours ago        606.4 MB
egonzalez90/centos-binary-nova-base                   mitaka              a4523a00e9b2        19 hours ago        608.8 MB
egonzalez90/centos-binary-zaqar                       mitaka              43b8675a9bda        19 hours ago        607.4 MB
egonzalez90/centos-binary-heat-base                   mitaka              10662065592f        19 hours ago        572.6 MB
egonzalez90/centos-binary-manila-base                 mitaka              215fc8275580        19 hours ago        588.4 MB
egonzalez90/centos-binary-trove-base                  mitaka              0eda6621a5c3        19 hours ago        566.5 MB
egonzalez90/centos-binary-designate-base              mitaka              dc53110d609c        19 hours ago        570.2 MB
egonzalez90/centos-binary-dind                        mitaka              f2e7bbe028b4        19 hours ago        539.3 MB
egonzalez90/centos-binary-tempest                     mitaka              28cceef2319d        19 hours ago        628 MB
egonzalez90/centos-binary-ironic-base                 mitaka              7b52957bf3a0        19 hours ago        572 MB
egonzalez90/centos-binary-openvswitch-db-server       mitaka              a624dd2d260d        19 hours ago        379 MB
egonzalez90/centos-binary-openvswitch-vswitchd        mitaka              4c36af8e0e44        20 hours ago        379 MB
egonzalez90/centos-binary-ceph-mon                    mitaka              81486c6a7605        20 hours ago        553.3 MB
egonzalez90/centos-binary-kolla-toolbox               mitaka              3fc4535c3d5e        20 hours ago        675.4 MB
egonzalez90/centos-binary-elasticsearch               mitaka              0a81ba71ec7f        20 hours ago        576.4 MB
egonzalez90/centos-binary-keepalived                  mitaka              3559905c7d86        20 hours ago        409.3 MB
egonzalez90/centos-binary-ceph-osd                    mitaka              26dc5c40e160        20 hours ago        553.3 MB
egonzalez90/centos-binary-heka                        mitaka              919dd5a93ca3        20 hours ago        420.6 MB
egonzalez90/centos-binary-rabbitmq                    mitaka              4ab020955a66        20 hours ago        552.7 MB
egonzalez90/centos-binary-mesosphere-base             mitaka              a9f2a4c7cf1c        20 hours ago        381.9 MB
egonzalez90/centos-binary-openstack-base              mitaka              46a527edf49a        20 hours ago        539.3 MB
egonzalez90/centos-binary-ceph-rgw                    mitaka              f57ab1371bd3        20 hours ago        553.3 MB
egonzalez90/centos-binary-openvswitch-base            mitaka              f91c5a909b2c        20 hours ago        379 MB
egonzalez90/centos-binary-mariadb                     mitaka              8fe89c13a637        20 hours ago        678.6 MB
egonzalez90/centos-binary-cron                        mitaka              a239ea240c2e        20 hours ago        366.7 MB
egonzalez90/centos-binary-mongodb                     mitaka              48946c962d7e        20 hours ago        539.2 MB
egonzalez90/centos-binary-ceph-base                   mitaka              02be30a43c6e        20 hours ago        553.3 MB
egonzalez90/centos-binary-haproxy                     mitaka              b8d8ac3e371d        20 hours ago        367.4 MB
egonzalez90/centos-binary-memcached                   mitaka              175026eb6466        20 hours ago        404.1 MB
egonzalez90/centos-binary-kibana                      mitaka              885aeb0b2b97        20 hours ago        490.9 MB
egonzalez90/centos-binary-mesos-dns                   mitaka              95e29f8429e7        21 hours ago        361 MB
egonzalez90/centos-binary-base                        mitaka              b104d01004c6        21 hours ago        349.2 MB

TARGET HOSTS

On the target nodes, newer versions of pip and docker-py are needed; install them.

sudo pip install -U pip
pip install -U docker-py

DEPLOY KOLLA

Kolla ships a tool to create random passwords; issue the following command to run it. Also, you can modify the passwords file in the /etc/kolla/ directory.

kolla-genpwd
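
For example, to peek at one of the generated values (key names are those in /etc/kolla/passwords.yml):

grep database_password /etc/kolla/passwords.yml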

Edit the globals.yml file with the following content, using your own info where necessary.
Change docker_namespace to your Docker account name.

vi /etc/kolla/globals.yml

kolla_base_distro: "centos"
kolla_install_type: "binary"
openstack_release: "mitaka" ## Tag at docker hub
kolla_internal_vip_address: "192.168.1.90"
docker_registry: "docker.io"
docker_namespace: "egonzalez90"
network_interface: "eth2"
neutron_external_interface: "ens9"

Edit the inventory file with your servers' IPs or hostnames.

vi ansible/inventory/multinode

[control]
# These hostname must be resolvable from your deployment host
192.168.1.77
192.168.1.74
192.168.1.78

# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
192.168.1.77
192.168.1.74
192.168.1.78

[compute]
192.168.1.77
192.168.1.74
192.168.1.78

# When compute nodes and control nodes use different interfaces,
# you can specify "api_interface" and another interfaces like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1

[storage]
192.168.1.77
192.168.1.74
192.168.1.78

Create an SSH key to log in to the target servers.

[root@kolla-deployment-node kolla]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
bd:3e:ce:7c:2a:6b:a7:99:ed:04:cf:c2:60:5f:2f:12 root@kolla-deployment-node
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|         .       |
|      o E o      |
|     . + * o     |
|        = * .    |
|        o@o..    |
|       .=BO+     |
+-----------------+

Copy the SSH key you have previously created to all your target nodes.

[root@kolla-deployment-node kolla]# ssh-copy-id root@192.168.1.77
[root@kolla-deployment-node kolla]# ssh-copy-id root@192.168.1.74
[root@kolla-deployment-node kolla]# ssh-copy-id root@192.168.1.78

Ensure all hostnames can be resolved between all the nodes; this step is necessary, otherwise RabbitMQ will fail.
If you are using a DNS server, you can skip this task.
Configure the hosts file.

vi /etc/hosts

192.168.1.77 node1
192.168.1.74 node2
192.168.1.78 node3

Copy hosts file to the other nodes.

scp /etc/hosts root@node2:/etc/hosts
scp /etc/hosts root@node3:/etc/hosts

Execute the prechecks tool to ensure all prerequisites are met.

[root@kolla-deployment-node kolla]# kolla-ansible prechecks -i ansible/inventory/multinode 
Pre-deployment checking : ansible-playbook -i ansible/inventory/multinode -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla  /usr/share/kolla/ansible/prechecks.yml 

PLAY [all] ******************************************************************** 

GATHERING FACTS *************************************************************** 
ok: [192.168.1.77]
ok: [192.168.1.74]
ok: [192.168.1.78]
.......................
PLAY RECAP ******************************************************************** 
192.168.1.74               : ok=63   changed=0    unreachable=0    failed=0   
192.168.1.77               : ok=63   changed=0    unreachable=0    failed=0   
192.168.1.78               : ok=63   changed=0    unreachable=0    failed=0   

Once all prerequisite checks have passed, start the installation of OpenStack with Kolla.
The first run usually takes a long time, because the Docker images need to be pulled onto the target hosts, and even longer if they are pulled from the Docker Hub registry instead of a local one.

[root@kolla-deployment-node kolla]# kolla-ansible deploy -i ansible/inventory/multinode
Deploying Playbooks : ansible-playbook -i ansible/inventory/multinode -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla  -e action=deploy /usr/share/kolla/ansible/site.yml 

PLAY [ceph-mon;ceph-osd;ceph-rgw] ********************************************* 

GATHERING FACTS *************************************************************** 
ok: [192.168.1.77]
ok: [192.168.1.74]
ok: [192.168.1.78]

TASK: [common | Ensuring config directories exist] **************************** 
skipping: [192.168.1.77] => (item=heka)
skipping: [192.168.1.74] => (item=heka)
skipping: [192.168.1.77] => (item=cron)
skipping: [192.168.1.78] => (item=heka)
skipping: [192.168.1.74] => (item=cron)
skipping: [192.168.1.77] => (item=cron/logrotate)
skipping: [192.168.1.74] => (item=cron/logrotate)
skipping: [192.168.1.78] => (item=cron)
skipping: [192.168.1.78] => (item=cron/logrotate)

.......................

PLAY RECAP ******************************************************************** 
192.168.1.74               : ok=301  changed=93   unreachable=0    failed=0   
192.168.1.77               : ok=301  changed=93   unreachable=0    failed=0   
192.168.1.78               : ok=301  changed=93   unreachable=0    failed=0   

Execute this tool to create a credential file.

[root@kolla-deployment-node kolla]# kolla-ansible post-deploy

Post-Deploying Playbooks : ansible-playbook -i /usr/share/kolla/ansible/inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla  /usr/share/kolla/ansible/post-deploy.yml 

PLAY [Creating admin openrc file on the deploy node] ************************** 

GATHERING FACTS *************************************************************** 
ok: [localhost]

TASK: [template ] ************************************************************* 
changed: [localhost]

PLAY RECAP ******************************************************************** 
localhost                  : ok=2    changed=1    unreachable=0    failed=0   

Source credential file.

[root@kolla-deployment-node kolla]# source /etc/kolla/admin-openrc.sh

Kolla ships a tool that creates a base OpenStack configuration layout: networks, routers, images, etc.
Execute it against the newly deployed OpenStack environment.

[root@kolla-deployment-node kolla]# tools/init-runonce
Downloading glance image.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 12.6M  100 12.6M    0     0   873k      0  0:00:14  0:00:14 --:--:-- 1823k
Creating glance image.
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2016-04-15T19:41:20.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 0b5ec320-ace9-4b34-93cb-54fa6f2c70f5 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | a9c2e6c6a55b40619d4f12f05aea03f1     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| updated_at       | 2016-04-15T19:42:35.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
Configuring neutron.
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2016-04-15T19:43:07                  |
| description               |                                      |
| id                        | 12c74cdb-9218-4d8b-ab24-d5bc7f17d8c5 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| is_default                | False                                |
| mtu                       | 1500                                 |
| name                      | public1                              |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | a9c2e6c6a55b40619d4f12f05aea03f1     |
| updated_at                | 2016-04-15T19:43:07                  |
+---------------------------+--------------------------------------+
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.0.2.150", "end": "10.0.2.199"} |
| cidr              | 10.0.2.0/24                                  |
| created_at        | 2016-04-15T19:43:47                          |
| description       |                                              |
| dns_nameservers   |                                              |
| enable_dhcp       | False                                        |
| gateway_ip        | 10.0.2.1                                     |
| host_routes       |                                              |
| id                | 274bee58-68bb-4a96-bae5-41c03022a363         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | 1-subnet                                     |
| network_id        | 12c74cdb-9218-4d8b-ab24-d5bc7f17d8c5         |
| subnetpool_id     |                                              |
| tenant_id         | a9c2e6c6a55b40619d4f12f05aea03f1             |
| updated_at        | 2016-04-15T19:43:47                          |
+-------------------+----------------------------------------------+
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2016-04-15T19:44:42                  |
| description               |                                      |
| id                        | 9bb7cca0-e7ea-4601-8770-7296473bdfff |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1450                                 |
| name                      | demo-net                             |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 94                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | a9c2e6c6a55b40619d4f12f05aea03f1     |
| updated_at                | 2016-04-15T19:44:43                  |
+---------------------------+--------------------------------------+
Created a new subnet:
+-------------------+--------------------------------------------+
| Field             | Value                                      |
+-------------------+--------------------------------------------+
| allocation_pools  | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr              | 10.0.0.0/24                                |
| created_at        | 2016-04-15T19:45:25                        |
| description       |                                            |
| dns_nameservers   | 8.8.8.8                                    |
| enable_dhcp       | True                                       |
| gateway_ip        | 10.0.0.1                                   |
| host_routes       |                                            |
| id                | 28ef0e39-33a4-43ea-b1a6-8ea01d7c3379       |
| ip_version        | 4                                          |
| ipv6_address_mode |                                            |
| ipv6_ra_mode      |                                            |
| name              | demo-subnet                                |
| network_id        | 9bb7cca0-e7ea-4601-8770-7296473bdfff       |
| subnetpool_id     |                                            |
| tenant_id         | a9c2e6c6a55b40619d4f12f05aea03f1           |
| updated_at        | 2016-04-15T19:45:25                        |
+-------------------+--------------------------------------------+
Created a new router:
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | True                                 |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| description             |                                      |
| distributed             | False                                |
| external_gateway_info   |                                      |
| ha                      | False                                |
| id                      | 53a09f8a-576a-4f83-82b0-995a26f83deb |
| name                    | demo-router                          |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tenant_id               | a9c2e6c6a55b40619d4f12f05aea03f1     |
+-------------------------+--------------------------------------+
Added interface ed81ba4c-0e51-4cd9-9810-0a9b883102c2 to router demo-router.
Set gateway for router demo-router
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 4f836611-830d-48e7-a81c-7aa65a2573a4 |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | c9e76d1f-d58c-4621-b402-1295d9e5168d |
| tenant_id         | a9c2e6c6a55b40619d4f12f05aea03f1     |
+-------------------+--------------------------------------+
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 8cb6c081-0388-4d94-98f8-58190c574133 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | c9e76d1f-d58c-4621-b402-1295d9e5168d |
| tenant_id         | a9c2e6c6a55b40619d4f12f05aea03f1     |
+-------------------+--------------------------------------+
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 76142824-3cb2-43a5-bbd7-635aedd05666 |
| port_range_max    | 8000                                 |
| port_range_min    | 8000                                 |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | c9e76d1f-d58c-4621-b402-1295d9e5168d |
| tenant_id         | a9c2e6c6a55b40619d4f12f05aea03f1     |
+-------------------+--------------------------------------+
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | ce77b36f-a9ed-4c10-ba1f-2697ad1c8138 |
| port_range_max    | 8080                                 |
| port_range_min    | 8080                                 |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | c9e76d1f-d58c-4621-b402-1295d9e5168d |
| tenant_id         | a9c2e6c6a55b40619d4f12f05aea03f1     |
+-------------------+--------------------------------------+
Configuring nova public key and quotas.

Check Nova service status.

[egonzalez@localhost kolla]$ nova service-list
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| 40 | nova-consoleauth | node3 | internal | enabled | up    | 2016-04-15T20:15:44.000000 | -               |
| 43 | nova-consoleauth | node1 | internal | enabled | up    | 2016-04-15T20:15:46.000000 | -               |
| 46 | nova-consoleauth | node2 | internal | enabled | up    | 2016-04-15T20:15:48.000000 | -               |
| 49 | nova-scheduler   | node3 | internal | enabled | up    | 2016-04-15T20:15:50.000000 | -               |
| 52 | nova-scheduler   | node2 | internal | enabled | up    | 2016-04-15T20:15:42.000000 | -               |
| 55 | nova-scheduler   | node1 | internal | enabled | up    | 2016-04-15T20:15:43.000000 | -               |
| 58 | nova-conductor   | node1 | internal | enabled | up    | 2016-04-15T20:15:36.000000 | -               |
| 64 | nova-conductor   | node2 | internal | enabled | up    | 2016-04-15T20:15:37.000000 | -               |
| 70 | nova-conductor   | node3 | internal | enabled | up    | 2016-04-15T20:15:35.000000 | -               |
| 79 | nova-compute     | node3 | nova     | enabled | up    | 2016-04-15T20:15:43.000000 | -               |
| 85 | nova-compute     | node2 | nova     | enabled | up    | 2016-04-15T20:15:50.000000 | -               |
| 88 | nova-compute     | node1 | nova     | enabled | up    | 2016-04-15T20:15:51.000000 | -               |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+

Check Neutron agents status.

[egonzalez@localhost kolla]$ neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+
| id                                   | agent_type         | host  | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+
| 08d12ccd-74cd-4e8e-9cda-3d3d2e191191 | Metadata agent     | node3 | :-)   | True           | neutron-metadata-agent    |
| 0916aa0e-6d07-4398-99a5-e0e9123cef37 | DHCP agent         | node1 | :-)   | True           | neutron-dhcp-agent        |
| 14707eaf-2d37-4eaf-964a-82b63d1bdc96 | Open vSwitch agent | node3 | :-)   | True           | neutron-openvswitch-agent |
| 265a0acc-e31a-4098-842a-b139e8095056 | L3 agent           | node2 | :-)   | True           | neutron-l3-agent          |
| 50869311-b3bb-4fb3-9676-d1f56d77deb0 | Metadata agent     | node2 | :-)   | True           | neutron-metadata-agent    |
| 5c48b20a-1b57-4e3b-865a-f0f298ea0af8 | DHCP agent         | node2 | :-)   | True           | neutron-dhcp-agent        |
| 89470cc7-6430-45a2-8ee2-852e0ba85cff | Open vSwitch agent | node2 | :-)   | True           | neutron-openvswitch-agent |
| ba689300-c49a-46a7-8c85-e7a6daa5f2cb | DHCP agent         | node3 | :-)   | True           | neutron-dhcp-agent        |
| baadfe87-db69-491b-b7ad-7f16c1468632 | Metadata agent     | node1 | :-)   | True           | neutron-metadata-agent    |
| bc823fff-11a3-4f81-90d5-8f9e4a7a617a | L3 agent           | node3 | :-)   | True           | neutron-l3-agent          |
| d26c860d-e5e3-4da0-b0af-f8ad3a69e9f6 | L3 agent           | node1 | :-)   | True           | neutron-l3-agent          |
| e90277e7-3e46-42d0-a2fd-dce412f503dd | Open vSwitch agent | node1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+

Create a new instance and see what happens.

[egonzalez@localhost kolla]$ openstack server create --image cirros --flavor m1.tiny --nic net-id=demo-net demo-instance
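
Depending on the client version, --nic net-id= may expect the network's UUID rather than its name; if the command complains, you can look the UUID up first:

[egonzalez@localhost kolla]$ neutron net-list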

Check the status of the instance.

[egonzalez@localhost kolla]$ openstack server list
+--------------------------------------+---------------+--------+-------------------+
| ID                                   | Name          | Status | Networks          |
+--------------------------------------+---------------+--------+-------------------+
| b234e514-2975-47fd-a618-8ef6aa9ff2bc | demo-instance | ACTIVE | demo-net=10.0.0.3 |
+--------------------------------------+---------------+--------+-------------------+
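
You can also check the instance's console log to confirm cirros booted correctly:

[egonzalez@localhost kolla]$ nova console-log demo-instance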

That's all for now. In future posts we will look at how Kolla works in more detail.

Cheers, Eduardo Gonzalez

Magnum in RDO OpenStack – Liberty Manual Installation from source code

Want to install Magnum (Containers as a Service) in an OpenStack environment based on packages from the RDO project?
Here are the steps to do it:

The primary steps are the same as in the official Magnum guide; the major differences come from using RDO project packages instead of DevStack or a fully manual installation.
Some of the steps are also explained in extra detail to show how Magnum works, so this guide can help you understand how Magnum integrates with your current environment.
I'm not going to use the Barbican service for certificate management, so you will also see how to use Magnum without Barbican.

  • For now, there are no RDO packages for Magnum, so we are going to install it from source code.
  • As far as I know, Magnum packages are currently under development and will be added to the RDO project in a future OpenStack version (probably Mitaka or Newton).

Passwords used at this demo are:

  • temporal (Databases and OpenStack users)
  • guest (RabbitMQ)

IPs used are:

  • 192.168.200.208 (Service APIs)
  • 192.168.100.0/24 (External network range)
  • 10.0.0.0/24 (Tenant network range)
  • 8.8.8.8 (Google DNS server)

First, we need to install some dependencies and packages required for the next steps.

sudo yum install -y gcc python-setuptools python-devel git libffi-devel openssl-devel wget

Install pip

easy_install pip

Clone the Magnum source code from the OpenStack git repository. Make sure you use the Liberty branch; if not, Magnum's dependencies will break all the other OpenStack services' dependencies and ruin your current environment (trust me, I'm speaking from my own experience).

git clone https://git.openstack.org/openstack/magnum -b stable/liberty

Move to the newly created folder and install Magnum (its dependency requirements and Magnum itself)

cd magnum
sudo pip install -e .
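
As a quick sanity check that the package registered correctly, ask pip about it:

pip show magnum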

Once Magnum is installed, create the Magnum database and database user

mysql -uroot -p
CREATE DATABASE IF NOT EXISTS magnum DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON magnum.* TO 'magnum'@'localhost' IDENTIFIED BY 'temporal';
GRANT ALL PRIVILEGES ON magnum.* TO 'magnum'@'%' IDENTIFIED BY 'temporal';
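
You can verify the grants by logging in as the new user; a minimal check using the credentials above:

mysql -umagnum -ptemporal -e 'SHOW DATABASES;'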

Create the Magnum configuration folder and copy the sample configuration files.

sudo mkdir -p /etc/magnum
sudo cp etc/magnum/magnum.conf.sample /etc/magnum/magnum.conf
sudo cp etc/magnum/policy.json /etc/magnum/policy.json

Edit the main Magnum configuration file

vi /etc/magnum/magnum.conf

Configure messaging backend to RabbitMQ

[DEFAULT]

rpc_backend = rabbit
notification_driver = messaging

Bind the Magnum API to listen on all interfaces. You can also specify which IP the Magnum API will listen on if you are concerned about security risks.

[api]

host = 0.0.0.0

Configure RabbitMQ backend

[oslo_messaging_rabbit]

rabbit_host = 192.168.200.208
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /

Set database connection

[database]

connection=mysql://magnum:temporal@192.168.200.208/magnum
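
Note that the mysql:// prefix relies on the MySQL-python driver; if you later hit an "ImportError: No module named MySQLdb" (see the issues section at the end of this post), install the driver from pip:

pip install MySQL-python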

Set cert_manager_type to local; this option avoids the use of the Barbican service. You will need to create a local certificates folder (we will do it in a later step)

[certificates]

cert_manager_type = local

As with all OpenStack services, Keystone authentication is required.

  • Check what your service tenant name is (the RDO default is “services”; other installations usually use “service”). You can list the projects as shown after the configuration block below.
[keystone_authtoken]

auth_uri=http://192.168.200.208:5000/v2.0
identity_uri=http://192.168.200.208:35357
auth_strategy=keystone
admin_user=magnum
admin_password=temporal
admin_tenant_name=services
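
If you are unsure of the tenant name, list the existing projects with your admin credentials loaded:

openstack project list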

As we saw before, create the local certificates folder to avoid using the Barbican service; this is the step we mentioned earlier

mkdir -p /var/lib/magnum/certificates/

Clone python-magnumclient and install it; this package provides the command-line client for Magnum

git clone https://git.openstack.org/openstack/python-magnumclient -b stable/liberty
cd python-magnumclient
sudo pip install -e .

Create the Magnum user in Keystone

openstack user create --password temporal magnum

Add the admin role to the Magnum user in the services tenant

openstack role add --project services --user magnum admin

Create container service

openstack service create --name magnum --description "Magnum Container Service" container

Finally, create the Magnum endpoints

openstack endpoint create --region RegionOne --publicurl 'http://192.168.200.208:9511/v1' --adminurl 'http://192.168.200.208:9511/v1' --internalurl 'http://192.168.200.208:9511/v1' magnum
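
You can verify the endpoint registration afterwards:

openstack endpoint list | grep magnum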

Sync the Magnum database; this step creates the Magnum tables

magnum-db-manage --config-file /etc/magnum/magnum.conf upgrade
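
Optionally, confirm the tables were created:

mysql -umagnum -ptemporal magnum -e 'SHOW TABLES;'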

Open two terminal sessions and execute one of these commands in each terminal to start both services. If you encounter any issues, the logs can be found in these terminals

magnum-api --config-file /etc/magnum/magnum.conf
magnum-conductor --config-file /etc/magnum/magnum.conf
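
Running the services in foreground terminals is fine for a demo. For something longer-lived you could wrap them in systemd units; here is a minimal hand-written sketch (not an official RDO unit file; adjust the ExecStart path with `which magnum-api` and create a similar unit for magnum-conductor):

# /etc/systemd/system/magnum-api.service
[Unit]
Description=OpenStack Magnum API
After=network.target

[Service]
# Runs the API in the foreground, exactly as in the manual step above
ExecStart=/usr/bin/magnum-api --config-file /etc/magnum/magnum.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target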

Check that the Magnum service is up

magnum service-list
+----+------------+------------------+-------+
| id | host       | binary           | state |
+----+------------+------------------+-------+
| 1  | controller | magnum-conductor | up    |
+----+------------+------------------+-------+

Download the Fedora Atomic image

wget https://fedorapeople.org/groups/magnum/fedora-21-atomic-5.qcow2
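
Glance computes an md5 checksum on upload, so you can optionally verify the download by comparing the local file's md5sum against the checksum field in the output below:

md5sum fedora-21-atomic-5.qcow2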

Create a Glance image from the Atomic qcow2 file

glance image-create --name fedora-21-atomic-5 \
                    --visibility public \
                    --disk-format qcow2 \
                    --os-distro fedora-atomic \
                    --container-format bare < fedora-21-atomic-5.qcow2
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | cebefc0c21fb8567e662bf9f2d5b78b0     |
| container_format | bare                                 |
| created_at       | 2016-03-19T15:55:21Z                 |
| disk_format      | qcow2                                |
| id               | 7293891d-cfba-48a9-a4db-72c29c65f681 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | fedora-21-atomic-5                   |
| os_distro        | fedora-atomic                        |
| owner            | e3cca42ed57745148e0c342a000d99e9     |
| protected        | False                                |
| size             | 891355136                            |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-03-19T15:55:28Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+

Create an SSH key if one does not exist; this command won't create a new key if one is already present

test -f ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

Add the key to Nova; mine is called egonzalez

nova keypair-add --pub-key ~/.ssh/id_rsa.pub egonzalez

Now we are going to test our new Magnum service; there are various methods to do it.
I will use the Docker Swarm method because it is the simplest one for this demo's purposes. Go through the Magnum documentation to check other container orchestration engines, such as Kubernetes.

Create a baymodel with the Atomic image and Swarm. Select a flavor with at least 10 GB of disk; if you don't have one, create it first as shown below.
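
A hypothetical flavor matching this demo's --flavor-id (1024 MB RAM, 10 GB disk, 1 vCPU) could be created with:

nova flavor-create testflavor auto 1024 10 1

Now create the baymodel: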

magnum baymodel-create --name demoswarmbaymodel \
                       --image-id fedora-21-atomic-5 \
                       --keypair-id egonzalez \
                       --external-network-id public \
                       --dns-nameserver 8.8.8.8 \
                       --flavor-id testflavor \
                       --docker-volume-size 1 \
                       --coe swarm
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| http_proxy          | None                                 |
| updated_at          | None                                 |
| master_flavor_id    | None                                 |
| fixed_network       | None                                 |
| uuid                | 887edbc7-0805-4796-be78-dfcddad8eb03 |
| no_proxy            | None                                 |
| https_proxy         | None                                 |
| tls_disabled        | False                                |
| keypair_id          | egonzalez                            |
| public              | False                                |
| labels              | {}                                   |
| docker_volume_size  | 1                                    |
| external_network_id | public                               |
| cluster_distro      | fedora-atomic                        |
| image_id            | fedora-21-atomic-5                   |
| registry_enabled    | False                                |
| apiserver_port      | None                                 |
| name                | demoswarmbaymodel                    |
| created_at          | 2016-03-19T17:22:43+00:00            |
| network_driver      | None                                 |
| ssh_authorized_key  | None                                 |
| coe                 | swarm                                |
| flavor_id           | testflavor                           |
| dns_nameserver      | 8.8.8.8                              |
+---------------------+--------------------------------------+

Create a bay from the previous baymodel; we are going to create one master node and one worker. Adjust the values to whatever applies to your environment

magnum bay-create --name demoswarmbay --baymodel demoswarmbaymodel --master-count 1 --node-count 1
+--------------------+--------------------------------------+
| Property           | Value                                |
+--------------------+--------------------------------------+
| status             | None                                 |
| uuid               | a2388916-db30-41bf-84eb-df0b65979eaf |
| status_reason      | None                                 |
| created_at         | 2016-03-19T17:23:00+00:00            |
| updated_at         | None                                 |
| bay_create_timeout | 0                                    |
| api_address        | None                                 |
| baymodel_id        | 887edbc7-0805-4796-be78-dfcddad8eb03 |
| node_count         | 1                                    |
| node_addresses     | None                                 |
| master_count       | 1                                    |
| discovery_url      | None                                 |
| name               | demoswarmbay                         |
+--------------------+--------------------------------------+

Check the bay status; for now it should be in the CREATE_IN_PROGRESS state

magnum bay-show demoswarmbay
+--------------------+--------------------------------------+
| Property           | Value                                |
+--------------------+--------------------------------------+
| status             | CREATE_IN_PROGRESS                   |
| uuid               | a2388916-db30-41bf-84eb-df0b65979eaf |
| status_reason      |                                      |
| created_at         | 2016-03-19T17:23:00+00:00            |
| updated_at         | 2016-03-19T17:23:01+00:00            |
| bay_create_timeout | 0                                    |
| api_address        | None                                 |
| baymodel_id        | 887edbc7-0805-4796-be78-dfcddad8eb03 |
| node_count         | 1                                    |
| node_addresses     | []                                   |
| master_count       | 1                                    |
| discovery_url      | None                                 |
| name               | demoswarmbay                         |
+--------------------+--------------------------------------+

If all is going fine, Nova should have two new instances (in ACTIVE state): one for the master node and a second for the worker.

nova list
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-------------------------------------------------------------------------------+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks                                                                      |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-------------------------------------------------------------------------------+
| e38eb88c-bb6b-427d-a2c5-cdfe868796f0 | de-44kx2l4q4wc-0-d6j5svvjxmne-swarm_node-xafkm2jskf5j | ACTIVE | -          | Running     | demoswarmbay-agf6y3qnjoyw-fixed_network-g37bcmc52akv=10.0.0.4, 192.168.100.16 |
| 5acc579d-152a-4656-9eb8-e800b7ab3bcf | demoswarmbay-agf6y3qnjoyw-swarm_master-fllwhrpuabbq   | ACTIVE | -          | Running     | demoswarmbay-agf6y3qnjoyw-fixed_network-g37bcmc52akv=10.0.0.3, 192.168.100.15 |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-------------------------------------------------------------------------------+

You can see how the Heat stack is progressing

heat stack-list
+--------------------------------------+---------------------------+--------------------+---------------------+--------------+
| id                                   | stack_name                | stack_status       | creation_time       | updated_time |
+--------------------------------------+---------------------------+--------------------+---------------------+--------------+
| 3a64fa60-4df8-498f-aceb-a0cb8cfc0b18 | demoswarmbay-agf6y3qnjoyw | CREATE_IN_PROGRESS | 2016-03-19T17:22:59 | None         |
+--------------------------------------+---------------------------+--------------------+---------------------+--------------+

We can also see which tasks execute during stack creation

heat event-list demoswarmbay-agf6y3qnjoyw
+-------------------------------------+--------------------------------------+------------------------+--------------------+---------------------+
| resource_name                       | id                                   | resource_status_reason | resource_status    | event_time          |
+-------------------------------------+--------------------------------------+------------------------+--------------------+---------------------+
| demoswarmbay-agf6y3qnjoyw           | 004c9388-b8ab-4541-ada8-99b65203e41d | Stack CREATE started   | CREATE_IN_PROGRESS | 2016-03-19T17:23:01 |
| master_wait_handle                  | d6f0798a-bfde-4bad-9c73-e108bd101009 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:01 |
| secgroup_manager                    | e2e0eb08-aeeb-4290-9ad5-bd20fe243f07 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:02 |
| disable_selinux                     | d7290592-ab81-4d7a-b2fa-902975904a25 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:03 |
| agent_wait_handle                   | 65ec5553-56a4-4416-9748-bfa0ae35737a | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:03 |
| add_proxy                           | 46bdcff8-4606-406f-8c99-7f48adc4de57 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:03 |
| write_docker_socket                 | ab5402ea-44af-4433-84aa-a63256817a9a | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:04 |
| make_cert                           | 3b9817a5-606f-41ab-8799-b411c017f05d | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:04 |
| cfn_signal                          | 0add665a-3fdf-4408-ab15-76332aa326fe | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:04 |
| remove_docker_key                   | 94f4106e-f139-4d9f-9974-8821d04be103 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:05 |
| configure_swarm                     | f7e0ebd5-1893-43d1-bd29-81a7e39de0c0 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:05 |
| extrouter                           | a94a8f68-c237-4dbc-9513-cdbe3de1465e | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:05 |
| enable_services                     | c250f532-99bd-43d7-9d15-b2d3ae16567a | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:06 |
| write_docker_service                | 2c9d8954-4446-4578-a871-0910e8996571 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:06 |
| cloud_init_wait_handle              | 6cc51d2d-56e9-458b-a21b-bc553e0c8291 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:06 |
| fixed_network                       | 3125395f-c689-4481-bf01-94bb2f701993 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:07 |
| agent_wait_handle                   | 2db801e8-c2b5-47b0-ac16-122dba3a22d6 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| remove_docker_key                   | 75e2c7a6-a2ce-4026-aeeb-739c4a522f48 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| secgroup_manager                    | ac51a029-26c1-495a-bc13-232cfb8c1060 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| write_docker_socket                 | 58e08b52-a12a-43e9-b41d-071750294024 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| master_wait_handle                  | 3e741b76-6470-47d4-b13e-3f8f446be53c | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| cfn_signal                          | 96c26b4f-1e99-478e-a8e5-9dcc4486e1b3 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| enable_services                     | beedc358-ee72-4b34-a6b9-1b47ffc15306 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| add_proxy                           | caae3a07-d5f1-4eb0-8a82-02ea634f77ae | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| make_cert                           | 79363643-e5e4-4d1b-ad8a-5a56e1f6a8e7 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:08 |
| cloud_init_wait_handle              | 0457b008-6da8-44fd-abef-cb99bd4d0518 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:09 |
| configure_swarm                     | baf1e089-c627-4b24-a571-63b3c9c14e28 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:09 |
| extrouter                           | 184614d9-2280-4cb4-9253-f538463dbdf4 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:09 |
| write_docker_service                | 80e66b4e-d40a-4243-bb27-0d2a6b68651f | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:09 |
| disable_selinux                     | d8a64822-2571-4dcf-9da5-b3ec73e771eb | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:09 |
| fixed_network                       | 528b0ced-23f6-4c22-8cbc-357ba0ee5bc5 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:09 |
| write_swarm_manager_failure_service | 9fa100a3-b4a9-465c-8b33-dd000cb4866a | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:10 |
| write_swarm_agent_failure_service   | a7c09833-929e-4711-a3e9-39923d23b2f2 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:10 |
| fixed_subnet                        | 23d8b0a6-a7a3-4f71-9c18-ba6255cf071a | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:10 |
| write_swarm_master_service          | d24a6099-3cad-41ce-8d4b-a7ad0661aaea | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:11 |
| fixed_subnet                        | 1a2b7397-1d09-4544-bb9f-985c2f64cb09 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:13 |
| write_swarm_manager_failure_service | 615a2a7a-5266-487b-bbe1-fcaa82f43243 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:13 |
| write_swarm_agent_failure_service   | 3f8c54b4-6644-49a0-ad98-9bc6b4332a07 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:13 |
| write_swarm_master_service          | 2f58b3c8-d1cc-4590-a328-0e775e495bcf | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:13 |
| extrouter_inside                    | f3da7f2f-643e-4f29-a00f-d2595d7faeaf | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:14 |
| swarm_master_eth0                   | 1d6a510d-520c-4796-8990-aa8f7dd59757 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:16 |
| swarm_master_eth0                   | 3fd85913-7399-49be-bb46-5085ff953611 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:19 |
| extrouter_inside                    | 33749e30-cbea-4093-b36a-94967e299002 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:19 |
| write_heat_params                   | 054e0af5-e3e0-4bc0-92b5-b40aeedc39ab | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:19 |
| swarm_nodes                         | df7af58c-8148-4b51-bd65-b0734d9051b5 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:20 |
| write_swarm_agent_service           | ab1e8b1e-2837-4693-b791-e1311f85fa63 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:21 |
| swarm_master_floating               | d99ffe66-cb02-4279-99dc-a1f3e2ca817c | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:22 |
| write_heat_params                   | 33d9999f-6c93-453d-8565-ac99db021f8f | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:25 |
| write_swarm_agent_service           | 02a1b7f6-2660-4345-ad08-42b66ffaaad5 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:25 |
| swarm_master_floating               | 8ce6ecd8-c421-4e4a-ab81-cba4b5ccedf4 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:25 |
| swarm_master_init                   | 3787dcc8-e644-412b-859b-63a434b9ee6c | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:26 |
| swarm_master_init                   | a1dd67bb-49c7-4507-8af0-7758b76b57e1 | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:28 |
| swarm_master                        | d12b915e-3087-4e17-9954-8233926b504b | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:29 |
| swarm_master                        | a34ad52a-def7-460b-b5b7-410000207b3e | state changed          | CREATE_COMPLETE    | 2016-03-19T17:23:48 |
| master_wait_condition               | 0c9331a4-8ad0-46e0-bf2a-35943021a1a3 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:49 |
| cloud_init_wait_condition           | de3707a0-f46a-44a9-b4b8-ff50e12cc77f | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:49 |
| agent_wait_condition                | a1a810a4-9c19-4983-aaa8-e03f308c1e39 | state changed          | CREATE_IN_PROGRESS | 2016-03-19T17:23:49 |
+-------------------------------------+--------------------------------------+------------------------+--------------------+---------------------+

Once all tasks are completed, we can create containers in the bay we created in previous steps.

magnum container-create --name demo-container \
                        --image docker.io/cirros:latest \
                        --bay demoswarmbay \
                        --command "ping -c 4 192.168.100.2"
+------------+----------------------------------------+
| Property   | Value                                  |
+------------+----------------------------------------+
| uuid       | 36595858-8657-d465-3e5a-dfcddad8a238   |
| links      | ...                                    |
| bay_uuid   | a2388916-db30-41bf-84eb-df0b65979eaf   |
| updated_at | None                                   |
| image      | cirros                                 |
| command    | ping -c 4 192.168.100.2                |
| created_at | 2016-03-19T17:30:00+00:00              |
| name       | demo-container                         |
+------------+----------------------------------------+

The container is created, but not started.
Start the container

magnum container-start demo-container

Check the container logs; you should see four successful pings to our external router gateway.

magnum container-logs demo-container

PING 192.168.100.2 (192.168.100.2) 56(84) bytes of data.
64 bytes from 192.168.100.2: icmp_seq=1 ttl=64 time=0.083 ms
64 bytes from 192.168.100.2: icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from 192.168.100.2: icmp_seq=3 ttl=64 time=0.043 ms
64 bytes from 192.168.100.2: icmp_seq=4 ttl=64 time=0.099 ms

You can now delete the container

magnum container-delete demo-container

While doing this demo, I missed adding the branch name while cloning the Magnum source code; when I installed Magnum, all package dependencies were installed from master, which was Mitaka instead of Liberty, and that broke my environment.

I suffered the following issues:

Issues with packages

ImportError: No module named MySQLdb

This was solved by installing MySQL-python from pip instead of yum

pip install MySQL-python

Issues with policies: admin privileges weren't recognized by the Magnum API.

PolicyNotAuthorized: magnum-service:get_all{{ bunch of stuff }} disallowed by policy

This was solved by commenting out the admin_api rule in Magnum's policy.json file

vi /etc/magnum/policy.json

#    "admin_api": "rule:context_is_admin",

Unfortunately, Nova was completely broken and not working at all, so I installed a new environment and added the branch name while cloning the source code.
The next issue I found was with Barbican, which was not installed; I used the steps mentioned in this post to solve that issue.

I hope this guide helps you integrate Magnum Containers as a Service into OpenStack.

Regards, Eduardo Gonzalez
