Deploy OpenStack Designate with Kolla

During the Ocata release cycle, OpenStack DNS-as-a-Service (Designate) support was implemented in the OpenStack Kolla project.

This post will guide you through a basic deployment and testing of the Designate service.

Install the required dependencies and tools for kolla-ansible and Designate.

# yum install -y epel-release
# yum install -y python-pip python-devel libffi-devel gcc openssl-devel ansible ntp wget bind-utils
# pip install -U pip

Install Docker and downgrade to 1.12.6. At the time of writing, libvirt had issues connecting to D-Bus due to SELinux issues with Docker 1.13.

# curl -sSL https://get.docker.io | bash
# yum downgrade docker-engine-1.12.6 docker-engine-selinux-1.12.6
# yum install -y python-docker-py

Configure the Docker daemon to allow an insecure registry (use the IP address where your remote registry will be located).

# mkdir -p /etc/systemd/system/docker.service.d
# tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --insecure-registry 172.28.128.3:4000
MountFlags=shared
EOF

Reload systemd daemons and start/stop/disable/enable the following services.

# systemctl daemon-reload
# systemctl stop libvirtd
# systemctl disable libvirtd
# systemctl enable ntpd docker
# systemctl start ntpd docker

Download the Ocata registry tarball from tarballs.openstack.org; skip this step if the images used are custom builds or downloaded from Docker Hub.
Create the Kolla registry from the downloaded tarball.

# wget https://tarballs.openstack.org/kolla/images/centos-binary-registry-ocata.tar.gz
# mkdir /opt/kolla_registry
# sudo tar xzf centos-binary-registry-ocata.tar.gz -C /opt/kolla_registry
# docker run -d -p 4000:5000 --restart=always -v /opt/kolla_registry/:/var/lib/registry --name registry registry:2
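
If you want to confirm the registry is up and serving the Kolla images, you can query the Docker registry v2 catalog API (the IP and port below match the registry started above; adjust them to your environment).

# curl -s http://172.28.128.3:4000/v2/_catalog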

Install kolla-ansible.

# pip install kolla-ansible
# cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla/
# cp /usr/share/kolla-ansible/ansible/inventory/* .

Configure the kolla-ansible globals.yml configuration file with the following content.
Change values where necessary (IP addresses, interface names).
This is a minimal sample configuration.

# vi /etc/kolla/globals.yml
---
kolla_internal_vip_address: "172.28.128.10"
kolla_base_distro: "centos"
kolla_install_type: "binary"
docker_registry: "172.28.128.3:4000"
docker_namespace: "lokolla"
network_interface: "enp0s8"
neutron_external_interface: "enp0s9"

Configure the Designate options in globals.yml.
The dns_interface must be reachable over the network from Nova instances if internal DNS resolution is needed.

enable_designate: "yes"
dns_interface: "enp0s8"
designate_backend: "bind9"
designate_ns_record: "sample.openstack.org"

Configure the inventory, adding the nodes to their respective groups.

# vi ~/multinode
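
As a reference, this is a minimal sketch of how the top-level groups of the multinode inventory might look; the group names come from the sample inventory shipped with kolla-ansible and the hostnames are placeholders for your own nodes.

[control]
controller01

[network]
controller01

[compute]
compute01

[monitoring]
controller01

[storage]
controller01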

Generate passwords.

# kolla-genpwd

Ensure the environment is ready to deploy with prechecks.
Do not start the deployment until the prechecks succeed.
Fix whatever is necessary.

# kolla-ansible prechecks -i ~/multinode

Pull the Docker images onto the servers. This step can be skipped because it is also done during the deploy step, but doing it first ensures all the nodes have the images you need and minimizes the deployment time.

# kolla-ansible pull -i ~/multinode

Deploy with kolla-ansible and do a woot for Kolla 😉

# kolla-ansible deploy -i ~/multinode

Create credentials file and source it.

# kolla-ansible post-deploy -i ~/multinode
# source /etc/kolla/admin-openrc.sh

Check that all containers are running and none of them are restarting or exiting.

# docker ps -a --filter status=exited --filter status=restarting
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Install the required Python clients.

# pip install python-openstackclient python-designateclient python-neutronclient
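
With the credentials sourced and the clients installed, a quick sanity check is to request a token; if this succeeds, the admin credentials are working.

# openstack token issue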

Execute a base OpenStack configuration (public and internal networks, CirrOS image).
Do not execute this script if custom networks are going to be used.

# sh /usr/share/kolla-ansible/init-runonce

Create a sample designate zone.

# openstack zone create --email admin@sample.openstack.org sample.openstack.org.
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| action         | CREATE                               |
| attributes     |                                      |
| created_at     | 2017-02-22T13:14:39.000000           |
| description    | None                                 |
| email          | admin@sample.openstack.org           |
| id             | 4a44b0c9-bd07-4f5c-8908-523f453f269d |
| masters        |                                      |
| name           | sample.openstack.org.                |
| pool_id        | 85d18aec-453e-45ae-9eb3-748841a1da12 |
| project_id     | 937d49af6cfe4ef080a79f9a833d7c7d     |
| serial         | 1487769279                           |
| status         | PENDING                              |
| transferred_at | None                                 |
| ttl            | 3600                                 |
| type           | PRIMARY                              |
| updated_at     | None                                 |
| version        | 1                                    |
+----------------+--------------------------------------+
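
The zone is created in PENDING status and should transition to ACTIVE once the bind9 backend has been updated. You can re-check it with the zone list command.

# openstack zone list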

Configure the Designate sink to make use of the previously created zone; the sink needs the zone_id to automatically create Neutron and Nova records in Designate.

# mkdir -p /etc/kolla/config/designate/designate-sink/
# vi /etc/kolla/config/designate/designate-sink.conf
[handler:nova_fixed]
zone_id = 4a44b0c9-bd07-4f5c-8908-523f453f269d
[handler:neutron_floatingip]
zone_id = 4a44b0c9-bd07-4f5c-8908-523f453f269d

After configuring designate-sink.conf, reconfigure Designate to make use of this configuration.

# kolla-ansible reconfigure -i ~/multinode --tags designate

List networks.

# neutron net-list
+--------------------------------------+----------+----------------------------------+--------------------------------------------------+
| id                                   | name     | tenant_id                        | subnets                                          |
+--------------------------------------+----------+----------------------------------+--------------------------------------------------+
| 3b56c605-5a01-45be-9ed6-e4c3285e4366 | demo-net | 937d49af6cfe4ef080a79f9a833d7c7d | 7f28f050-77b2-426e-b963-35b682077993 10.0.0.0/24 |
| 6954d495-fb8c-4b0b-98a9-9672a7f65b7c | public1  | 937d49af6cfe4ef080a79f9a833d7c7d | 9bd9feca-40a7-4e82-b912-e51b726ad746 10.0.2.0/24 |
+--------------------------------------+----------+----------------------------------+--------------------------------------------------+

Update the network with a dns_domain.

# neutron net-update 3b56c605-5a01-45be-9ed6-e4c3285e4366 --dns_domain sample.openstack.org.
Updated network: 3b56c605-5a01-45be-9ed6-e4c3285e4366

Ensure dns_domain is properly applied.

# neutron net-show 3b56c605-5a01-45be-9ed6-e4c3285e4366
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2017-02-22T13:13:06Z                 |
| description               |                                      |
| dns_domain                | sample.openstack.org.                |
| id                        | 3b56c605-5a01-45be-9ed6-e4c3285e4366 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1450                                 |
| name                      | demo-net                             |
| port_security_enabled     | True                                 |
| project_id                | 937d49af6cfe4ef080a79f9a833d7c7d     |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 27                                   |
| revision_number           | 6                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 7f28f050-77b2-426e-b963-35b682077993 |
| tags                      |                                      |
| tenant_id                 | 937d49af6cfe4ef080a79f9a833d7c7d     |
| updated_at                | 2017-02-22T13:25:16Z                 |
+---------------------------+--------------------------------------+

Create a new instance in the previously updated network.

# openstack server create \
    --image cirros \
    --flavor m1.tiny \
    --key-name mykey \
    --nic net-id=3b56c605-5a01-45be-9ed6-e4c3285e4366 \
    demo1

Once the instance is ACTIVE, check the IP associated.

# openstack server list
+--------------------------------------+-------+--------+-------------------+------------+
| ID                                   | Name  | Status | Networks          | Image Name |
+--------------------------------------+-------+--------+-------------------+------------+
| d483e4ee-58c2-4e1e-9384-85174630428e | demo1 | ACTIVE | demo-net=10.0.0.3 | cirros     |
+--------------------------------------+-------+--------+-------------------+------------+

List the records in the Designate zone.
As you can see, there is a record in Designate associated with the instance IP.

# openstack recordset list sample.openstack.org.
+--------------------------------------+----------------------------------+------+-------------------------------------------+--------+--------+
| id                                   | name                             | type | records                                   | status | action |
+--------------------------------------+----------------------------------+------+-------------------------------------------+--------+--------+
| 4f70531e-c325-4ffd-a8d3-8172bd5163b8 | sample.openstack.org.            | SOA  | sample.openstack.org.                     | ACTIVE | NONE   |
|                                      |                                  |      | admin.sample.openstack.org. 1487770304    |        |        |
|                                      |                                  |      | 3586 600 86400 3600                       |        |        |
| a9a09c5f-ccf1-4b52-8400-f36e8faa9549 | sample.openstack.org.            | NS   | sample.openstack.org.                     | ACTIVE | NONE   |
| aa6cd25d-186e-425b-9153-699d8b0811de | 10-0-0-3.sample.openstack.org.   | A    | 10.0.0.3                                  | ACTIVE | NONE   |
| 713650a5-a45e-470b-9539-74e110b15115 | demo1.None.sample.openstack.org. | A    | 10.0.0.3                                  | ACTIVE | NONE   |
| 6506e6f6-f535-45eb-9bfb-4ac1f16c5c9b | demo1.sample.openstack.org.      | A    | 10.0.0.3                                  | ACTIVE | NONE   |
+--------------------------------------+----------------------------------+------+-------------------------------------------+--------+--------+

Validate that Designate resolves the DNS record.
You can query the Designate mDNS service or the bind9 servers directly to validate the test.

# dig +short -p 5354 @172.28.128.3 demo1.sample.openstack.org. A
10.0.0.3
# dig +short -p 53 @172.28.128.3 demo1.sample.openstack.org. A
10.0.0.3

If you find any issue with Designate in kolla-ansible or Kolla, please file a bug at https://bugs.launchpad.net/kolla-ansible/+filebug

Regards,
Eduardo Gonzalez

OpenStack Keystone Zero-Downtime upgrade process (N to O)

This blog post will show the Keystone upgrade procedure from the OpenStack Newton to Ocata release with zero downtime.

If doing this in production, please read the release notes, ensure a proper configuration, make database backups, and test the upgrade a thousand times.

A Keystone upgrade normally needs one node stopped so it can be used as the upgrade server.
In the case of a PoC this is not an issue, but in a production environment Keystone load may be intensive, and stopping a node for a while may decrease the performance of the other nodes more than expected.
For this reason I prefer to orchestrate the upgrade from an external Docker container. With this method all nodes will be fully running almost all the time.

  • The new container won’t start any service; it will just sync the database schema with the new Keystone version, avoiding the need to stop a node to orchestrate the upgrade.
  • The Docker image is provided by the OpenStack Kolla project. If you are already using Kolla, this procedure won’t be needed, as kolla-ansible already provides an upgrade method.
  • At the moment of writing this blog, Ocata packages were not released into stable repositories. For this reason I use DLRN repositories.
  • Once Ocata is released, please do not use DLRN; use stable packages instead.
  • Use a stable Ocata Docker image with tag 4.0.x if available; it will avoid the repository configuration and package upgrades.
  • NOTE: The upgrade may need more steps depending on your own configuration, e.g. if using fernet tokens more steps are necessary during the upgrade.
  • All Keystone nodes are behind HAProxy.

 

Prepare the upgrade

Start a Keystone Docker container with host networking (needed to communicate with the database nodes directly) and as the root user (needed to install packages).

(host)# docker run -ti --net host -u 0 kolla/centos-binary-keystone:3.0.2 bash

Download the Delorean (DLRN) CentOS trunk repositories

(keystone-upgrade)# curl -Lo /etc/yum.repos.d/delorean.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tested/delorean.repo
(keystone-upgrade)# curl -Lo /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7/delorean-deps.repo

Disable the Newton repository

(keystone-upgrade)# yum-config-manager --disable centos-openstack-newton

Ensure the Newton repository is no longer used by the system

(keystone-upgrade)# yum repolist | grep -i openstack
delorean                        delorean-openstack-glance-0bf9d805886c2  565+255

Update all packages in the Docker container to bump the Keystone version to Ocata.

(keystone-upgrade)# yum clean all && yum update -y

Configure the keystone.conf file; these are my settings. Review your configuration and ensure everything is correct, otherwise it may cause issues in the database.
An important option is default_domain_id; this value provides backward compatibility with users created under the default domain.

(keystone-upgrade)# egrep ^[^#] /etc/keystone/keystone.conf 
[DEFAULT]
debug = False
log_file = /var/log/keystone/keystone.log
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
[database]
connection = mysql+pymysql://keystone:ickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB@192.168.100.10:3306/keystone
max_retries = -1
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.100.215:11211,192.168.100.170:11211
[identity]
default_domain_id = default
[token]
provider = uuid

Check the migration version in the database.
As you will notice, contract/data_migrate/expand are at the same version

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;" 
Warning: Using a password on the command line interface can be insecure.
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |       4 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |       4 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |       4 |
+-----------------------+--------------------------------------------------------------------------+---------+

Before starting to upgrade the database schema, you will need to grant SUPER privileges in the database to the keystone user or set log_bin_trust_function_creators to True.
In my opinion it is safer to set the value to True; I don’t want Keystone running with SUPER privileges.

(mariadb)# mysql -uroot -pnkLMrBibfMTRqOGBAP3UAxdO4kOFfEaPptGM5UDL -h192.168.100.10 keystone -e "set global log_bin_trust_function_creators=1;"

Now use Rally, Tempest or another tool to test/benchmark the Keystone service during the upgrade.
If you don’t want to use one of those tools, just use this for loop.

(host)# for i in {1000..6000} ; do openstack user create --password $i $i; done
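
Once testing is finished, those throwaway users can be removed with a similar loop (this assumes the user names match the loop variable used above).

(host)# for i in {1000..6000} ; do openstack user delete $i; done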

 

Start Upgrade

Check the database status before the upgrade using Doctor; this may raise issues with the configuration. Some of them may be ignored (please ensure it is not a real issue before ignoring it).
As an example, I’m not using fernet tokens, so errors appear about a missing folder.

(keystone-upgrade)# keystone-manage doctor

Remove expired tokens

(keystone-upgrade)# keystone-manage token_flush

Now, expand the database schema to the latest version; you can see the status in keystone.log.
Check the logs for errors before jumping to the next step.

(keystone-upgrade)# keystone-manage db_sync --expand


2017-01-31 13:42:02.772 306 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:42:03.004 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.005 306 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:42:03.310 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.310 306 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:42:03.670 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.671 306 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:42:03.984 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.985 306 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:42:04.185 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:04.185 306 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:42:07.202 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:07.202 306 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:42:07.481 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:07.481 306 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:42:11.334 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:11.334 306 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:42:11.560 306 INFO migrate.versioning.api [-] done

After expanding the database, migrate it to the latest version.
Ensure there are no errors in the Keystone logs.

(keystone-upgrade)# keystone-manage db_sync --migrate

#keystone.log
2017-01-31 13:42:58.771 314 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:42:58.943 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:58.943 314 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:42:59.143 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.143 314 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:42:59.340 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.341 314 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:42:59.698 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.699 314 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:42:59.852 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.852 314 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:43:00.135 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.135 314 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:43:00.545 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.545 314 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:43:00.703 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.703 314 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:43:00.854 314 INFO migrate.versioning.api [-] done

Now look at the migrate_version table; you will notice that expand and data_migrate are at the latest version, but contract is still at the previous version.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;"
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |       4 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |      13 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |      13 |
+-----------------------+--------------------------------------------------------------------------+---------+

 

Every Keystone node, one by one

Go to the Keystone nodes.
Stop the Keystone service, in my case running as WSGI inside Apache

(keystone_nodes)# systemctl stop httpd

Configure the Ocata repositories as was done in the Docker container.
Update the packages. If Keystone shares the node with other OpenStack services, do not update all packages, as that will break the other services.
Update only the required packages.

(keystone_nodes)# yum clean all && yum update -y
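
If the node is shared with other OpenStack services, a hedged alternative is to update only the Keystone-related packages instead of running a full update; the package names below are an assumption based on RDO packaging and may differ in your environment.

(keystone_nodes)# yum update -y openstack-keystone python-keystone python-keystonemiddleware python-keystoneclient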

Configure the Keystone configuration file to the desired state. Your configuration may differ.

(keystone_nodes)# egrep ^[^#] /etc/keystone/keystone.conf 
[DEFAULT]
debug = False
log_file = /var/log/keystone/keystone.log
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
[database]
connection = mysql+pymysql://keystone:ickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB@192.168.100.10:3306/keystone
max_retries = -1
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.100.215:11211,192.168.100.170:11211
[identity]
default_domain_id = default
[token]
provider = uuid

Start the Keystone service.

(keystone_nodes)# systemctl start httpd
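
Before moving on to the next node, it is worth checking that the upgraded node answers API requests again. A simple check is to request the Identity v3 version document directly from the node; the IP below is assumed to be one of the Keystone nodes, replace it with the address of the node you just upgraded.

(keystone_nodes)# curl -s http://192.168.100.215:5000/v3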

 

Finish Upgrade

After all the nodes are updated to the latest version (please ensure all nodes are using the latest packages, otherwise this will fail), contract the Keystone database schema.
Look at keystone.log for errors.

(keystone-upgrade)# keystone-manage db_sync --contract

keystone.log

2017-01-31 13:57:52.164 322 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:57:52.379 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:52.379 322 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:57:52.969 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:52.969 322 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:57:53.462 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.462 322 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:57:53.793 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.793 322 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:57:53.957 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.957 322 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:57:54.111 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:54.112 322 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:57:54.853 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:54.853 322 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:57:56.727 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:56.728 322 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:57:59.529 322 INFO migrate.versioning.api [-] done

Now if we look at the migrate_version table, we will see that the contract version is the latest and matches the other versions (ensure all are at the same version).
This means the database upgrade has been successfully completed.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;"
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |      13 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |      13 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |      13 |
+-----------------------+--------------------------------------------------------------------------+---------+

Revert the log_bin_trust_function_creators value.

(mariadb)# mysql -uroot -pnkLMrBibfMTRqOGBAP3UAxdO4kOFfEaPptGM5UDL -h192.168.100.10 keystone -e "set global log_bin_trust_function_creators=0;"

After finishing the upgrade, the Rally tests should not show any errors. If using HAProxy to load balance the Keystone service, some errors may happen due to connections being dropped while stopping the Keystone service and re-balancing to another Keystone node. This can be avoided by putting the node being updated into maintenance mode in the HAProxy backend.
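
As a sketch of that maintenance mode step, assuming HAProxy exposes an admin socket at /var/lib/haproxy/stats and that the backend/server pair is named keystone_external/keystone1 (both names are assumptions, match them to your HAProxy configuration), the node can be drained before its upgrade and re-enabled afterwards.

(haproxy)# echo "disable server keystone_external/keystone1" | socat stdio /var/lib/haproxy/stats
(haproxy)# echo "enable server keystone_external/keystone1" | socat stdio /var/lib/haproxy/stats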

I have to thank the Keystone team in the #openstack-keystone IRC channel for the help provided with a couple of issues.

Regards, Eduardo Gonzalez

MidoNet Integration with OpenStack Mitaka

MidoNet is an Open Source network virtualization software for IaaS infrastructure.
It decouples your IaaS cloud from your network hardware, creating an intelligent software abstraction layer between your end hosts and your physical network.
This network abstraction layer allows the cloud operator to move what has traditionally been hardware-based network appliances into a software-based multi-tenant virtual domain.

This definition from MidoNet documentation explains what MidoNet is and what MidoNet does.

In this post I will cover my experiences integrating MidoNet with OpenStack.
I used the following configuration:

All servers have CentOS 7.2 installed
OpenStack has been previously installed from RDO packages with multinode Packstack

  • x3 NSDB nodes (Cassandra and ZooKeeper services)
  • x2 Gateway Nodes (Midolman Agent)
  • x1 OpenStack Controller (MidoNet Cluster)
  • x1 OpenStack compute node (Midolman Agent)


NSDB NODES

Disable SELinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Install OpenStack Mitaka release repository

sudo yum install -y centos-release-openstack-mitaka

Add Cassandra repository

cat <<EOF>/etc/yum.repos.d/datastax.repo
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/community
enabled = 1
gpgcheck = 1
gpgkey = https://rpm.datastax.com/rpm/repo_key
EOF

Add Midonet repository

cat <<EOF>/etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-openstack-integration]
name=MidoNet OpenStack Integration
baseurl=http://builds.midonet.org/openstack-mitaka/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
EOF

Clean repo cache and update packages

yum clean all
yum update

Zookeeper Configuration
Install Zookeeper, java and dependencies

yum install -y java-1.7.0-openjdk-headless zookeeper zkdump nmap-ncat

Edit zookeeper configuration file

vi /etc/zookeeper/zoo.cfg

Add all NSDB nodes to the configuration file

server.1=nsdb1:2888:3888
server.2=nsdb2:2888:3888
server.3=nsdb3:2888:3888
autopurge.snapRetainCount=10
autopurge.purgeInterval =12

Create the folder in which ZooKeeper will store its data and change the owner to the zookeeper user

mkdir /var/lib/zookeeper/data
chown zookeeper:zookeeper /var/lib/zookeeper/data

Create a myid file in the ZooKeeper data folder; the ID should match the NSDB node number. Insert that number as follows:

#NSDB1
echo 1 > /var/lib/zookeeper/data/myid
#NSDB2
echo 2 > /var/lib/zookeeper/data/myid
#NSDB3
echo 3 > /var/lib/zookeeper/data/myid

Create the java folder and create a symlink to the Java binary in it

mkdir -p /usr/java/default/bin/
ln -s /usr/lib/jvm/jre-1.7.0-openjdk/bin/java /usr/java/default/bin/java

Start and enable Zookeeper service

systemctl enable zookeeper.service
systemctl start zookeeper.service

Test if zookeeper is working locally

echo ruok | nc 127.0.0.1 2181
imok

Test whether ZooKeeper is working on the remote NSDB nodes

echo stat | nc nsdb3 2181

Zookeeper version: 3.4.5--1, built on 02/08/2013 12:25 GMT
Clients:
 /192.168.100.172:35306[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000000
Mode: follower
Node count: 4

Cassandra configuration
Install Java and Cassandra dependencies

yum install -y java-1.8.0-openjdk-headless dsc22

Edit cassandra yaml file

vi /etc/cassandra/conf/cassandra.yaml

Change cluster_name to midonet
Configure seed_provider seeds to match all NSDB nodes
Configure listen_address and rpc_address to match the hostname of the local node

cluster_name: 'midonet'
....
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "nsdb1,nsdb2,nsdb3"

listen_address: nsdb1
rpc_address: nsdb1

Edit Cassandra’s init script in order to fix a bug in it

vi /etc/init.d/cassandra

Add the next two lines after the # Cassandra startup comment

case "$1" in
    start)
        # Cassandra startup
        echo -n "Starting Cassandra: "
        mkdir -p /var/run/cassandra #Add this line
        chown cassandra:cassandra /var/run/cassandra #Add this line
        su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
        retval=$?
        [ $retval -eq 0 ] && touch $lock_file
        echo "OK"
        ;;

Start and enable Cassandra service

systemctl enable cassandra.service
systemctl start cassandra.service

Check that all NSDB nodes have joined the cluster

nodetool --host 127.0.0.1 status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address          Load       Tokens       Owns (effective)  Host ID                               Rack
UN  192.168.100.172  89.1 KB    256          70.8%             3f1ecedd-8caf-4938-84ad-8614d2134557  rack1
UN  192.168.100.224  67.64 KB   256          60.7%             cb36c999-a6e1-4d98-a4dd-d4230b41df08  rack1
UN  192.168.100.195  25.78 KB   256          68.6%             4758bae8-9300-4e57-9a61-5b1107082964  rack1

Configure OpenStack Controller Nodes (on which the Neutron server is running)

Disable SELinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Install OpenStack Mitaka release repository

sudo yum install -y centos-release-openstack-mitaka

Add Midonet Repository

cat <<EOF>/etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-openstack-integration]
name=MidoNet OpenStack Integration
baseurl=http://builds.midonet.org/openstack-mitaka/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
EOF

Clean repos cache and update the system

yum clean all
yum update

Create an OpenStack user for MidoNet; change the password to match your own

# openstack user create --password temporal midonet
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | ac25c5a77e7c4e4598ccadea89e09969 |
| name     | midonet                          |
| username | midonet                          |
+----------+----------------------------------+

Add the admin role on the services tenant to the midonet user

# openstack role add --project services --user midonet admin
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | bca2c6e1f3da42b0ba82aee401398a8a |
| name      | admin                            |
+-----------+----------------------------------+

Create the MidoNet service in Keystone

# openstack service create --name midonet --description "MidoNet API Service" midonet
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | MidoNet API Service              |
| enabled     | True                             |
| id          | 499059c4a3a040cfb632411408a2be4c |
| name        | midonet                          |
| type        | midonet                          |
+-------------+----------------------------------+

Clean up neutron server
Stop neutron services

openstack-service stop neutron

Drop the Neutron database and recreate it

mysql -u root -p
DROP DATABASE neutron;
Query OK, 157 rows affected (11.50 sec)

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ab4f81b1040a495e';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ab4f81b1040a495e';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye

Remove plugin.ini symbolic link to ml2_conf.ini

#rm /etc/neutron/plugin.ini 
rm: remove symbolic link '/etc/neutron/plugin.ini'? y

Remove the br-tun tunnel bridge used by Neutron on all the nodes

ovs-vsctl del-br br-tun

Install MidoNet packages and remove ml2 package

yum install -y openstack-neutron python-networking-midonet python-neutronclient
yum remove openstack-neutron-ml2

Make a backup of neutron configuration file

cp /etc/neutron/neutron.conf neutron.conf.bak

Edit neutron configuration file

vi /etc/neutron/neutron.conf

Most of the options are already set by our old Neutron configuration; change the ones that apply to match this configuration

[DEFAULT]
core_plugin = midonet.neutron.plugin_v2.MidonetPluginV2
service_plugins = midonet.neutron.services.l3.l3_midonet.MidonetL3ServicePlugin
dhcp_agent_notification = False
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
nova_url = http://controller:8774/v2

[database]
connection = mysql+pymysql://neutron:ab4f81b1040a495e@controller/neutron

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = guest

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
admin_user=neutron
admin_tenant_name=services
identity_uri=http://controller:35357
admin_password=d88f0bd060d64c33

[nova]
region_name = RegionOne
auth_url = http://controller:35357
auth_type = password
password = 9ca36d15e4824d93
project_domain_id = default
project_name = services
tenant_name = services
user_domain_id = default
username = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

In my deployment, these are the options I had to change to configure MidoNet

diff /etc/neutron/neutron.conf neutron.conf.bak 
33c33
< core_plugin = midonet.neutron.plugin_v2.MidonetPluginV2
---
> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
37c37
< service_plugins = midonet.neutron.services.l3.l3_midonet.MidonetL3ServicePlugin
---
> service_plugins =router
120c120
< dhcp_agent_notification = False
---
> #dhcp_agent_notification = true
1087c1087,1088
< lock_path = /var/lib/neutron/tmp
---
> lock_path = $state_path/lock
> 

Create midonet plugins folder

mkdir /etc/neutron/plugins/midonet

Create a file called midonet.ini

vi /etc/neutron/plugins/midonet/midonet.ini

Configure midonet.ini file to match your own configuration options

[MIDONET]
# MidoNet API URL
midonet_uri = http://controller:8181/midonet-api
# MidoNet administrative user in Keystone
username = midonet
password = temporal
# MidoNet administrative user's tenant
project_id = services

Create a symbolic link from midonet.ini to plugin.ini

ln -s /etc/neutron/plugins/midonet/midonet.ini /etc/neutron/plugin.ini

Sync and populate database tables with Midonet plugin

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/midonet/midonet.ini upgrade head" neutron
su -s /bin/sh -c "neutron-db-manage --subproject networking-midonet upgrade head" neutron

Restart nova api and neutron server services

systemctl restart openstack-nova-api.service
systemctl restart neutron-server
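
To confirm that the Neutron server comes back up with the MidoNet plugin loaded, listing the API extensions should return without errors (this assumes admin credentials are sourced on the controller).

neutron ext-list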

Install midonet cluster package

yum install -y midonet-cluster

Configure midonet.conf file

vi /etc/midonet/midonet.conf

Add all NSDB nodes to zookeeper_hosts

[zookeeper]
zookeeper_hosts = nsdb1:2181,nsdb2:2181,nsdb3:2181

Configure MidoNet to make use of the NSDB nodes as ZooKeeper and Cassandra hosts

cat << EOF | mn-conf set -t default
zookeeper {
    zookeeper_hosts = "nsdb1:2181,nsdb2:2181,nsdb3:2181"
}

cassandra {
    servers = "nsdb1,nsdb2,nsdb3"
}
EOF

Set cassandra replication factor to 3

echo "cassandra.replication_factor : 3" | mn-conf set -t default

Grab your admin token

#egrep ^admin_token /etc/keystone/keystone.conf 
admin_token = 7b84d89b32c34b71a697eb1a270807ab

Configure MidoNet to authenticate with Keystone

cat << EOF | mn-conf set -t default
cluster.auth {
    provider_class = "org.midonet.cluster.auth.keystone.KeystoneService"
    admin_role = "admin"
    keystone.tenant_name = "admin"
    keystone.admin_token = "7b84d89b32c34b71a697eb1a270807ab"
    keystone.host = controller
    keystone.port = 35357
}
EOF

Start and enable midonet cluster service

systemctl enable midonet-cluster.service
systemctl start midonet-cluster.service
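
To verify the cluster service is answering, you can query the MidoNet API endpoint used throughout this post (port 8181 on the controller); any HTTP response, even an authentication error, indicates the service is listening.

curl -s http://controller:8181/midonet-api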

Install midonet CLI

yum install -y python-midonetclient

Create a file in your home directory with the MidoNet auth info

#vi ~/.midonetrc

[cli]
api_url = http://controller:8181/midonet-api
username = admin
password = temporal
project_id = admin

Configure Compute nodes

Disable SELinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Install OpenStack Mitaka release repository

sudo yum install -y centos-release-openstack-mitaka

Add Midonet repository

cat <<EOF>/etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-openstack-integration]
name=MidoNet OpenStack Integration
baseurl=http://builds.midonet.org/openstack-mitaka/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
EOF

Clean repos cache and update the system

yum clean all
yum update

Edit qemu.conf

vi /etc/libvirt/qemu.conf

Configure the following options; by default all these options are commented out, so you can paste them wherever you want

user = "root"
group = "root"

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
    "/dev/net/tun"
]

Restart libvirtd service

systemctl restart libvirtd.service

Install nova-network package

yum install -y openstack-nova-network

Disable Nova Network service and restart Nova compute service

systemctl disable openstack-nova-network.service
systemctl restart openstack-nova-compute.service

Install Midolman agent and java packages

yum install -y java-1.8.0-openjdk-headless midolman

Configure midolman.conf

vi /etc/midolman/midolman.conf

Add all NSDB nodes as ZooKeeper hosts

[zookeeper]
zookeeper_hosts = nsdb1:2181,nsdb2:2181,nsdb3:2181

Configure each compute node with an appropriate flavor template located in the /etc/midolman/ folder; they have different hardware resources configured, so use the one that best matches your compute host capabilities

mn-conf template-set -h local -t agent-compute-medium
cp /etc/midolman/midolman-env.sh.compute.medium /etc/midolman/midolman-env.sh

Configure the metadata service. Issue the following commands only once; they will automatically propagate the configuration to all MidoNet agents

echo "agent.openstack.metadata.nova_metadata_url : \"http://controller:8775\"" | mn-conf set -t default
echo "agent.openstack.metadata.shared_secret : 2bfeb930a90d435d" | mn-conf set -t default
echo "agent.openstack.metadata.enabled : true" | mn-conf set -t default

Allow metadata traffic in iptables

iptables -I INPUT 1 -i metadata -j ACCEPT

Remove br-tun bridge

ovs-vsctl del-br br-tun

Start and enable midolman agent service

systemctl enable midolman.service
systemctl start midolman.service

Gateway nodes configuration

Disable SELinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux

Install OpenStack Mitaka release repository

sudo yum install -y centos-release-openstack-mitaka

Add Midonet repository

cat <<EOF>/etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://builds.midonet.org/midonet-5.2/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-openstack-integration]
name=MidoNet OpenStack Integration
baseurl=http://builds.midonet.org/openstack-mitaka/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://builds.midonet.org/misc/stable/el7/
enabled=1
gpgcheck=1
gpgkey=https://builds.midonet.org/midorepo.key
EOF

Clean repos cache and update the system

yum clean all
yum update

Install Midolman agent and java packages

yum install -y java-1.8.0-openjdk-headless midolman

Configure midolman.conf

vi /etc/midolman/midolman.conf

Add all NSDB nodes as ZooKeeper hosts

[zookeeper]
zookeeper_hosts = nsdb1:2181,nsdb2:2181,nsdb3:2181

Configure each gateway node with an appropriate flavor template located in the /etc/midolman/ folder; they have different hardware resources configured, so use the one that best matches your gateway host capabilities

mn-conf template-set -h local -t agent-gateway-medium
cp /etc/midolman/midolman-env.sh.gateway.medium /etc/midolman/midolman-env.sh

Grab the metadata shared secret from nova.conf on any of your Nova nodes

# egrep ^metadata_proxy_shared_secret /etc/nova/nova.conf 
metadata_proxy_shared_secret =2bfeb930a90d435d

Allow metadata traffic in iptables

iptables -I INPUT 1 -i metadata -j ACCEPT

Start and enable midolman agent service

systemctl enable midolman.service
systemctl start midolman.service

Configure encapsulation and register nodes
Enter the MidoNet CLI from a controller node

midonet-cli

Create the tunnel zone with VXLAN encapsulation

midonet> tunnel-zone create name tz type vxlan
tzone0
midonet> list tunnel-zone
tzone tzone0 name tz type vxlan

List the hosts discovered by MidoNet; they should be all the nodes where you configured the MidoNet agents (Midolman)

midonet> list host
host host0 name gateway2 alive true addresses fe80:0:0:0:0:11ff:fe00:1102,169.254.123.1,fe80:0:0:0:0:11ff:fe00:1101,127.0.0.1,0:0:0:0:0:0:0:1,192.168.200.176,fe80:0:0:0:5054:ff:fef9:b2a0,169.254.169.254,fe80:0:0:0:7874:d6ff:fe5b:dea8,192.168.100.227,fe80:0:0:0:5054:ff:fed9:9cc0,fe80:0:0:0:5054:ff:fe4a:e39b,192.168.1.86 flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false
host host1 name gateway1 alive true addresses 169.254.169.254,fe80:0:0:0:3cd1:23ff:feac:a3c2,192.168.1.87,fe80:0:0:0:5054:ff:fea8:da91,127.0.0.1,0:0:0:0:0:0:0:1,fe80:0:0:0:5054:ff:feec:92c1,192.168.200.232,fe80:0:0:0:0:11ff:fe00:1102,169.254.123.1,fe80:0:0:0:0:11ff:fe00:1101,192.168.100.141,fe80:0:0:0:5054:ff:fe20:30fb flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false
host host2 name compute1 alive true addresses fe80:0:0:0:0:11ff:fe00:1101,169.254.123.1,127.0.0.1,0:0:0:0:0:0:0:1,fe80:0:0:0:0:11ff:fe00:1102,192.168.100.173,fe80:0:0:0:5054:ff:fe06:161,fe80:0:0:0:5054:ff:fee3:eb48,192.168.200.251,fe80:0:0:0:5054:ff:fe8d:d22,192.168.1.93,169.254.169.254,fe80:0:0:0:48cb:adff:fe69:f07b flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false

Register each of the nodes in the VXLAN tunnel zone we created before

midonet> tunnel-zone tzone0 add member host host0 address 192.168.100.227
zone tzone0 host host0 address 192.168.100.227
midonet> tunnel-zone tzone0 add member host host1 address 192.168.100.141
zone tzone0 host host1 address 192.168.100.141
midonet> tunnel-zone tzone0 add member host host2 address 192.168.100.173
zone tzone0 host host2 address 192.168.100.173

Create Networks at Neutron
Create an external network

# neutron net-create ext-net --router:external
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| created_at            | 2016-07-03T14:47:30                  |
| description           |                                      |
| id                    | dc15245e-4391-4514-b489-8976373046a3 |
| is_default            | False                                |
| name                  | ext-net                              |
| port_security_enabled | True                                 |
| provider:network_type | midonet                              |
| router:external       | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tags                  |                                      |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3     |
| updated_at            | 2016-07-03T14:47:30                  |
+-----------------------+--------------------------------------+

Create an external subnet in the network we created before; use your own IP ranges to match your environment

# neutron subnet-create ext-net 192.168.200.0/24 --name ext-subnet \
  --allocation-pool start=192.168.200.225,end=192.168.200.240 \
  --disable-dhcp --gateway 192.168.200.1
Created a new subnet:
+-------------------+--------------------------------------------------------+
| Field             | Value                                                  |
+-------------------+--------------------------------------------------------+
| allocation_pools  | {"start": "192.168.200.225", "end": "192.168.200.240"} |
| cidr              | 192.168.200.0/24                                       |
| created_at        | 2016-07-03T14:50:46                                    |
| description       |                                                        |
| dns_nameservers   |                                                        |
| enable_dhcp       | False                                                  |
| gateway_ip        | 192.168.200.1                                          |
| host_routes       |                                                        |
| id                | 234dcc9a-2878-4799-b564-bf3a1bd52cad                   |
| ip_version        | 4                                                      |
| ipv6_address_mode |                                                        |
| ipv6_ra_mode      |                                                        |
| name              | ext-subnet                                             |
| network_id        | dc15245e-4391-4514-b489-8976373046a3                   |
| subnetpool_id     |                                                        |
| tenant_id         | 2f7ee2716b3b4140be57b4a5b26401e3                       |
| updated_at        | 2016-07-03T14:50:46                                    |
+-------------------+--------------------------------------------------------+

Create a tenant network and a subnet on it

# neutron net-create demo-net
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| created_at            | 2016-07-03T14:51:39                  |
| description           |                                      |
| id                    | 075ba699-dc4c-4625-8e0d-0a258a9aeb7d |
| name                  | demo-net                             |
| port_security_enabled | True                                 |
| provider:network_type | midonet                              |
| router:external       | False                                |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tags                  |                                      |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3     |
| updated_at            | 2016-07-03T14:51:39                  |
+-----------------------+--------------------------------------+
# neutron subnet-create demo-net 10.0.20.0/24 --name demo-subnet 
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.0.20.2", "end": "10.0.20.254"} |
| cidr              | 10.0.20.0/24                                 |
| created_at        | 2016-07-03T14:52:32                          |
| description       |                                              |
| dns_nameservers   |                                              |
| enable_dhcp       | True                                         |
| gateway_ip        | 10.0.20.1                                    |
| host_routes       |                                              |
| id                | b299d899-33a3-4bfa-aff4-fda071545bdf         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | demo-subnet                                  |
| network_id        | 075ba699-dc4c-4625-8e0d-0a258a9aeb7d         |
| subnetpool_id     |                                              |
| tenant_id         | 2f7ee2716b3b4140be57b4a5b26401e3             |
| updated_at        | 2016-07-03T14:52:32                          |
+-------------------+----------------------------------------------+

Create a tenant router

# neutron router-create router1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| description           |                                      |
| external_gateway_info |                                      |
| id                    | 258942d8-9d82-4ebd-b829-c7bdfcc973f5 |
| name                  | router1                              |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3     |
+-----------------------+--------------------------------------+

Attach the tenant subnet interface we created before to the router

# neutron router-interface-add router1 demo-subnet
Added interface 06c85a56-368c-4d79-bbf0-4bb077f163e5 to router router1.

Set the external network as router gateway

# neutron router-gateway-set router1 ext-net
Set gateway for router router1

Now you can create an instance in the tenant network

# nova boot --flavor m1.tiny --image 80871834-29dd-4100-b038-f5f83f126204 --nic net-id=075ba699-dc4c-4625-8e0d-0a258a9aeb7d test1
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          |                                                     |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000a                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | q2Cq4kxePSLL                                        |
| config_drive                         |                                                     |
| created                              | 2016-07-03T15:46:19Z                                |
| flavor                               | m1.tiny (1)                                         |
| hostId                               |                                                     |
| id                                   | b8aa46f9-186c-4594-8428-f8dbb16a5e16                |
| image                                | cirros image (80871834-29dd-4100-b038-f5f83f126204) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | test1                                               |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | 2f7ee2716b3b4140be57b4a5b26401e3                    |
| updated                              | 2016-07-03T15:46:20Z                                |
| user_id                              | a2482a91a1f14750b372445d28b07c75                    |
+--------------------------------------+-----------------------------------------------------+
# nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks            |
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| b8aa46f9-186c-4594-8428-f8dbb16a5e16 | test1 | ACTIVE | -          | Running     | demo-net=10.0.20.11 |
+--------------------------------------+-------+--------+------------+-------------+---------------------+

Ensure the instance gets an IP address and that the metadata service is running properly

# nova console-log test1
... # Snippet from the output
Sending discover...
Sending select for 10.0.20.11...
Lease of 10.0.20.11 obtained, lease time 86400
cirros-ds 'net' up at 7.92
checking http://169.254.169.254/2009-04-04/instance-id
successful after 1/20 tries: up 8.22. iid=i-0000000a
...

If you log in to the instance through VNC you should be able to ping other instances
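
For example, from the cirros console you can check east-west connectivity (a minimal sketch; 10.0.20.12 is only an assumed address of a second instance on demo-net, adjust to your environment):

$ ping -c 3 10.0.20.12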

Edge router configuration
Create a new router

# neutron router-create edge-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| description           |                                      |
| external_gateway_info |                                      |
| id                    | 5ecadb64-cb0d-4f95-a00e-aa1dd20a2012 |
| name                  | edge-router                          |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3     |
+-----------------------+--------------------------------------+

Attach the external subnet interface to the router

# neutron router-interface-add edge-router ext-subnet
Added interface e37f1986-c6b1-47f4-8268-02b837ceac17 to router edge-router.

Create an uplink network

# neutron net-create uplink-network --tenant_id admin --provider:network_type uplink
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| created_at            | 2016-07-03T14:57:15                  |
| description           |                                      |
| id                    | 77173ed4-6106-4515-af1c-3683897955f9 |
| name                  | uplink-network                       |
| port_security_enabled | True                                 |
| provider:network_type | uplink                               |
| router:external       | False                                |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tags                  |                                      |
| tenant_id             | admin                                |
| updated_at            | 2016-07-03T14:57:15                  |
+-----------------------+--------------------------------------+

Create a subnet in the uplink network

# neutron subnet-create --tenant_id admin --disable-dhcp --name uplink-subnet uplink-network 192.168.1.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr              | 192.168.1.0/24                                   |
| created_at        | 2016-07-03T15:06:28                              |
| description       |                                                  |
| dns_nameservers   |                                                  |
| enable_dhcp       | False                                            |
| gateway_ip        | 192.168.1.1                                      |
| host_routes       |                                                  |
| id                | 4e98e789-20d3-45fd-a3b5-9bcf02d8a832             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | uplink-subnet                                    |
| network_id        | 77173ed4-6106-4515-af1c-3683897955f9             |
| subnetpool_id     |                                                  |
| tenant_id         | admin                                            |
| updated_at        | 2016-07-03T15:06:28                              |
+-------------------+--------------------------------------------------+

Create a port for each of the gateway nodes. The interface_name must match the NIC you want to use for binding on that gateway node, and the fixed IP address is the uplink address the node will use

# neutron port-create uplink-network --binding:host_id gateway1 --binding:profile type=dict interface_name=eth1 --fixed-ip ip_address=192.168.1.199
Created a new port:
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:host_id       | gateway1                                                                             |
| binding:profile       | {"interface_name": "eth1"}                                                           |
| binding:vif_details   | {"port_filter": true}                                                                |
| binding:vif_type      | midonet                                                                              |
| binding:vnic_type     | normal                                                                               |
| created_at            | 2016-07-03T15:10:06                                                                  |
| description           |                                                                                      |
| device_id             |                                                                                      |
| device_owner          |                                                                                      |
| extra_dhcp_opts       |                                                                                      |
| fixed_ips             | {"subnet_id": "4e98e789-20d3-45fd-a3b5-9bcf02d8a832", "ip_address": "192.168.1.199"} |
| id                    | 7b4f54dd-2b41-42ba-9c5c-cda4640dc550                                                 |
| mac_address           | fa:16:3e:44:a8:c9                                                                    |
| name                  |                                                                                      |
| network_id            | 77173ed4-6106-4515-af1c-3683897955f9                                                 |
| port_security_enabled | True                                                                                 |
| security_groups       | 0cf3e33e-dbd6-4b42-a0bd-6679b5eed4e1                                                 |
| status                | ACTIVE                                                                               |
| tenant_id             | 2f7ee2716b3b4140be57b4a5b26401e3                                                     |
| updated_at            | 2016-07-03T15:10:06                                                                  |
+-----------------------+--------------------------------------------------------------------------------------+

Attach each of the ports to the edge router

# neutron router-interface-add edge-router port=7b4f54dd-2b41-42ba-9c5c-cda4640dc550
Added interface 7b4f54dd-2b41-42ba-9c5c-cda4640dc550 to router edge-router.
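
To double-check the attachment, listing the router ports should now include the uplink port (this is just a verification step I suggest, not part of the original walkthrough):

# neutron router-port-list edge-router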

At this point you have to decide whether to use border routers with BGP enabled or static routes.
Use one of the following links to configure your use case:
https://docs.midonet.org/docs/latest/operations-guide/content/bgp_uplink_configuration.html
https://docs.midonet.org/docs/latest/operations-guide/content/static_setup.html
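
Purely as an illustration of the static approach (follow the linked guide for the real procedure), the upstream router or host facing the gateway nodes needs an address in the uplink subnet (192.168.1.1 was defined as its gateway_ip) and a route towards the OpenStack external network via the uplink port IP created above. A minimal sketch, assuming eth1 as the upstream interface and 10.0.0.0/24 as the external network CIDR (replace both with your own values):

# ip addr add 192.168.1.1/24 dev eth1
# ip route add 10.0.0.0/24 via 192.168.1.199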

Issues I faced during configuration of Midonet

Midolman agent doesn't start:
This was caused by the midolman-env.sh file having more RAM configured than my server actually has.
Edit the file to match your server resources

# egrep ^MAX_HEAP_SIZE /etc/midolman/midolman-env.sh
MAX_HEAP_SIZE="2048M"
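
To lower it, edit the value in place and restart the agent; something like the following should work (1024M is only an example value, and midolman is assumed to be the systemd unit name):

# sed -i 's/^MAX_HEAP_SIZE=.*/MAX_HEAP_SIZE="1024M"/' /etc/midolman/midolman-env.sh
# systemctl restart midolman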



Instances don't boot and fail with the following error:

could not open /dev/net/tun: Permission denied

I had to remove the br-tun bridges in OVS; otherwise OVS locks the device and midolman cannot create the tunnels between compute nodes and gateway nodes.

# ovs-vsctl del-br br-tun
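
You can then confirm that the bridge is gone and see which bridges remain (just a quick verification step):

# ovs-vsctl list-br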



This post is based on my experience integrating Midonet into OpenStack, so some things may not be correct. If you find any issue, please let me know so I can fix it.
Regards, Eduardo Gonzalez
