Deploy OpenStack Designate with Kolla

During the Ocata release cycle, support for OpenStack DNS-as-a-Service (Designate) was implemented in the OpenStack Kolla project.

This post will guide you through a basic deployment and test of the Designate service.

Install the required dependencies and tools for kolla-ansible and Designate.

# yum install -y epel-release
# yum install -y python-pip python-devel libffi-devel gcc openssl-devel ansible ntp wget bind-utils
# pip install -U pip

Install Docker and downgrade to 1.12.6. At the time of writing, libvirt had issues connecting to D-Bus due to SELinux problems with Docker 1.13.

# curl -sSL https://get.docker.io | bash
# yum downgrade docker-engine-1.12.6 docker-engine-selinux-1.12.6
# yum install -y python-docker-py
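
Optionally, verify the daemon stayed at the pinned version before continuing (a quick sanity check, not part of the original procedure):

# docker --version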

Configure the Docker daemon to allow an insecure registry (use the IP address where your remote registry will be located).

# mkdir -p /etc/systemd/system/docker.service.d
# tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --insecure-registry 172.28.128.3:4000
MountFlags=shared
EOF

Reload the systemd daemon, then stop and disable libvirtd and enable and start ntpd and docker.

# systemctl daemon-reload
# systemctl stop libvirtd
# systemctl disable libvirtd
# systemctl enable ntpd docker
# systemctl start ntpd docker

Download the Ocata registry tarball from tarballs.openstack.org; skip this step if the images you use are custom builds or downloaded from Docker Hub.
Create the Kolla registry from the downloaded tarball.

# wget https://tarballs.openstack.org/kolla/images/centos-binary-registry-ocata.tar.gz
# mkdir /opt/kolla_registry
# sudo tar xzf centos-binary-registry-ocata.tar.gz -C /opt/kolla_registry
# docker run -d -p 4000:5000 --restart=always -v /opt/kolla_registry/:/var/lib/registry --name registry registry:2
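
To confirm the registry is serving the Kolla images, you can query the Docker registry v2 API (adjust the IP and port to match your environment):

# curl http://172.28.128.3:4000/v2/_catalog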

Install kolla-ansible.

# pip install kolla-ansible
# cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla/
# cp /usr/share/kolla-ansible/ansible/inventory/* .

Configure the Kolla globals.yml configuration file with the following content.
Change the values where necessary (IP addresses, interface names).
This is a minimal sample configuration.

# vi /etc/kolla/globals.yml
---
kolla_internal_vip_address: "172.28.128.10"
kolla_base_distro: "centos"
kolla_install_type: "binary"
docker_registry: "172.28.128.3:4000"
docker_namespace: "lokolla"
network_interface: "enp0s8"
neutron_external_interface: "enp0s9"

Configure the Designate options in globals.yml.
dns_interface must be reachable over the network from Nova instances if internal DNS resolution is needed.

enable_designate: "yes"
dns_interface: "enp0s8"
designate_backend: "bind9"
designate_ns_record: "sample.openstack.org"

Configure the inventory, adding the nodes to their respective groups.

# vi ~/multinode
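
As a rough sketch (the hostnames are placeholders and the group names follow the sample multinode inventory shipped with kolla-ansible; keep any extra groups from that file intact), the top of the inventory could look like this:

[control]
controller01

[network]
controller01

[compute]
compute01
compute02

[storage]
compute01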

Generate passwords.

# kolla-genpwd

Ensure the environment is ready to deploy by running prechecks.
Do not start the deployment until prechecks succeed.
Fix whatever is necessary first.

# kolla-ansible prechecks -i ~/multinode

Pull the Docker images onto the servers. This step can be skipped because it also happens during deploy, but doing it first ensures all nodes have the images they need and minimizes deployment time.

# kolla-ansible pull -i ~/multinode

Deploy with kolla-ansible and give a woot for Kolla 😉

# kolla-ansible deploy -i ~/multinode

Create the credentials file and source it.

# kolla-ansible post-deploy -i ~/multinode
# source /etc/kolla/admin-openrc.sh

Check that all containers are running and none of them are restarting or exiting.

# docker ps -a --filter status=exited --filter status=restarting
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Install the required Python clients.

# pip install python-openstackclient python-designateclient python-neutronclient
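
Optionally, confirm that the clients and the sourced credentials work with a quick read-only call:

# openstack token issue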

Execute a base OpenStack configuration (public and internal networks, CirrOS image).
Do not execute this script if custom networks are going to be used.

# sh /usr/share/kolla-ansible/init-runonce

Create a sample Designate zone.

# openstack zone create --email admin@sample.openstack.org sample.openstack.org.
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| action         | CREATE                               |
| attributes     |                                      |
| created_at     | 2017-02-22T13:14:39.000000           |
| description    | None                                 |
| email          | admin@sample.openstack.org           |
| id             | 4a44b0c9-bd07-4f5c-8908-523f453f269d |
| masters        |                                      |
| name           | sample.openstack.org.                |
| pool_id        | 85d18aec-453e-45ae-9eb3-748841a1da12 |
| project_id     | 937d49af6cfe4ef080a79f9a833d7c7d     |
| serial         | 1487769279                           |
| status         | PENDING                              |
| transferred_at | None                                 |
| ttl            | 3600                                 |
| type           | PRIMARY                              |
| updated_at     | None                                 |
| version        | 1                                    |
+----------------+--------------------------------------+

Configure designate-sink to make use of the previously created zone; the sink needs the zone_id to automatically create Neutron and Nova records in Designate.

# mkdir -p /etc/kolla/config/designate/
# vi /etc/kolla/config/designate/designate-sink.conf
[handler:nova_fixed]
zone_id = 4a44b0c9-bd07-4f5c-8908-523f453f269d
[handler:neutron_floatingip]
zone_id = 4a44b0c9-bd07-4f5c-8908-523f453f269d

After configuring designate-sink.conf, reconfigure Designate to apply this configuration.

# kolla-ansible reconfigure -i ~/multinode --tags designate
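
Once the reconfigure finishes, you can check that the Designate containers came back up (container names may vary between deployments):

# docker ps --filter name=designate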

List networks.

# neutron net-list
+--------------------------------------+----------+----------------------------------+--------------------------------------------------+
| id                                   | name     | tenant_id                        | subnets                                          |
+--------------------------------------+----------+----------------------------------+--------------------------------------------------+
| 3b56c605-5a01-45be-9ed6-e4c3285e4366 | demo-net | 937d49af6cfe4ef080a79f9a833d7c7d | 7f28f050-77b2-426e-b963-35b682077993 10.0.0.0/24 |
| 6954d495-fb8c-4b0b-98a9-9672a7f65b7c | public1  | 937d49af6cfe4ef080a79f9a833d7c7d | 9bd9feca-40a7-4e82-b912-e51b726ad746 10.0.2.0/24 |
+--------------------------------------+----------+----------------------------------+--------------------------------------------------+

Update the network with a dns_domain.

# neutron net-update 3b56c605-5a01-45be-9ed6-e4c3285e4366 --dns_domain sample.openstack.org.
Updated network: 3b56c605-5a01-45be-9ed6-e4c3285e4366

Ensure dns_domain is properly applied.

# neutron net-show 3b56c605-5a01-45be-9ed6-e4c3285e4366
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2017-02-22T13:13:06Z                 |
| description               |                                      |
| dns_domain                | sample.openstack.org.                |
| id                        | 3b56c605-5a01-45be-9ed6-e4c3285e4366 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1450                                 |
| name                      | demo-net                             |
| port_security_enabled     | True                                 |
| project_id                | 937d49af6cfe4ef080a79f9a833d7c7d     |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 27                                   |
| revision_number           | 6                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 7f28f050-77b2-426e-b963-35b682077993 |
| tags                      |                                      |
| tenant_id                 | 937d49af6cfe4ef080a79f9a833d7c7d     |
| updated_at                | 2017-02-22T13:25:16Z                 |
+---------------------------+--------------------------------------+

Create a new instance in the previously updated network.

# openstack server create \
    --image cirros \
    --flavor m1.tiny \
    --key-name mykey \
    --nic net-id=3b56c605-5a01-45be-9ed6-e4c3285e4366 \
    demo1

Once the instance is ACTIVE, check its associated IP.

# openstack server list
+--------------------------------------+-------+--------+-------------------+------------+
| ID                                   | Name  | Status | Networks          | Image Name |
+--------------------------------------+-------+--------+-------------------+------------+
| d483e4ee-58c2-4e1e-9384-85174630428e | demo1 | ACTIVE | demo-net=10.0.0.3 | cirros     |
+--------------------------------------+-------+--------+-------------------+------------+

List the records in the Designate zone.
As you can see, there are records in Designate associated with the instance IP.

# openstack recordset list sample.openstack.org.
+--------------------------------------+----------------------------------+------+-------------------------------------------+--------+--------+
| id                                   | name                             | type | records                                   | status | action |
+--------------------------------------+----------------------------------+------+-------------------------------------------+--------+--------+
| 4f70531e-c325-4ffd-a8d3-8172bd5163b8 | sample.openstack.org.            | SOA  | sample.openstack.org.                     | ACTIVE | NONE   |
|                                      |                                  |      | admin.sample.openstack.org. 1487770304    |        |        |
|                                      |                                  |      | 3586 600 86400 3600                       |        |        |
| a9a09c5f-ccf1-4b52-8400-f36e8faa9549 | sample.openstack.org.            | NS   | sample.openstack.org.                     | ACTIVE | NONE   |
| aa6cd25d-186e-425b-9153-699d8b0811de | 10-0-0-3.sample.openstack.org.   | A    | 10.0.0.3                                  | ACTIVE | NONE   |
| 713650a5-a45e-470b-9539-74e110b15115 | demo1.None.sample.openstack.org. | A    | 10.0.0.3                                  | ACTIVE | NONE   |
| 6506e6f6-f535-45eb-9bfb-4ac1f16c5c9b | demo1.sample.openstack.org.      | A    | 10.0.0.3                                  | ACTIVE | NONE   |
+--------------------------------------+----------------------------------+------+-------------------------------------------+--------+--------+

Validate that Designate resolves the DNS records.
You can query the Designate mDNS service (port 5354) or the bind9 servers (port 53) directly to validate the result.

# dig +short -p 5354 @172.28.128.3 demo1.sample.openstack.org. A
10.0.0.3
# dig +short -p 53 @172.28.128.3 demo1.sample.openstack.org. A
10.0.0.3

If you find any issue with Designate in kolla-ansible or kolla, please file a bug at https://bugs.launchpad.net/kolla-ansible/+filebug

Regards,
Eduardo Gonzalez

OpenStack Keystone Zero-Downtime upgrade process (N to O)

This blog post shows the Keystone upgrade procedure from the OpenStack Newton release to Ocata with zero downtime.

If doing this in production, please read the release notes, ensure a proper configuration, take database backups, and test the upgrade a thousand times.

A Keystone upgrade normally needs to stop one node in order to use it as the upgrade server.
In the case of a PoC this is not an issue, but in a production environment Keystone load may be intensive, and stopping a node for a while may decrease the performance of the other nodes more than expected.
For this reason I prefer to orchestrate the upgrade from an external Docker container. With this method all nodes remain fully running almost all the time.

  • The new container won't start any service; it will just sync the database schema with the new Keystone version, avoiding the need to stop a node to orchestrate the upgrade.
  • The Docker image is provided by the OpenStack Kolla project. If you are already using Kolla this procedure isn't needed, as kolla-ansible already provides an upgrade method.
  • At the time of writing, Ocata packages were not yet released into stable repositories, so I use the DLRN repositories.
  • Once Ocata is released, please do not use DLRN; use stable packages instead.
  • Use a stable Ocata Docker image (tag 4.0.x) if available; this avoids the repository configuration and package upgrades below.
  • NOTE: The upgrade may need more steps depending on your configuration, e.g. if using Fernet tokens more steps are necessary during the upgrade.
  • All Keystone nodes are behind HAProxy.

 

Prepare the upgrade

Start a Keystone Docker container with host networking (needed to communicate with the database nodes directly) and the root user (needed to install packages).

(host)# docker run -ti --net host -u 0 kolla/centos-binary-keystone:3.0.2 bash

Download the Delorean (DLRN) CentOS trunk repositories.

(keystone-upgrade)# curl -Lo /etc/yum.repos.d/delorean.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tested/delorean.repo
(keystone-upgrade)# curl -Lo /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7/delorean-deps.repo

Disable the Newton repository.

(keystone-upgrade)# yum-config-manager --disable centos-openstack-newton

Ensure the Newton repository is no longer used by the system.

(keystone-upgrade)# yum repolist | grep -i openstack
delorean                        delorean-openstack-glance-0bf9d805886c2  565+255

Update all packages in the Docker container to bump the Keystone version to Ocata.

(keystone-upgrade)# yum clean all && yum update -y

Configure the keystone.conf file; these are my settings. Review your configuration and make sure everything is correct, otherwise it may cause issues in the database.
An important option is default_domain_id; this value provides backward compatibility for users created under the default domain.

(keystone-upgrade)# egrep ^[^#] /etc/keystone/keystone.conf 
[DEFAULT]
debug = False
log_file = /var/log/keystone/keystone.log
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
[database]
connection = mysql+pymysql://keystone:ickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB@192.168.100.10:3306/keystone
max_retries = -1
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.100.215:11211,192.168.100.170:11211
[identity]
default_domain_id = default
[token]
provider = uuid

Check the migration version in the database.
As you will notice, contract/data_migrate/expand are at the same version.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;" 
Warning: Using a password on the command line interface can be insecure.
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |       4 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |       4 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |       4 |
+-----------------------+--------------------------------------------------------------------------+---------+

Before starting to upgrade the database schema, you need to grant SUPER privileges to the keystone database user or set log_bin_trust_function_creators to True.
In my opinion it is safer to set the value to True; I don't want Keystone running with SUPER privileges.

(mariadb)# mysql -uroot -pnkLMrBibfMTRqOGBAP3UAxdO4kOFfEaPptGM5UDL -h192.168.100.10 keystone -e "set global log_bin_trust_function_creators=1;"

Now use Rally, Tempest, or some other tool to test/benchmark the Keystone service during the upgrade.
If you don't want to use one of those tools, just use this for loop.

(host)# for i in {1000..6000} ; do openstack user create --password $i $i; done
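
If you prefer a simpler availability check, a loop that continuously requests tokens (assuming the admin credentials are sourced; adjust the interval to taste) will quickly surface any downtime:

(host)# while true; do openstack token issue -f value -c id || echo "FAILED $(date)"; sleep 1; done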

 

Start Upgrade

Check the database status before the upgrade using doctor; this may raise issues with the configuration. Some of them can be ignored (please make sure it is not a real issue before ignoring it).
For example, I'm not using Fernet tokens, so errors appear about a missing folder.

(keystone-upgrade)# keystone-manage doctor

Remove expired tokens.

(keystone-upgrade)# keystone-manage token_flush

Now expand the database schema to the latest version; you can follow the status in keystone.log.
Check the logs for errors before jumping to the next step.

(keystone-upgrade)# keystone-manage db_sync --expand

#keystone.log
2017-01-31 13:42:02.772 306 INFO migrate.versioning.api [-] 4 -> 5...
2017-01-31 13:42:03.004 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.005 306 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:42:03.310 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.310 306 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:42:03.670 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.671 306 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:42:03.984 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:03.985 306 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:42:04.185 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:04.185 306 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:42:07.202 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:07.202 306 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:42:07.481 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:07.481 306 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:42:11.334 306 INFO migrate.versioning.api [-] done
2017-01-31 13:42:11.334 306 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:42:11.560 306 INFO migrate.versioning.api [-] done

After expanding the database, migrate it to the latest version.
Ensure there are no errors in the Keystone logs.

(keystone-upgrade)# keystone-manage db_sync --migrate

#keystone.log
2017-01-31 13:42:58.771 314 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:42:58.943 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:58.943 314 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:42:59.143 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.143 314 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:42:59.340 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.341 314 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:42:59.698 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.699 314 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:42:59.852 314 INFO migrate.versioning.api [-] done
2017-01-31 13:42:59.852 314 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:43:00.135 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.135 314 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:43:00.545 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.545 314 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:43:00.703 314 INFO migrate.versioning.api [-] done
2017-01-31 13:43:00.703 314 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:43:00.854 314 INFO migrate.versioning.api [-] done

Now look at the migrate_version table; you will notice that expand and data_migrate are at the latest version, but contract is still at the previous version.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;"
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |       4 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |      13 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |      13 |
+-----------------------+--------------------------------------------------------------------------+---------+

 

Every Keystone node, one by one

Go to the Keystone nodes.
Stop the Keystone service; in my case it runs as WSGI inside Apache.

(keystone_nodes)# systemctl stop httpd

Configure the Ocata repositories as done in the Docker container.
Update the packages; if Keystone shares the node with other OpenStack services, do not update all packages, as that would break the other services.
Update only the required packages.

(keystone_nodes)# yum clean all && yum update -y
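
On a shared node, a more conservative alternative is to update only the Keystone-related packages; a rough example (the package names correspond to RDO packaging and may differ in your environment):

(keystone_nodes)# yum update -y openstack-keystone python-keystone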

Configure the Keystone configuration file to the desired state. Your configuration may differ.

(keystone_nodes)# egrep ^[^#] /etc/keystone/keystone.conf 
[DEFAULT]
debug = False
log_file = /var/log/keystone/keystone.log
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
[database]
connection = mysql+pymysql://keystone:ickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB@192.168.100.10:3306/keystone
max_retries = -1
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.100.215:11211,192.168.100.170:11211
[identity]
default_domain_id = default
[token]
provider = uuid

Start the Keystone service.

(keystone_nodes)# systemctl start httpd

 

Finish Upgrade

After all the nodes are updated to the latest version (please ensure every node is running the latest packages, otherwise this will fail), contract the Keystone database schema.
Look at keystone.log for errors.

(keystone-upgrade)# keystone-manage db_sync --contract

keystone.log

2017-01-31 13:57:52.164 322 INFO migrate.versioning.api [-] 4 -> 5... 
2017-01-31 13:57:52.379 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:52.379 322 INFO migrate.versioning.api [-] 5 -> 6... 
2017-01-31 13:57:52.969 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:52.969 322 INFO migrate.versioning.api [-] 6 -> 7... 
2017-01-31 13:57:53.462 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.462 322 INFO migrate.versioning.api [-] 7 -> 8... 
2017-01-31 13:57:53.793 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.793 322 INFO migrate.versioning.api [-] 8 -> 9... 
2017-01-31 13:57:53.957 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:53.957 322 INFO migrate.versioning.api [-] 9 -> 10... 
2017-01-31 13:57:54.111 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:54.112 322 INFO migrate.versioning.api [-] 10 -> 11... 
2017-01-31 13:57:54.853 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:54.853 322 INFO migrate.versioning.api [-] 11 -> 12... 
2017-01-31 13:57:56.727 322 INFO migrate.versioning.api [-] done
2017-01-31 13:57:56.728 322 INFO migrate.versioning.api [-] 12 -> 13... 
2017-01-31 13:57:59.529 322 INFO migrate.versioning.api [-] done

Now if we look at the migrate_version table, we will see that the contract version is at the latest and matches the other versions (ensure all are at the same version).
This means the database upgrade has completed successfully.

(mariadb)# mysql -ukeystone -pickvaHC9opkwbz8z8sy28aLiFNezc7Z6Fm34frcB -h192.168.100.10 keystone -e "select * from migrate_version;"
+-----------------------+--------------------------------------------------------------------------+---------+
| repository_id         | repository_path                                                          | version |
+-----------------------+--------------------------------------------------------------------------+---------+
| keystone              | /usr/lib/python2.7/site-packages/keystone/common/sql/migrate_repo        |     109 |
| keystone_contract     | /usr/lib/python2.7/site-packages/keystone/common/sql/contract_repo       |      13 |
| keystone_data_migrate | /usr/lib/python2.7/site-packages/keystone/common/sql/data_migration_repo |      13 |
| keystone_expand       | /usr/lib/python2.7/site-packages/keystone/common/sql/expand_repo         |      13 |
+-----------------------+--------------------------------------------------------------------------+---------+

Reset the log_bin_trust_function_creators value.

(mariadb)# mysql -uroot -pnkLMrBibfMTRqOGBAP3UAxdO4kOFfEaPptGM5UDL -h192.168.100.10 keystone -e "set global log_bin_trust_function_creators=0;"

After finishing the upgrade, the Rally tests should not show any errors. If using HAProxy to load-balance the Keystone service, some errors may still happen due to connections being dropped while a Keystone service is stopped and traffic is rebalanced to another Keystone node. This can be avoided by putting the node being updated into maintenance mode in the HAProxy backend.
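
As a hedged example, if the HAProxy admin socket is enabled you can drain and later re-enable a node through the runtime API (the socket path and the backend/server names below are placeholders):

(haproxy)# echo "disable server keystone_backend/keystone01" | socat stdio /var/lib/haproxy/haproxy.sock
(haproxy)# echo "enable server keystone_backend/keystone01" | socat stdio /var/lib/haproxy/haproxy.sock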

I have to thank the Keystone team in the #openstack-keystone IRC channel for the help provided with a couple of issues.

Regards, Eduardo Gonzalez

To Conditional or To Skip, that is the Ansible question

Have you ever thought about whether an Ansible task should be skipped with a conditional or without one (a hidden skip)?
Well, this post will analyse both methods.

Let's use an example that produces the same result with both methods and analyse them:

In OpenStack Kolla we found that operators sometimes need to customise policy.json files. That's fine, but the problem is that policy.json files are installed with the services by default, and we don't need/want to maintain policy.json files in our repository because that would surely cause bugs from outdated policy files in the future.

What's the proposed solution? Allow operators to use their own policy files only when those files exist in a custom configuration folder. If custom files are not present, the default policy.json files are already there as part of the software installation. (At the time of writing this change is under review.)

To Conditional method

Code snippet:

  - name: Check if file exists
    stat:
      path: "/tmp/custom_file.json"
    register: check_custom_file_exist

  - name: Copy custom policy when exist
    template:
      src: "/tmp/custom_file.json"
      dest: "/tmp/destination_file.json"
    when: check_custom_file_exist.stat.exists

The first task checks whether the file is present and registers the stat result.
The second task copies the file only when the registered result from the previous task is True (exists == True).

It outputs the following when the file is not present:

PLAY [localhost] ***************************************************************

TASK [Check if file exists] ****************************************************
ok: [localhost]

TASK [Copy custom policy when exist] *******************************************
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0  

We can see that the copy task is skipped with an explicit skipping message.

To Skip method

Code snippet:

  - name: Copy custom policy when exist
    template:
      src: "{{ item }}"
      dest: "/tmp/destination_file.json"
    with_first_found:
      - files:
          - custom_file.json
        skip: True

This playbook contains a single task, which uses the first file found from a list of files. If no file is present, the task is skipped.
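
To reproduce the output below, the playbook can be run locally (the playbook filename is just an example):

$ ansible-playbook -i localhost, -c local skip_policy.yml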

Output from this execution:

PLAY [localhost] ***************************************************************

TASK [Copy custom policy when exist] *******************************************

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=0 

We can see that no task is executed when the custom files are not present; there is no output from the task (a hidden skip).

Analysis and my own opinion

Both methods do the same thing: both copy the custom files when they are present, and both skip the copy task when they are not.
What are the differences between the two methods?

The to_skip method is simpler to read and unified in a single task, while to_conditional needs two tasks.
The to_conditional method takes longer to execute, as it has to check the existence of a file and then evaluate a conditional.

You may think that the to_skip method is better than the to_conditional method; that's right in terms of code syntax and execution time. But...
As both an operator and an infrastructure developer, I always use the to_conditional method, because when I'm deploying something new I want to know what is executed and what is not. With the to_skip method you don't know, because there is no output from the task (not really true), whereas the to_conditional method clearly says skipping.

Execution time is not a problem in most use cases, as this kind of task is not commonly needed in CM systems; only a few tasks require this type of logic.

Regards, Eduardo Gonzalez
