Working with affinity/anti-affinity groups OpenStack

In a previous post, you learned how to segregate resources with Availability Zones and Host Aggregates. Those methods allow the end user to specify where, and on which types of resources, their instances should run.

In this post, you will learn how to tell nova-scheduler where to place your instances based on two policies. These policies define whether instances should share the same hypervisor (affinity rule) or run on different hypervisors (anti-affinity rule), depending on the user's needs.

First, you need to modify nova.conf and allow nova-scheduler to filter based on affinity rules. Add the ServerGroupAntiAffinityFilter and ServerGroupAffinityFilter filters to the scheduler_default_filters option.

# vi /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Restart nova-scheduler to apply changes

systemctl restart openstack-nova-scheduler

Once nova-scheduler has been restarted, we can create a server group with the affinity policy (all instances in this group will be launched on the same hypervisor)

nova server-group-create instancestogethergroup affinity
+--------------------------------------+------------------------+---------------+---------+----------+
| Id                                   | Name                   | Policies      | Members | Metadata |
+--------------------------------------+------------------------+---------------+---------+----------+
| 27abe662-c37e-431c-9715-0d2137fc5519 | instancestogethergroup | [u'affinity'] | []      | {}       |
+--------------------------------------+------------------------+---------------+---------+----------+

Now create two instances, adding the --hint group=GROUP-ID option to specify the group the instances will be members of.

nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=27abe662-c37e-431c-9715-0d2137fc5519 affinity1
nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=27abe662-c37e-431c-9715-0d2137fc5519 affinity2

Ensure the instances are properly mapped to the group.

nova server-group-get 27abe662-c37e-431c-9715-0d2137fc5519 
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+
| Id                                   | Name                   | Policies      | Members                                                                            | Metadata |
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+
| 27abe662-c37e-431c-9715-0d2137fc5519 | instancestogethergroup | [u'affinity'] | [u'b8b72a0a-c981-430e-a909-13d23d928655', u'8affefff-0072-47e3-8d11-2ddf26e48b82'] | {}       |
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+

Once the instances are running, ensure they share the same hypervisor, as the affinity policy specifies.

# nova show affinity1 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az
# nova show affinity2 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az  
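Another way to check this, assuming the hypervisor hostname shown above (compute2az), is to list the instances hosted on that hypervisor:

# nova hypervisor-servers compute2az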

Now we create a second server group, this time based on the anti-affinity policy.

nova server-group-create farinstancesgroup anti-affinity
+--------------------------------------+-------------------+--------------------+---------+----------+
| Id                                   | Name              | Policies           | Members | Metadata |
+--------------------------------------+-------------------+--------------------+---------+----------+
| 988a9fd2-3a97-481e-b083-fee36b33009d | farinstancesgroup | [u'anti-affinity'] | []      | {}       |
+--------------------------------------+-------------------+--------------------+---------+----------+

Launch two instances and attach them to the anti-affinity group.

nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=988a9fd2-3a97-481e-b083-fee36b33009d anti-affinity1
nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=988a9fd2-3a97-481e-b083-fee36b33009d anti-affinity2

Ensure the instances are in the anti-affinity group

nova server-group-get 988a9fd2-3a97-481e-b083-fee36b33009d 
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+
| Id                                   | Name              | Policies           | Members                                                                            | Metadata |
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+
| 988a9fd2-3a97-481e-b083-fee36b33009d | farinstancesgroup | [u'anti-affinity'] | [u'cfb45193-9a7c-436f-ac2d-59a7a9a854ae', u'25dc8671-0c9a-4774-90cf-7394380f91ef'] | {}       |
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+

Once the instances are running, ensure they run on different hypervisors, as the anti-affinity policy specifies.

# nova show anti-affinity1 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az
# nova show anti-affinity2 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1az   
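Note that on a deployment with only two compute nodes, booting a third member of the anti-affinity group should fail with a "No valid host was found" error, because no remaining hypervisor can satisfy the policy. For example (anti-affinity3 is just an illustrative name):

nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=988a9fd2-3a97-481e-b083-fee36b33009d anti-affinity3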

Regards, Eduardo Gonzalez

Migrate from keystone v2.0 to keystone v3 OpenStack Liberty

Migrating from keystone v2.0 to v3 isn't as easy as just changing the endpoints in the database; every service must be configured to authenticate against keystone v3.

I've been working on this for the past few days, looking for a method that makes life easier for operators who need this kind of migration.
I have to thank Adam Young for his work; I followed his blog to get a first configuration idea, and after that I configured all core services to make use of keystone v3.
If you want to check Adam's blog, follow this link: http://adam.younglogic.com/2015/05/rdo-v3-only/

I used OpenStack Liberty installed with RDO packstack on CentOS 7 servers.
The example IP used is 192.168.200.178; use your own according to your needs.
The password used for all services is PASSWD1234; use your own passwords, which you can find in the packstack answer file.

Horizon

First we configure Horizon with keystone v3 as below:

vi /etc/openstack-dashboard/local_settings

OPENSTACK_API_VERSIONS = {
    "identity": 3
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
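Depending on your deployment, you may also need to point Horizon explicitly at the v3 endpoint; if OPENSTACK_KEYSTONE_URL is set in local_settings, it should reference v3 (the IP below is the one used in this guide):

OPENSTACK_KEYSTONE_URL = "http://192.168.200.178:5000/v3"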

Keystone

Check your current identity endpoints

mysql  --user keystone_admin --password=PASSWD1234  keystone -e "select interface, url from endpoint where service_id =  (select id from service where service.type = 'identity');"

Change your public, admin and internal endpoints so that they end in v3 instead of v2.0

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:5000/v3' where  interface ='internal' and  service_id =  (select id from service where service.type = 'identity');"

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:5000/v3' where  interface ='public' and  service_id =  (select id from service where service.type = 'identity');"

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "update endpoint set   url  = 'http://192.168.200.178:35357/v3' where  interface ='admin' and  service_id =  (select id from service where service.type = 'identity');"

Ensure the endpoints were properly updated

mysql  --user keystone_admin --password=PASSWD1234   keystone -e "select interface, url from endpoint where service_id =  (select id from service where service.type = 'identity');"
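The three interfaces should now point at the v3 URLs, similar to the following (your IP will differ):

+-----------+---------------------------------+
| interface | url                             |
+-----------+---------------------------------+
| internal  | http://192.168.200.178:5000/v3  |
| public    | http://192.168.200.178:5000/v3  |
| admin     | http://192.168.200.178:35357/v3 |
+-----------+---------------------------------+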

Create a source file or edit keystonerc_admin with the following data

vi v3_keystone

unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=PASSWD1234
export OS_AUTH_URL=http://192.168.200.178:5000/v3
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_IDENTITY_API_VERSION=3
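
Load the new credentials into your shell before running any client commands:

source v3_keystone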

Comment out both pipelines, in the public_api and admin_api sections

vi /usr/share/keystone/keystone-dist-paste.ini

[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
#pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service

[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
#pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension s3_extension crud_extension admin_service

Comment out the v2.0 entries in the composite:main and composite:admin sections.

[composite:main]
use = egg:Paste#urlmap
#/v2.0 = public_api
/v3 = api_v3
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
#/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api

Restart httpd to apply changes

systemctl restart httpd

Check that keystone and horizon are working properly.
The command below should print a user list; if it does not, review the configuration from the previous steps.

openstack user list

Glance

Edit the following files, with the content below:

vi /etc/glance/glance-api.conf 
vi /etc/glance/glance-registry.conf 
vi /etc/glance/glance-cache.conf 

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = glance
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Comment the following lines:

#auth_host=127.0.0.1
#auth_port=35357
#auth_protocol=http
#identity_uri=http://192.168.200.178:35357
#admin_user=glance
#admin_password=PASSWD1234
#admin_tenant_name=services

These lines should also be commented out in the keystone_authtoken section of all the other OpenStack core services.

Edit the files below and comment out the lines inside the keystone_authtoken section.

vi /usr/share/glance/glance-api-dist.conf 
vi /usr/share/glance/glance-registry-dist.conf 

[keystone_authtoken]
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http

Restart glance services

openstack-service restart glance

Ensure glance service is working

openstack image list

Nova

Edit the file below and comment out the lines inside keystone_authtoken

vi /usr/share/nova/nova-dist.conf

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http

Edit nova.conf and add the auth content inside keystone_authtoken. Don't forget to comment out the lines related to the old auth method, as was done in the Glance section.

vi /etc/nova/nova.conf

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = nova
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Configure nova authentication against neutron

[neutron]
          
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Restart nova services to apply changes

openstack-service restart nova

Check if nova works

openstack hypervisor list

Neutron

Comment out or remove the following entries in api-paste.ini and add the new auth lines

vi /etc/neutron/api-paste.ini 

[filter:authtoken]
#identity_uri=http://192.168.200.178:35357
#admin_user=neutron
#admin_password=PASSWD1234
#auth_uri=http://192.168.200.178:5000/v2.0
#admin_tenant_name=services

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Configure v3 authentication for the metadata service; remember to comment out the old auth lines

vi /etc/neutron/metadata_agent.ini

[DEFAULT]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000
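
The old auth lines to comment out in metadata_agent.ini typically look like the following (a sketch based on a default packstack deployment; your exact values may differ):

#auth_url = http://192.168.200.178:5000/v2.0
#auth_region = RegionOne
#admin_tenant_name = services
#admin_user = neutron
#admin_password = PASSWD1234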

Configure neutron server with v3 auth

vi /etc/neutron/neutron.conf

nova_admin_auth_url = http://192.168.200.178:5000
# nova_admin_tenant_id =1fb93c84c6474c5ea92c0ed5f7d4a6a7
nova_admin_tenant_name = services


[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = neutron
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#auth_uri = http://192.168.200.178:5000/v2.0
#identity_uri = http://192.168.200.178:35357
#admin_tenant_name = services
#admin_user = neutron
#admin_password = PASSWD1234

Configure neutron auth against nova services

[nova]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = nova
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

Restart neutron services to apply changes

openstack-service restart neutron

Test correct neutron functionality

openstack network list

Cinder

Edit api-paste.ini with the following content

vi /etc/cinder/api-paste.ini 

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = cinder
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000
#admin_tenant_name=services
#auth_uri=http://192.168.200.178:5000/v2.0
#admin_user=cinder
#identity_uri=http://192.168.200.178:35357
#admin_password=PASSWD1234

Restart cinder services to apply changes

openstack-service restart cinder

Ensure cinder is properly running

openstack volume create --size 1 testvolume
openstack volume list

Now you can check whether nova is working fine: create an instance and ensure it reaches the ACTIVE state.

openstack server create --flavor m1.tiny --image cirros --nic net-id=a1aa6336-9ae2-4ffb-99f5-1b6d1130989c testinstance
openstack server list

If any error occurs, review configuration files

Swift

Configure the proxy server auth against keystone v3

vi /etc/swift/proxy-server.conf

[filter:authtoken]
log_name = swift
signing_dir = /var/cache/swift
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_plugin = password
auth_url = http://192.168.200.178:35357
username = swift
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#auth_uri = http://192.168.200.178:5000/v2.0
#identity_uri = http://192.168.200.178:35357
#admin_tenant_name = services
#admin_user = swift
#admin_password = PASSWD1234
delay_auth_decision = 1
cache = swift.cache
include_service_catalog = False

Restart swift services to apply changes

openstack-service restart swift

Swift commands must be issued with python-openstackclient instead of swiftclient.
If swiftclient is used, the -V 3 option must be passed to avoid issues.
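
For example, a sketch of a swiftclient invocation against keystone v3, reusing the credentials from the rc file created earlier:

swift -V 3 --os-auth-url http://192.168.200.178:5000/v3 --os-username admin --os-password PASSWD1234 --os-project-name admin --os-user-domain-name Default --os-project-domain-name Default list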

Check if swift works fine

openstack container create testcontainer

Ceilometer

Configure the ceilometer service to authenticate against keystone v3

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = ceilometer
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

[service_credentials]

os_auth_url = http://192.168.200.178:5000/v3
os_username = ceilometer
os_tenant_name = services
os_password = PASSWD1234
os_endpoint_type = internalURL
os_region_name = RegionOne

Restart ceilometer services

openstack-service restart ceilometer

Check ceilometer functionality

ceilometer statistics -m memory

Heat

Configure Heat authentication; since trusts are not stable, use the password auth method

vi /etc/heat/heat.conf

# Allowed values: password, trusts
#deferred_auth_method = trusts
deferred_auth_method = password

Configure auth_uri and keystone_authtoken section

# From heat.common.config
#
# Unversioned keystone url in format like http://0.0.0.0:5000. (string value)
#auth_uri =
auth_uri = http://192.168.200.178:5000

[keystone_authtoken]

auth_plugin = password
auth_url = http://192.168.200.178:35357
username = heat
password = PASSWD1234
project_name = services
user_domain_name = Default
project_domain_name = Default
auth_uri=http://192.168.200.178:5000

#admin_user=heat
#admin_password=PASSWD1234
#admin_tenant_name=services
#identity_uri=http://192.168.200.178:35357
#auth_uri=http://192.168.200.178:5000/v2.0

Comment out or remove the heat-dist auth entries to avoid conflicts with your config files

vi /usr/share/heat/heat-dist.conf 

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#auth_uri = http://127.0.0.1:5000/v2.0
#signing_dir = /tmp/keystone-signing-heat

Restart heat services to apply changes

openstack-service restart heat

Ensure heat authentication is properly configured with a simple heat template

heat stack-create --template-file sample.yaml teststack
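
The sample.yaml referenced above can be any minimal template; a sketch that does not depend on pre-existing images or networks could look like this:

heat_template_version: 2015-04-30

description: Minimal stack to verify heat authentication

resources:
  # OS::Heat::RandomString needs no external resources, so it only exercises authentication
  test_random:
    type: OS::Heat::RandomString
    properties:
      length: 8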

Most issues occur in the authentication between the nova and neutron services; if instances do not launch as expected, review the [nova] and [neutron] sections.

Best regards, Eduardo Gonzalez

Configure Neutron DVR OpenStack Liberty

Distributed Virtual Routers, aka DVR, were created to avoid a single point of failure on neutron nodes.
When using standard routers, all traffic passes through the Neutron network nodes. Inside the network nodes, router namespaces are created that route all traffic and handle NAT forwarding between instances and public networks. When a network node goes down, instance traffic is no longer available until a new namespace is created and started on another network node.
Distributed routers are a way to avoid the SPOF that neutron nodes used to be. When using DVR, router namespaces are created directly on the compute nodes, where all instance and L3 traffic is routed.

If you want to know more about DVR, check these links:
http://blog.gampel.net/2014/12/openstack-neutron-distributed-virtual.html
http://blog.gampel.net/2014/12/openstack-dvr2-floating-ips.html
http://blog.gampel.net/2015/01/openstack-DVR-SNAT.html

A previous OpenStack Liberty installation is required; mine was done with RDO packstack.

Configure all Neutron Servers

Edit ml2 configuration file with the following:

# vi /etc/neutron/plugins/ml2/ml2_conf.ini

mechanism_drivers = openvswitch,l2population
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
vni_ranges = 10:100
vxlan_group = 224.1.1.1
enable_security_group = True

Edit neutron configuration file, enable DVR and uncomment dvr_base_mac option

# vi /etc/neutron/neutron.conf

router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00

Configure l3 agent to use dvr_snat

# vi /etc/neutron/l3_agent.ini

agent_mode = dvr_snat

Restart neutron server

systemctl restart neutron-server

Configure all Compute Nodes

Install ml2 package

yum install openstack-neutron-ml2

Edit openvswitch agent file as below:

# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini 

l2_population = True
arp_responder = True
enable_distributed_routing = True

Enable DVR and select an interface driver to be used by l3 agent

# vi /etc/neutron/l3_agent.ini

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr

Edit ml2 configuration file as below:

# vi /etc/neutron/plugins/ml2/ml2_conf.ini

type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
vni_ranges = 10:100
vxlan_group = 224.1.1.1
enable_security_group = True

Start and enable the l3 and metadata agents on the compute nodes

systemctl start neutron-l3-agent neutron-metadata-agent
systemctl enable neutron-l3-agent neutron-metadata-agent
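
You can check that the l3 and metadata agents on the compute nodes registered correctly and are alive:

neutron agent-list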

Create an external bridge with an external IP associated on it

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.100.4                                                          
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
ONBOOT=yes

Take an unused interface connected to the same network as the IP configured on br-ex, and edit it to be used as an OVS port by br-ex

# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
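
The external network created later in this post uses the physical network name extnet, so the OVS agent must map extnet to br-ex. Packstack deployments usually set this already; if yours does not, the mapping would look like this (a sketch):

# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
# extnet must match the --provider:physical_network used when creating the external network
bridge_mappings = extnet:br-ex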

Restart the network service to apply the interface changes, and restart the openvswitch agent

systemctl restart network
systemctl restart neutron-openvswitch-agent

Create an external network and a subnet on it

neutron net-create external_network --provider:network_type flat --provider:physical_network extnet  --router:external --shared
neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.100.100,end=192.168.100.150 --gateway=192.168.100.1 external_network 192.168.100.0/24

Create a router and associate external network as router gateway

neutron router-create router1
neutron router-gateway-set router1 external_network

Create an internal network, a subnet and associate an interface to the router

neutron net-create private_network
neutron subnet-create --name private_subnet private_network 10.0.1.0/24
neutron router-interface-add router1 private_subnet
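
The boot commands below reference the internal network by its UUID, which is specific to this deployment; you can look up your own with, for example:

neutron net-list | grep private_network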

Boot 2 instances

nova boot --flavor m1.tiny --image cirros --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df test1
nova boot --flavor m1.tiny --image cirros --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df test2

Create 2 floating IPs and associate them to the instances

neutron floatingip-create external_network
neutron floatingip-create external_network
nova floating-ip-associate test1 192.168.100.101
nova floating-ip-associate test2 192.168.100.102
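
The addresses assigned to the floating IPs may differ in your environment; you can list the ones that were allocated with:

neutron floatingip-list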

Test that everything works as expected by pinging the floating IPs

# ping 192.168.100.101
# ping 192.168.100.102

As you can see, on the network nodes a snat namespace is created

# sudo ip netns
qdhcp-154da7a8-fa49-415e-9d35-c840b144a8df
snat-77fef58a-6d0c-4e96-b4b6-5d8e81ebead3

On the compute nodes, a qrouter namespace is created, together with a fip namespace for the instances with associated floating IPs running on that node.

# sudo ip netns
fip-4dfdabb0-d2d6-4d4a-8c00-84df834eec8b
qrouter-77fef58a-6d0c-4e96-b4b6-5d8e81ebead3

Best regards, Eduardo Gonzalez
