Ceph-ansible baremetal deployment

How many times have you tried to install Ceph? How many attempts failed for no apparent reason?
Every Ceph operator should agree with me when I say that the Ceph installer hasn't really worked as expected so far.
Yes, I'm talking about ceph-deploy, and it's the main reason why I'm posting this guide about deploying Ceph with Ansible.

In this post, I will show how to install a Ceph cluster with Ansible on bare-metal servers.
My configuration is as follows:

  1. 3 x Ceph monitors with 8 GB of RAM each
  2. 3 x OSD nodes with 16 GB of RAM and 3 x 100 GB disks each
  3. 1 x RadosGateway node with 8 GB of RAM

First, download the ceph-ansible playbooks:

git clone https://github.com/ceph/ceph-ansible/
Cloning into 'ceph-ansible'...
remote: Counting objects: 5764, done.
remote: Compressing objects: 100% (38/38), done.
remote: Total 5764 (delta 7), reused 0 (delta 0), pack-reused 5726
Receiving objects: 100% (5764/5764), 1.12 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (3465/3465), done.
Checking connectivity... done.

Move into the newly created folder called ceph-ansible:

cd ceph-ansible/

Copy the sample vars files; we will configure our environment in these variable files:

cp site.yml.sample site.yml
cp group_vars/all.sample group_vars/all
cp group_vars/mons.sample group_vars/mons
cp group_vars/osds.sample group_vars/osds
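
Since the cluster will also include a RadosGateway, you may want to copy its vars file too (assuming your checkout ships the sample, as mine did):

cp group_vars/rgws.sample group_vars/rgws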

The next step is to configure the inventory with our servers. I don't really like using the /etc/ansible/hosts file; I prefer to create a new file per environment inside the playbook's folder.

Create a file with the following content, using your own IPs to match your servers to the desired roles inside the cluster:

[root@ansible ~]# vi inventory_hosts

[mons]
192.168.1.48
192.168.1.49
192.168.1.52

[osds]
192.168.1.50
192.168.1.53
192.168.1.54

[rgws]
192.168.1.55
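
Ansible needs SSH access to all these nodes; if you haven't distributed your key yet, a quick loop like this does it (a sketch, assuming the root user and the IPs above):

for ip in 192.168.1.48 192.168.1.49 192.168.1.50 192.168.1.52 192.168.1.53 192.168.1.54 192.168.1.55; do
  ssh-copy-id root@$ip
done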

Test connectivity to your servers by pinging them through the Ansible ping module:

[root@ansible ~]# ansible -m ping -i inventory_hosts all
192.168.1.48 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.50 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.55 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.53 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.49 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.54 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.52 | success >> {
    "changed": false,
    "ping": "pong"
}

Edit the site.yml file. I will comment out the mds hosts since I'm not going to use them:

[root@ansible ~]# vi site.yml

- hosts: mons
  become: True
  roles:
  - ceph-mon

- hosts: agents
  become: True
  roles:
  - ceph-agent

- hosts: osds
  become: True
  roles:
  - ceph-osd

#- hosts: mdss
#  become: True
#  roles:
#  - ceph-mds

- hosts: rgws
  become: True
  roles:
  - ceph-rgw

- hosts: restapis
  become: True
  roles:
  - ceph-restapi

Edit the main variable file; here we are going to configure our environment:

[root@ansible ~]# vi group_vars/all

Here we configure where the Ceph packages will be installed from; for now we use the upstream code with the stable release Infernalis:

## Configure package origin
ceph_origin: upstream
ceph_stable: true
ceph_stable_release: infernalis

Configure the interface the monitors will listen on:

## Monitor options
monitor_interface: eth2

Here we configure some OSD options, like the journal size and which networks will be used for public traffic and cluster data replication:

## OSD options
journal_size: 1024
public_network: 192.168.1.0/24
cluster_network: 192.168.200.0/24
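
For reference, these variables end up rendered into /etc/ceph/ceph.conf on the nodes, roughly like this (a sketch; the exact template output varies per release):

[global]
public_network = 192.168.1.0/24
cluster_network = 192.168.200.0/24
osd_journal_size = 1024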

Edit the osds variable file:

[root@ansible ~]# vi group_vars/osds

I will use the auto discovery option to let ceph-ansible select empty or unused devices on my servers to create the OSDs:

# Declare devices
osd_auto_discovery: True
journal_collocation: True
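
If you prefer full control over which disks become OSDs, the same file accepts an explicit device list instead of auto discovery (a sketch; /dev/sdb, /dev/sdc and /dev/sdd are assumptions matching my 3 x 100 GB disks):

# group_vars/osds -- explicit alternative; disable osd_auto_discovery if you use this
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
journal_collocation: True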

Of course you can use other options; I highly suggest you read the variable comments, as they provide valuable information about their usage.
We're now ready to deploy Ceph with Ansible using our custom inventory_hosts file:

[root@ansible ~]# ansible-playbook site.yml -i inventory_hosts
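
If something fails halfway through or you change a variable later, you can re-run the playbook against a single group of hosts using Ansible's --limit flag, for example:

[root@ansible ~]# ansible-playbook site.yml -i inventory_hosts --limit osds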

After a while, you will have a fully functional Ceph cluster.

You may find some issues or bugs when running the playbooks.
There is a lot of effort going into fixing issues in the upstream repository. If you encounter a new bug, please post an issue here:
https://github.com/ceph/ceph-ansible/issues

You can check your cluster status with ceph -s. We can see all OSDs are up and all pgs are active+clean:

[root@ceph-mon1 ~]# ceph -s
    cluster 5ff692ab-2150-41a4-8b6d-001a4da21c9c
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-mon1=192.168.200.141:6789/0,ceph-mon2=192.168.200.180:6789/0,ceph-mon3=192.168.200.232:6789/0}
            election epoch 6, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
     osdmap e10: 9 osds: 9 up, 9 in
            flags sortbitwise
      pgmap v32: 64 pgs, 1 pools, 0 bytes data, 0 objects
            102256 kB used, 896 GB / 896 GB avail
                  64 active+clean
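
You can also check how the nine OSDs are spread across the three OSD hosts:

[root@ceph-mon1 ~]# ceph osd tree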

Now we are going to run some tests.
Create a pool (the two 128 values are pg_num and pgp_num):

[root@ceph-mon1 ~]# ceph osd pool create test 128 128
pool 'test' created

Create a big file:

[root@ceph-mon1 ~]# dd if=/dev/zero of=/tmp/sample.txt bs=2M count=1000
1000+0 records in
1000+0 records out
2097152000 bytes (2.1 GB) copied, 16.7386 s, 125 MB/s

Upload the file to RADOS:

[root@ceph-mon1 ~]# rados -p test put sample /tmp/sample.txt 
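
To verify the upload, list the pool's objects; our object should be there under the name we gave it:

[root@ceph-mon1 ~]# rados -p test ls
sample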

Check which placement group your file was stored in:

[root@ceph-mon1 ~]# ceph osd map test sample
osdmap e13 pool 'test' (1) object 'sample' -> pg 1.bddbf0b9 (1.39) -> up ([1,0], p1) acting ([1,0], p1)

Query the placement group where your file was uploaded; output similar to the following will be printed:

[root@ceph-mon1 ~]# ceph pg 1.39 query
{
    "state": "active+clean",
    "snap_trimq": "[]",
    "epoch": 13,
    "up": [
        1,
        0
    ],
    "acting": [
        1,
        0
    ],
    "actingbackfill": [
        "0",
        "1"
    ],
    "info": {
        "pgid": "1.39",
        "last_update": "13'500",
        "last_complete": "13'500",
        "log_tail": "0'0",
        "last_user_version": 500,
        "last_backfill": "MAX",
        "last_backfill_bitwise": 0,
        "purged_snaps": "[]",
        "history": {
            "epoch_created": 11,
            "last_epoch_started": 12,
            "last_epoch_clean": 13,
            "last_epoch_split": 0,
            "last_epoch_marked_full": 0,
            "same_up_since": 11,
            "same_interval_since": 11,
            "same_primary_since": 11,
            "last_scrub": "0'0",
            "last_scrub_stamp": "2016-03-16 21:13:08.883121",
            "last_deep_scrub": "0'0",
            "last_deep_scrub_stamp": "2016-03-16 21:13:08.883121",
            "last_clean_scrub_stamp": "0.000000"
        },
        "stats": {
            "version": "13'500",
            "reported_seq": "505",
            "reported_epoch": "13",
            "state": "active+clean",
            "last_fresh": "2016-03-16 21:24:40.930724",
            "last_change": "2016-03-16 21:14:09.874086",
            "last_active": "2016-03-16 21:24:40.930724",
            "last_peered": "2016-03-16 21:24:40.930724",
            "last_clean": "2016-03-16 21:24:40.930724",
            "last_became_active": "0.000000",
            "last_became_peered": "0.000000",
            "last_unstale": "2016-03-16 21:24:40.930724",
            "last_undegraded": "2016-03-16 21:24:40.930724",
            "last_fullsized": "2016-03-16 21:24:40.930724",
            "mapping_epoch": 11,
            "log_start": "0'0",
            "ondisk_log_start": "0'0",
            "created": 11,
            "last_epoch_clean": 13,
            "parent": "0.0",
            "parent_split_bits": 0,
            "last_scrub": "0'0",
            "last_scrub_stamp": "2016-03-16 21:13:08.883121",
            "last_deep_scrub": "0'0",
            "last_deep_scrub_stamp": "2016-03-16 21:13:08.883121",
            "last_clean_scrub_stamp": "0.000000",
            "log_size": 500,
            "ondisk_log_size": 500,
            "stats_invalid": "0",
            "stat_sum": {
                "num_bytes": 2097152000,
                "num_objects": 1,
                "num_object_clones": 0,
                "num_object_copies": 2,
                "num_objects_missing_on_primary": 0,
                "num_objects_degraded": 0,
                "num_objects_misplaced": 0,
                "num_objects_unfound": 0,
                "num_objects_dirty": 1,
                "num_whiteouts": 0,
                "num_read": 0,
                "num_read_kb": 0,
                "num_write": 500,
                "num_write_kb": 2048000,
                "num_scrub_errors": 0,
                "num_shallow_scrub_errors": 0,
                "num_deep_scrub_errors": 0,
                "num_objects_recovered": 0,
                "num_bytes_recovered": 0,
                "num_keys_recovered": 0,
                "num_objects_omap": 0,
                "num_objects_hit_set_archive": 0,
                "num_bytes_hit_set_archive": 0,
                "num_flush": 0,
                "num_flush_kb": 0,
                "num_evict": 0,
                "num_evict_kb": 0,
                "num_promote": 0,
                "num_flush_mode_high": 0,
                "num_flush_mode_low": 0,
                "num_evict_mode_some": 0,
                "num_evict_mode_full": 0
            },
            "up": [
                1,
                0
            ],
            "acting": [
                1,
                0
            ],
            "blocked_by": [],
            "up_primary": 1,
            "acting_primary": 1
        },
        "empty": 0,
        "dne": 0,
        "incomplete": 0,
        "last_epoch_started": 12,
        "hit_set_history": {
            "current_last_update": "0'0",
            "history": []
        }
    },
    "peer_info": [
        {
            "peer": "0",
            "pgid": "1.39",
            "last_update": "13'500",
            "last_complete": "13'500",
            "log_tail": "0'0",
            "last_user_version": 0,
            "last_backfill": "MAX",
            "last_backfill_bitwise": 0,
            "purged_snaps": "[]",
            "history": {
                "epoch_created": 11,
                "last_epoch_started": 12,
                "last_epoch_clean": 13,
                "last_epoch_split": 0,
                "last_epoch_marked_full": 0,
                "same_up_since": 0,
                "same_interval_since": 0,
                "same_primary_since": 0,
                "last_scrub": "0'0",
                "last_scrub_stamp": "2016-03-16 21:13:08.883121",
                "last_deep_scrub": "0'0",
                "last_deep_scrub_stamp": "2016-03-16 21:13:08.883121",
                "last_clean_scrub_stamp": "0.000000"
            },
            "stats": {
                "version": "0'0",
                "reported_seq": "0",
                "reported_epoch": "0",
                "state": "inactive",
                "last_fresh": "0.000000",
                "last_change": "0.000000",
                "last_active": "0.000000",
                "last_peered": "0.000000",
                "last_clean": "0.000000",
                "last_became_active": "0.000000",
                "last_became_peered": "0.000000",
                "last_unstale": "0.000000",
                "last_undegraded": "0.000000",
                "last_fullsized": "0.000000",
                "mapping_epoch": 0,
                "log_start": "0'0",
                "ondisk_log_start": "0'0",
                "created": 0,
                "last_epoch_clean": 0,
                "parent": "0.0",
                "parent_split_bits": 0,
                "last_scrub": "0'0",
                "last_scrub_stamp": "0.000000",
                "last_deep_scrub": "0'0",
                "last_deep_scrub_stamp": "0.000000",
                "last_clean_scrub_stamp": "0.000000",
                "log_size": 0,
                "ondisk_log_size": 0,
                "stats_invalid": "0",
                "stat_sum": {
                    "num_bytes": 0,
                    "num_objects": 0,
                    "num_object_clones": 0,
                    "num_object_copies": 0,
                    "num_objects_missing_on_primary": 0,
                    "num_objects_degraded": 0,
                    "num_objects_misplaced": 0,
                    "num_objects_unfound": 0,
                    "num_objects_dirty": 0,
                    "num_whiteouts": 0,
                    "num_read": 0,
                    "num_read_kb": 0,
                    "num_write": 0,
                    "num_write_kb": 0,
                    "num_scrub_errors": 0,
                    "num_shallow_scrub_errors": 0,
                    "num_deep_scrub_errors": 0,
                    "num_objects_recovered": 0,
                    "num_bytes_recovered": 0,
                    "num_keys_recovered": 0,
                    "num_objects_omap": 0,
                    "num_objects_hit_set_archive": 0,
                    "num_bytes_hit_set_archive": 0,
                    "num_flush": 0,
                    "num_flush_kb": 0,
                    "num_evict": 0,
                    "num_evict_kb": 0,
                    "num_promote": 0,
                    "num_flush_mode_high": 0,
                    "num_flush_mode_low": 0,
                    "num_evict_mode_some": 0,
                    "num_evict_mode_full": 0
                },
                "up": [],
                "acting": [],
                "blocked_by": [],
                "up_primary": -1,
                "acting_primary": -1
            },
            "empty": 0,
            "dne": 0,
            "incomplete": 0,
            "last_epoch_started": 12,
            "hit_set_history": {
                "current_last_update": "0'0",
                "history": []
            }
        }
    ],
    "recovery_state": [
        {
            "name": "Started\/Primary\/Active",
            "enter_time": "2016-03-16 21:13:36.769083",
            "might_have_unfound": [],
            "recovery_progress": {
                "backfill_targets": [],
                "waiting_on_backfill": [],
                "last_backfill_started": "MIN",
                "backfill_info": {
                    "begin": "MIN",
                    "end": "MIN",
                    "objects": []
                },
                "peer_backfill_info": [],
                "backfills_in_flight": [],
                "recovering": [],
                "pg_backend": {
                    "pull_from_peer": [],
                    "pushing": []
                }
            },
            "scrub": {
                "scrubber.epoch_start": "0",
                "scrubber.active": 0,
                "scrubber.waiting_on": 0,
                "scrubber.waiting_on_whom": []
            }
        },
        {
            "name": "Started",
            "enter_time": "2016-03-16 21:13:09.216260"
        }
    ],
    "agent_state": {}
}

That’s all for now.
Regards, Eduardo Gonzalez

Nova VNC flows under the hood

Most OpenStack deployments have a VNC console implemented with nova-novncproxy. This service gives the end user the ability to log into their instances through a web browser.

In this post I'm going to show how a VNC console request works under the hood when using the following command or launching a VNC session through Horizon.

# nova get-vnc-console INSTANCE novnc

First of all, a user connects to Nova and issues a VNC console request for an instance. The Nova API needs to validate the user, so an authentication request is issued to Keystone.
The user receives a token with Nova's endpoint URL in the catalog; with that endpoint and the token, the user makes a request against Nova asking for a VNC session.

GET http://192.168.200.208:5000/v2.0 -H "Accept: application/json" -H \
"User-Agent: python-keystoneclient"

GET http://192.168.200.208:8774/v2/ -H "User-Agent: python-novaclient" -H \
"Accept: application/json" -H "X-Auth-Token: {SHA1}3b6262df9eaba5da33c1004805187806322201f1"

If a name is used in the request instead of an instance ID, Nova needs to check its database to match that name with the corresponding ID, as we can see in the following request:

GET http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers?name=test1 \
-H "User-Agent: python-novaclient" -H "Accept: application/json" \
-H "X-Auth-Token: {SHA1}3b6262df9eaba5da33c1004805187806322201f1"

RESP BODY: {"servers": [{"id": "9165dbda-f54e-4186-b2cb-e6ca05ac53ee", \
"links": [{"href": "http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee", "rel": "self"},\
 {"href": "http://192.168.200.208:8774/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee", \
"rel": "bookmark"}], "name": "test1"}]}

Once the name is matched to an ID, Nova fetches the instance's information (I thought this was done to validate that the instance is in ACTIVE status, but I realized that even when it is STOPPED the request is made anyway):

GET http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee\
 -H "User-Agent: python-novaclient" -H "Accept: application/json" \
 -H "X-Auth-Token: {SHA1}3b6262df9eaba5da33c1004805187806322201f1"

RESP BODY: {"server": {"status": "ACTIVE", "updated": "2016-03-02T17:28:45Z", "hostId": "ca3a874dcad9079fcc6a0b10b0e2efaa394bc66b5335197fdd9c2498", "OS-EXT-SRV-ATTR:host": "liberty", "addresses": {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:aa:1c:32", "version": 4, "addr": "10.0.0.6", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee", "rel": "self"}, {"href": "http://192.168.200.208:8774/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee", "rel": "bookmark"}], "key_name": null, "image": {"id": "bf31eadd-c5f4-40f8-9ddb-30f688ca5e5f", "links": [{"href": "http://192.168.200.208:8774/ee84411cdb8148d28674b129ef482f31/images/bf31eadd-c5f4-40f8-9ddb-30f688ca5e5f", "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-SRV-USG:launched_at": "2016-03-02T17:28:45.000000", "OS-EXT-SRV-ATTR:hypervisor_hostname": "liberty", "flavor": {"id": "1", "links": [{"href": "http://192.168.200.208:8774/ee84411cdb8148d28674b129ef482f31/flavors/1", "rel": "bookmark"}]}, "id": "9165dbda-f54e-4186-b2cb-e6ca05ac53ee", "security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at": null, "OS-EXT-AZ:availability_zone": "nova", "user_id": "d9164a323be649c0a8c5c80fdd5bd585", "name": "test1", "created": "2016-03-02T17:28:34Z", "tenant_id": "ee84411cdb8148d28674b129ef482f31", "OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached": [], "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "metadata": {}}}

With the instance information retrieved, a POST request with the os-getVNCConsole action is made against nova-api, which asks nova-consoleauth for a VNC console:

POST http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee/action \
-H "User-Agent: python-novaclient" -H "Content-Type: application/json" \
-H "Accept: application/json" -H "X-Auth-Token: {SHA1}3b6262df9eaba5da33c1004805187806322201f1"\
-d '{"os-getVNCConsole": {"type": "novnc"}}'


DEBUG nova.api.openstack.wsgi [req-2201b9d6-5711-46d3-ac4d-669094f07527 \
d9164a323be649c0a8c5c80fdd5bd585 ee84411cdb8148d28674b129ef482f31 - - -] \
Action: 'action', calling method: , body: {"os-getVNCConsole": {"type": "novnc"}} \
_process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:789
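
You can replay this step manually with curl, reusing the tenant and instance IDs from the requests above (the token value is a placeholder):

curl -s -X POST \
  http://192.168.200.208:8774/v2/ee84411cdb8148d28674b129ef482f31/servers/9165dbda-f54e-4186-b2cb-e6ca05ac53ee/action \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: $TOKEN" \
  -d '{"os-getVNCConsole": {"type": "novnc"}}'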

Nova-consoleauth receives the console request and creates an access URL, generating a temporary token for the VNC console:

INFO nova.consoleauth.manager [req-d4def6f9-1ab9-4626-b6a8-d81643ea5eb4 d9164a323be649c0a8c5c80fdd5bd585 ee84411cdb8148d28674b129ef482f31 - - -] \
Received Token: 3dfcd011-28f1-4cf3-8f5c-8cd18de4560e, \
{'instance_uuid': u'9165dbda-f54e-4186-b2cb-e6ca05ac53ee', \
'access_url': u'http://192.168.200.208:6080/vnc_auto.html?token=3dfcd011-28f1-4cf3-8f5c-8cd18de4560e',\
 'token': u'3dfcd011-28f1-4cf3-8f5c-8cd18de4560e', 'last_activity_at': 1456940028.356214, \
'internal_access_path': None, 'console_type': u'novnc', 'host': u'liberty', 'port': u'5900'}

Nova-consoleauth answers nova-api, which in turn answers the user with an access URL.
This URL contains the following:

  • HTTP or HTTPS connection to nova-novncproxy IP
  • Nova-novncproxy port
  • A token to validate the VNC connection
RESP BODY: {"console": {"url": "http://192.168.200.208:6080/vnc_auto.html?token=3dfcd011-28f1-4cf3-8f5c-8cd18de4560e", "type": "novnc"}}

+-------+--------------------------------------------------------------------------------------+
| Type  | Url                                                                                  |
+-------+--------------------------------------------------------------------------------------+
| novnc | http://192.168.200.208:6080/vnc_auto.html?token=3dfcd011-28f1-4cf3-8f5c-8cd18de4560e |
+-------+--------------------------------------------------------------------------------------+

Up to now, the nova-novncproxy service could have been stopped or not used at all; it is at this point that the proxy server enters the game.
The user connects through a web browser to the nova-novncproxy URL provided by Nova before:

DEBUG nova.console.websocketproxy [-] 192.168.200.1: \
new handler Process vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:828

Nova-novncproxy validates the token issued with the URL against nova-consoleauth:

nova.consoleauth.manager [req-399c7b58-700a-4779-b215-b12d10056813 - - - - -] \
Checking Token: 3dfcd011-28f1-4cf3-8f5c-8cd18de4560e, True

When the token is validated, nova-novncproxy maps the compute node's private IP (port 5900 in this case) to the nova-novncproxy public IP (port 6080):

INFO nova.console.websocketproxy [req-399c7b58-700a-4779-b215-b12d10056813 - - - - -]\
   7: connect info: {u'instance_uuid': u'9165dbda-f54e-4186-b2cb-e6ca05ac53ee', u'\
internal_access_path': None, u'last_activity_at': 1456940028.356214, \
u'console_type': u'novnc', u'host': u'liberty', u'token': u'3dfcd011-28f1-4cf3-8f5c-8cd18de4560e', \
u'access_url': u'http://192.168.200.208:6080/vnc_auto.html?token=3dfcd011-28f1-4cf3-8f5c-8cd18de4560e'\
, u'port': u'5900'}

We can see how the Python novncproxy process binds both IP/port pairs:

# ps aux | grep vnc
nova     14840  1.2  0.7 362096 41000 ?        S    18:53   0:14 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/

# netstat -putona | grep 14840
tcp        0      0 192.168.200.208:6080    192.168.200.1:59918     ESTABLISHED 14840/python2        keepalive (3,13/0/0)
tcp        0      0 192.168.122.73:57764    192.168.122.73:5900     ESTABLISHED 14840/python2        keepalive (3,13/0/0)

Nova-novncproxy then establishes the connection between the instance and the user's browser session:

INFO nova.console.websocketproxy [req-399c7b58-700a-4779-b215-b12d10056813 - - - - -]\
   7: connecting to: liberty:5900

Libvirt attaches a VNC console to the instance, as we can see in the XML provided by the virsh command.
Also, port 5900 is now bound by the qemu-kvm process:


# virsh dumpxml 2
...
<graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0' keymap='en-us'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
...
# netstat -putona | grep 5900
tcp        0      0 0.0.0.0:5900            0.0.0.0:*               LISTEN      5910/qemu-kvm        off (0.00/0/0)
tcp        0      0 192.168.122.73:5900     192.168.122.73:57702    ESTABLISHED 5910/qemu-kvm        off (0.00/0/0)
tcp        0      0 192.168.122.73:57702    192.168.122.73:5900     ESTABLISHED 11118/python2        keepalive (1,92/0/0)

Nova-novncproxy keeps the connection alive until the browser session ends:

DEBUG nova.console.websocketproxy [-] \
Reaing zombies, active child count is 1 vmsg /usr/lib/python2.7/site-packages/websockify/websocket.py:828

When a token is not valid while authenticating against nova-consoleauth, we see a message like the following:

INFO nova.console.websocketproxy [req-9164b32d-3ce1-441b-82c7-6c23c9a354d0 - - - - -] \
handler exception: The token '3dfcd011-28f1-4cf3-8f5c-8cd18de4560e' is invalid or has expired

Regards.
Eduardo Gonzalez

Ansible ini_file module, simplifying your DevOps life

If you don't read the docs, one day you'll realize that you're an idiot, as I am|was.

A few days back, I realized that I had been misusing the power of Ansible modules ever since I started with the tool. What happened?

Most of my Ansible time is spent on OpenStack configuration jobs, and almost all OpenStack projects use INI-formatted files for their configuration.
When I started using Ansible, I searched Google for how to configure any kind of file with Ansible modules. Almost all the blogs/forums I found talked about the lineinfile module, so that's what I used for the next few months; now I realize I was using Ansible modules the wrong way.
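
For contrast, this is roughly what that looked like with lineinfile (a sketch; note the regexp has no notion of INI sections, so it simply matches the first password line it finds in the file):

- name: Change neutron user password (the lineinfile way)
  lineinfile:
    dest: ~/neutron.conf
    regexp: '^password\s*='
    line: 'password = 12345'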

Ansible has a module called ini_file that changes values inside INI-formatted files in an easy way; you don't need complicated regular expressions just to change a value in a file.

Here you have ini_file module usage docs: http://docs.ansible.com/ansible/ini_file_module.html

We are going to change the Neutron user password in a dummy config file, so we create a simple task in which we can see how the ini_file module is used:

- hosts: localhost
  tasks:
  - name: Change neutron user password
    ini_file:
      dest: ~/neutron.conf
      section: keystone_authtoken
      option: password
      value: 12345

Once the task has been applied, we can see the values written in proper INI style:

cat neutron.conf
[keystone_authtoken]
password = 12345
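
The module can also remove options cleanly; for example, this sketch deletes the same key using the module's documented state parameter:

- name: Remove neutron user password
  ini_file:
    dest: ~/neutron.conf
    section: keystone_authtoken
    option: password
    state: absent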

How many times have you needed to make a change in an INI-formatted configuration file with Ansible and reached for the lineinfile module?
If the answer is many times, it's OK, you are a dummy like me.

Regards, Eduardo Gonzalez
