OpenDaylight in a Docker container

This is a quick guide to getting a Docker container running with OpenDaylight inside it.

Clone the OpenDaylight integration repository.

[egonzalez@localhost]$ git clone https://github.com/opendaylight/integration.git

Move to the directory where the CentOS Dockerfile is located.

[egonzalez@localhost]$ cd integration/packaging/docker/centos/

Build the new image. You can tag it with your Docker Hub username (in my case egonzalez90) so you can push it there later.
If you don't want to build a new image, you can use mine instead; docker run -d egonzalez90/opendaylight will download and start the container for you.

[egonzalez@localhost centos]$ docker build -t egonzalez90/opendaylight .

Sending build context to Docker daemon  7.68 kB
Step 1 : FROM centos:7
Trying to pull repository docker.io/library/centos ... 7: Pulling from library/centos
1544084fad81: Pull complete 
df0fc3863fbc: Pull complete 
a3d54b467fad: Pull complete 
a65193109361: Pull complete 
Digest: sha256:a9237ff42b09cc6f610bab60a36df913ef326178a92f3b61631331867178f982
Status: Downloaded newer image for docker.io/centos:7

 ---> a65193109361
Step 2 : MAINTAINER OpenDaylight Project <info@opendaylight.org>
 ---> Running in d3f98f949b11
 ---> 81a1bad2e3a7
Removing intermediate container d3f98f949b11
Step 3 : ADD opendaylight-3-candidate.repo /etc/yum.repos.d/
 ---> 069a9c60878e
Removing intermediate container b9afb18311f3
Step 4 : RUN yum update -y && yum install -y opendaylight
 ---> Running in 559b3970235d

[[[ PACKAGE INSTALLATION STUFF ]]]                                      

Complete!
 ---> 4003e5874b03
Removing intermediate container 559b3970235d
Step 5 : EXPOSE 162 179 1088 1790 1830 2400 2550 2551 2552 4189 4342 5005 5666 6633 6640 6653 7800 8000 8080 8101 8181 8383 12001
 ---> Running in 7defebe8b7e2
 ---> 9668a559bdac
Removing intermediate container 7defebe8b7e2
Step 6 : WORKDIR /opt/opendaylight
 ---> Running in 9298a116dd14
 ---> 5bf42f56e282
Removing intermediate container 9298a116dd14
Step 7 : CMD ./bin/karaf server
 ---> Running in e0a218941b15
 ---> c1a0db72dbbc
Removing intermediate container e0a218941b15
Successfully built c1a0db72dbbc
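
Alternatively, if you skipped the build and plan to use the prebuilt image, pulling it explicitly from Docker Hub should work as well:

[egonzalez@localhost]$ docker pull egonzalez90/opendaylight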

Once the image is built or downloaded, ensure you have it locally

[egonzalez@localhost]$ docker images | grep opendaylight
egonzalez90/opendaylight                              latest              c1a0db72dbbc        About a minute ago   740.6 MB

Start a new container in detached mode.

[egonzalez@localhost]$ docker run -d egonzalez90/opendaylight
ae08898ba6adc30df012513dc6eac54943d9de8c8059e73ade185757fe684c6a
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
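
By default these ports are only reachable on the Docker bridge network. If you want to reach OpenDaylight from outside the Docker host, you can optionally publish the ports you need; a hypothetical example (adjust the list to your needs):

[egonzalez@localhost]$ docker run -d --name opendaylight -p 8181:8181 -p 8101:8101 -p 6633:6633 egonzalez90/opendaylight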

Check if the container is running with:

[egonzalez@localhost]$ docker ps | grep opendaylight 
ae08898ba6ad        egonzalez90/opendaylight   "./bin/karaf server"     14 seconds ago      Up 11 seconds       162/tcp, 179/tcp, 1088/tcp, 1790/tcp, 1830/tcp, 2400/tcp, 2550-2552/tcp, 4189/tcp, 4342/tcp, 5005/tcp, 5666/tcp, 6633/tcp, 6640/tcp, 6653/tcp, 7800/tcp, 8000/tcp, 8080/tcp, 8101/tcp, 8181/tcp, 8383/tcp, 12001/tcp   awesome_khorana

Now check the container information with docker inspect; we are looking for the IP address.

[egonzalez@localhost]$ docker inspect  ae08898ba6ad | grep -i IPAddress
        "SecondaryIPAddresses": null,
        "IPAddress": "172.17.0.3",
                "IPAddress": "172.17.0.3",

Now that you know the container IP address, the next step is to log into Karaf. First, we need to download the Karaf client tool.
You can download the package from the following URL: http://www.apache.org/dyn/closer.lua/karaf/4.0.5/apache-karaf-4.0.5.tar.gz
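
For example, assuming the Apache archive mirror still hosts this release, something like the following should fetch it:

[egonzalez@localhost Downloads]$ wget https://archive.apache.org/dist/karaf/4.0.5/apache-karaf-4.0.5.tar.gz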

Extract the files and move to the new directory

[egonzalez@localhost Downloads]$ tar -xzvf apache-karaf-4.0.5.tar.gz 
[egonzalez@localhost Downloads]$ cd apache-karaf-4.0.5/

Run the client, authenticating against the container IP.

[egonzalez@localhost apache-karaf-4.0.5]$ ./bin/client -a 8101 -h 172.17.0.3 -u karaf -v
client: JAVA_HOME not set; results may vary
13 [main] INFO org.apache.sshd.common.util.SecurityUtils - BouncyCastle not registered, using the default JCE provider
Logging in as karaf
194 [sshd-SshClient[12bb4df8]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Client session created
203 [main] INFO org.apache.sshd.client.session.ClientSessionImpl - Start flagging packets as pending until key exchange is done
204 [sshd-SshClient[12bb4df8]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Server version string: SSH-2.0-SSHD-CORE-0.12.0
321 [sshd-SshClient[12bb4df8]-nio2-thread-3] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at /172.17.0.3:8101 presented unverified DSA key: 09:a0:45:95:7a:dd:94:7c:6b:c3:f9:c0:23:88:1d:b0
324 [sshd-SshClient[12bb4df8]-nio2-thread-3] INFO org.apache.sshd.client.session.ClientSessionImpl - Dequeing pending packets
327 [sshd-SshClient[12bb4df8]-nio2-thread-4] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_FAILURE
338 [sshd-SshClient[12bb4df8]-nio2-thread-5] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_FAILURE
341 [sshd-SshClient[12bb4df8]-nio2-thread-6] INFO org.apache.sshd.client.auth.UserAuthKeyboardInteractive - Received Password authentication  en-US
344 [sshd-SshClient[12bb4df8]-nio2-thread-7] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_SUCCESS
                                                                                           
[[[ OPENDAYLIGHT ASCII ART BANNER ]]]

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown OpenDaylight.

Once the Karaf login succeeds, install a few features like DLUX.

opendaylight-user@root>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core
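
If you want to double check that the features got installed, feature:list with the -i flag lists only installed features, for example:

opendaylight-user@root>feature:list -i | grep dlux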

Now you can log in at the container IP with admin as both the username and the password.

http://172.17.0.3:8181/index.html
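
You can also do a quick check from the command line that RESTCONF is answering; admin/admin are the default credentials, and the modules endpoint below is an assumption based on what this release exposes:

[egonzalez@localhost]$ curl -u admin:admin http://172.17.0.3:8181/restconf/modules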


Best regards

Ceph-ansible baremetal deployment

How many times have you tried to install Ceph? How many times did it fail for no apparent reason?
All Ceph operators should agree with me when I say that the Ceph installer doesn't really work as expected so far.
Yes, I'm talking about ceph-deploy, and that is the main reason why I'm posting this guide about deploying Ceph with Ansible.

In this post, I will show how to install a Ceph cluster with Ansible on baremetal servers.
My configuration is as follows:

  1. 3 x Ceph monitors, 8 GB of RAM each
  2. 3 x OSD nodes, 16 GB of RAM and 3 x 100 GB disks each
  3. 1 x RadosGateway node, 8 GB of RAM

First, download the Ceph-Ansible playbooks.

git clone https://github.com/ceph/ceph-ansible/
Cloning into 'ceph-ansible'...
remote: Counting objects: 5764, done.
remote: Compressing objects: 100% (38/38), done.
remote: Total 5764 (delta 7), reused 0 (delta 0), pack-reused 5726
Receiving objects: 100% (5764/5764), 1.12 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (3465/3465), done.
Checking connectivity... done.

Move to the newly created folder called ceph-ansible

cd ceph-ansible/

Copy the sample vars files; we will configure our environment in these variable files.

cp site.yml.sample site.yml
cp group_vars/all.sample group_vars/all
cp group_vars/mons.sample group_vars/mons
cp group_vars/osds.sample group_vars/osds

The next step is configuring the inventory with our servers. I don't really like using the /etc/ansible/hosts file; I prefer to create a new file per environment inside the playbook's folder.

Create a file with the following content, using your own IPs to match your servers to the desired role inside the cluster.

[root@ansible ~]# vi inventory_hosts

[mons]
192.168.1.48
192.168.1.49
192.168.1.52

[osds]
192.168.1.50
192.168.1.53
192.168.1.54

[rgws]
192.168.1.55

Test connectivity to your servers by pinging them with the Ansible ping module.
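This assumes the Ansible node already has passwordless SSH access to the servers; if it doesn't, copying your key over first should do it (I'm assuming you connect as root here):

[root@ansible ~]# ssh-copy-id root@192.168.1.48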

[root@ansible ~]# ansible -m ping -i inventory_hosts all
192.168.1.48 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.50 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.55 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.53 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.49 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.54 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.1.52 | success >> {
    "changed": false,
    "ping": "pong"
}

Edit the site.yml file. I will remove/comment the mds hosts since I'm not going to use them.

[root@ansible ~]# vi site.yml

- hosts: mons
  become: True
  roles:
  - ceph-mon

- hosts: agents
  become: True
  roles:
  - ceph-agent

- hosts: osds
  become: True
  roles:
  - ceph-osd

#- hosts: mdss
#  become: True
#  roles:
#  - ceph-mds

- hosts: rgws
  become: True
  roles:
  - ceph-rgw

- hosts: restapis
  become: True
  roles:
  - ceph-restapi

Edit the main variable file; this is where we configure our environment.

[root@ansible ~]# vi group_vars/all

Here we configure where the Ceph packages will be installed from; for now we use the upstream code with the stable release Infernalis.

## Configure package origin
ceph_origin: upstream
ceph_stable: true
ceph_stable_release: infernalis

Configure the interface the monitors will listen on.

## Monitor options
monitor_interface: eth2

Here we configure some OSD options, such as the journal size and which networks will be used for public traffic and cluster data replication.

## OSD options
journal_size: 1024
public_network: 192.168.1.0/24
cluster_network: 192.168.200.0/24

Edit the osds variable file.

[root@ansible ~]# vi group_vars/osds

I will use the auto discovery option to let ceph-ansible select empty or unused devices on my servers to create the OSDs.

# Declare devices
osd_auto_discovery: True
journal_collocation: True

Of course you can use other options; I highly suggest you read the variable comments, as they provide valuable information about usage.
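For instance, the snippet below is only a sketch of declaring the OSD devices explicitly in group_vars/osds instead of auto discovering them; the device names are purely illustrative, so use whatever matches your servers.

# Declare devices explicitly instead of using auto discovery
devices:
  - /dev/vdb
  - /dev/vdc
  - /dev/vdd
journal_collocation: True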
We're now ready to deploy Ceph with Ansible using our custom inventory_hosts file.

[root@ansible ~]# ansible-playbook site.yml -i inventory_hosts

After a while, you will have a fully functional ceph cluster.

You may find some issues or bugs when running the playbooks.
There is a lot of effort going into fixing issues in the upstream repository. If you hit a new bug, please open an issue right here:
https://github.com/ceph/ceph-ansible/issues

You can check your cluster status with ceph -s. We can see all OSDs are up and all pgs are active+clean.

[root@ceph-mon1 ~]# ceph -s
    cluster 5ff692ab-2150-41a4-8b6d-001a4da21c9c
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-mon1=192.168.200.141:6789/0,ceph-mon2=192.168.200.180:6789/0,ceph-mon3=192.168.200.232:6789/0}
            election epoch 6, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
     osdmap e10: 9 osds: 9 up, 9 in
            flags sortbitwise
      pgmap v32: 64 pgs, 1 pools, 0 bytes data, 0 objects
            102256 kB used, 896 GB / 896 GB avail
                  64 active+clean

We are going to do some tests.
Create a pool

[root@ceph-mon1 ~]# ceph osd pool create test 128 128
pool 'test' created

Create a big file

[root@ceph-mon1 ~]# dd if=/dev/zero of=/tmp/sample.txt bs=2M count=1000
1000+0 records in
1000+0 records out
2097152000 bytes (2.1 GB) copied, 16.7386 s, 125 MB/s

Upload the file to rados

[root@ceph-mon1 ~]# rados -p test put sample /tmp/sample.txt 
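
If you want to confirm the object actually landed in the pool, listing the pool contents should show it:

[root@ceph-mon1 ~]# rados -p test ls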

Check on which placement group your file is saved

[root@ceph-mon1 ~]# ceph osd map test sample
osdmap e13 pool 'test' (1) object 'sample' -> pg 1.bddbf0b9 (1.39) -> up ([1,0], p1) acting ([1,0], p1)

Query the placement group where your file was uploaded; output similar to the following will be shown.

[root@ceph-mon1 ~]# ceph pg 1.39 query
{
    "state": "active+clean",
    "snap_trimq": "[]",
    "epoch": 13,
    "up": [
        1,
        0
    ],
    "acting": [
        1,
        0
    ],
    "actingbackfill": [
        "0",
        "1"
    ],
    "info": {
        "pgid": "1.39",
        "last_update": "13'500",
        "last_complete": "13'500",
        "log_tail": "0'0",
        "last_user_version": 500,
        "last_backfill": "MAX",
        "last_backfill_bitwise": 0,
        "purged_snaps": "[]",
        "history": {
            "epoch_created": 11,
            "last_epoch_started": 12,
            "last_epoch_clean": 13,
            "last_epoch_split": 0,
            "last_epoch_marked_full": 0,
            "same_up_since": 11,
            "same_interval_since": 11,
            "same_primary_since": 11,
            "last_scrub": "0'0",
            "last_scrub_stamp": "2016-03-16 21:13:08.883121",
            "last_deep_scrub": "0'0",
            "last_deep_scrub_stamp": "2016-03-16 21:13:08.883121",
            "last_clean_scrub_stamp": "0.000000"
        },
        "stats": {
            "version": "13'500",
            "reported_seq": "505",
            "reported_epoch": "13",
            "state": "active+clean",
            "last_fresh": "2016-03-16 21:24:40.930724",
            "last_change": "2016-03-16 21:14:09.874086",
            "last_active": "2016-03-16 21:24:40.930724",
            "last_peered": "2016-03-16 21:24:40.930724",
            "last_clean": "2016-03-16 21:24:40.930724",
            "last_became_active": "0.000000",
            "last_became_peered": "0.000000",
            "last_unstale": "2016-03-16 21:24:40.930724",
            "last_undegraded": "2016-03-16 21:24:40.930724",
            "last_fullsized": "2016-03-16 21:24:40.930724",
            "mapping_epoch": 11,
            "log_start": "0'0",
            "ondisk_log_start": "0'0",
            "created": 11,
            "last_epoch_clean": 13,
            "parent": "0.0",
            "parent_split_bits": 0,
            "last_scrub": "0'0",
            "last_scrub_stamp": "2016-03-16 21:13:08.883121",
            "last_deep_scrub": "0'0",
            "last_deep_scrub_stamp": "2016-03-16 21:13:08.883121",
            "last_clean_scrub_stamp": "0.000000",
            "log_size": 500,
            "ondisk_log_size": 500,
            "stats_invalid": "0",
            "stat_sum": {
                "num_bytes": 2097152000,
                "num_objects": 1,
                "num_object_clones": 0,
                "num_object_copies": 2,
                "num_objects_missing_on_primary": 0,
                "num_objects_degraded": 0,
                "num_objects_misplaced": 0,
                "num_objects_unfound": 0,
                "num_objects_dirty": 1,
                "num_whiteouts": 0,
                "num_read": 0,
                "num_read_kb": 0,
                "num_write": 500,
                "num_write_kb": 2048000,
                "num_scrub_errors": 0,
                "num_shallow_scrub_errors": 0,
                "num_deep_scrub_errors": 0,
                "num_objects_recovered": 0,
                "num_bytes_recovered": 0,
                "num_keys_recovered": 0,
                "num_objects_omap": 0,
                "num_objects_hit_set_archive": 0,
                "num_bytes_hit_set_archive": 0,
                "num_flush": 0,
                "num_flush_kb": 0,
                "num_evict": 0,
                "num_evict_kb": 0,
                "num_promote": 0,
                "num_flush_mode_high": 0,
                "num_flush_mode_low": 0,
                "num_evict_mode_some": 0,
                "num_evict_mode_full": 0
            },
            "up": [
                1,
                0
            ],
            "acting": [
                1,
                0
            ],
            "blocked_by": [],
            "up_primary": 1,
            "acting_primary": 1
        },
        "empty": 0,
        "dne": 0,
        "incomplete": 0,
        "last_epoch_started": 12,
        "hit_set_history": {
            "current_last_update": "0'0",
            "history": []
        }
    },
    "peer_info": [
        {
            "peer": "0",
            "pgid": "1.39",
            "last_update": "13'500",
            "last_complete": "13'500",
            "log_tail": "0'0",
            "last_user_version": 0,
            "last_backfill": "MAX",
            "last_backfill_bitwise": 0,
            "purged_snaps": "[]",
            "history": {
                "epoch_created": 11,
                "last_epoch_started": 12,
                "last_epoch_clean": 13,
                "last_epoch_split": 0,
                "last_epoch_marked_full": 0,
                "same_up_since": 0,
                "same_interval_since": 0,
                "same_primary_since": 0,
                "last_scrub": "0'0",
                "last_scrub_stamp": "2016-03-16 21:13:08.883121",
                "last_deep_scrub": "0'0",
                "last_deep_scrub_stamp": "2016-03-16 21:13:08.883121",
                "last_clean_scrub_stamp": "0.000000"
            },
            "stats": {
                "version": "0'0",
                "reported_seq": "0",
                "reported_epoch": "0",
                "state": "inactive",
                "last_fresh": "0.000000",
                "last_change": "0.000000",
                "last_active": "0.000000",
                "last_peered": "0.000000",
                "last_clean": "0.000000",
                "last_became_active": "0.000000",
                "last_became_peered": "0.000000",
                "last_unstale": "0.000000",
                "last_undegraded": "0.000000",
                "last_fullsized": "0.000000",
                "mapping_epoch": 0,
                "log_start": "0'0",
                "ondisk_log_start": "0'0",
                "created": 0,
                "last_epoch_clean": 0,
                "parent": "0.0",
                "parent_split_bits": 0,
                "last_scrub": "0'0",
                "last_scrub_stamp": "0.000000",
                "last_deep_scrub": "0'0",
                "last_deep_scrub_stamp": "0.000000",
                "last_clean_scrub_stamp": "0.000000",
                "log_size": 0,
                "ondisk_log_size": 0,
                "stats_invalid": "0",
                "stat_sum": {
                    "num_bytes": 0,
                    "num_objects": 0,
                    "num_object_clones": 0,
                    "num_object_copies": 0,
                    "num_objects_missing_on_primary": 0,
                    "num_objects_degraded": 0,
                    "num_objects_misplaced": 0,
                    "num_objects_unfound": 0,
                    "num_objects_dirty": 0,
                    "num_whiteouts": 0,
                    "num_read": 0,
                    "num_read_kb": 0,
                    "num_write": 0,
                    "num_write_kb": 0,
                    "num_scrub_errors": 0,
                    "num_shallow_scrub_errors": 0,
                    "num_deep_scrub_errors": 0,
                    "num_objects_recovered": 0,
                    "num_bytes_recovered": 0,
                    "num_keys_recovered": 0,
                    "num_objects_omap": 0,
                    "num_objects_hit_set_archive": 0,
                    "num_bytes_hit_set_archive": 0,
                    "num_flush": 0,
                    "num_flush_kb": 0,
                    "num_evict": 0,
                    "num_evict_kb": 0,
                    "num_promote": 0,
                    "num_flush_mode_high": 0,
                    "num_flush_mode_low": 0,
                    "num_evict_mode_some": 0,
                    "num_evict_mode_full": 0
                },
                "up": [],
                "acting": [],
                "blocked_by": [],
                "up_primary": -1,
                "acting_primary": -1
            },
            "empty": 0,
            "dne": 0,
            "incomplete": 0,
            "last_epoch_started": 12,
            "hit_set_history": {
                "current_last_update": "0'0",
                "history": []
            }
        }
    ],
    "recovery_state": [
        {
            "name": "Started\/Primary\/Active",
            "enter_time": "2016-03-16 21:13:36.769083",
            "might_have_unfound": [],
            "recovery_progress": {
                "backfill_targets": [],
                "waiting_on_backfill": [],
                "last_backfill_started": "MIN",
                "backfill_info": {
                    "begin": "MIN",
                    "end": "MIN",
                    "objects": []
                },
                "peer_backfill_info": [],
                "backfills_in_flight": [],
                "recovering": [],
                "pg_backend": {
                    "pull_from_peer": [],
                    "pushing": []
                }
            },
            "scrub": {
                "scrubber.epoch_start": "0",
                "scrubber.active": 0,
                "scrubber.waiting_on": 0,
                "scrubber.waiting_on_whom": []
            }
        },
        {
            "name": "Started",
            "enter_time": "2016-03-16 21:13:09.216260"
        }
    ],
    "agent_state": {}
}

That’s all for now.
Regards, Eduardo Gonzalez

Ansible ini_file module, simplifying your DevOps life

If you don't read the docs, one day you'll realize that you're an idiot, as I am (or was).

A few days back, I realized that I had been using the power of Ansible modules wrong ever since I started with it. What happened?

Most of the time I use Ansible it is for OpenStack configuration jobs, and almost all OpenStack projects use INI-formatted files for their configuration.
When I started using Ansible, I searched on Google for how to configure any kind of file with Ansible modules. Almost all the blogs/forums I found talked about the lineinfile module, so I followed those guidelines for the next few months; now I realize I was using Ansible modules the wrong way.
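For reference, the sketch below is roughly what that lineinfile approach looks like; the file path and regexp are only illustrative, and note that lineinfile is not aware of sections, so it can happily match a password option belonging to a different section:

- hosts: localhost
  tasks:
  - name: Change neutron user password (lineinfile approach)
    lineinfile:
      dest: ~/neutron.conf
      insertafter: '^\[keystone_authtoken\]'
      regexp: '^password\s*='
      line: 'password = 12345'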

Ansible has a module called ini_file with which you can change values inside INI-formatted files in an easy way; you don't need to use complicated regular expressions to change a value in a file.

Here are the ini_file module usage docs: http://docs.ansible.com/ansible/ini_file_module.html

We are going to change the Neutron user password in a dummy config file, so we create a simple task that shows how the ini_file module can be used.

- hosts: localhost
  tasks:
  - name: Change neutron user password
    ini_file:
      dest: ~/neutron.conf
      section: keystone_authtoken
      option: password
      value: 12345

Once the task has been applied, we can see the value is written in proper INI style.

cat neutron.conf
[keystone_authtoken]
password = 12345
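
The same change can also be made ad hoc from the command line without writing a playbook; a quick sketch, with localhost and the file path just for illustration:

ansible localhost -m ini_file -a "dest=~/neutron.conf section=keystone_authtoken option=password value=12345"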

How many times have you needed to make a change in an INI-formatted configuration file with Ansible and reached for the lineinfile module?
If the answer is many times, it's OK, you are as dumb as me.

Regards, Eduardo Gonzalez
