Automating provisioning with Ansible – using modules

In the previous post, we learned the basics of Ansible and how to use it to execute a command – represented by a module – on a group of remote hosts. In this post, we will look at some useful modules in more detail and learn more about idempotency and state.

Installing software with the apt module

Maybe one of the most common scenarios that you will encounter when using Ansible is that you want to install a certain set of packages on a set of remote hosts. In other words, you want to use Ansible to invoke a package manager. Ansible comes with modules for all major package managers – yum for Red Hat systems, apt for Debian-derived systems like Ubuntu, and even the Solaris package managers. In our example, we assume that you have a set of Ubuntu hosts on which you need to install a certain package, say Docker. Assuming that these hosts are in your inventory in the group servers, the command to do this is as follows.

export ANSIBLE_HOST_KEY_CHECKING=False
ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m apt \
  -a 'name=docker.io update_cache=yes state=present'

Most options of this command are the same as in my previous post. We use a defined private key and a defined inventory file and instruct Ansible to use the user root when logging into the machines. With the switch -m, we ask Ansible to run the apt module. With the switch -a, we pass a set of parameters to this module, i.e. a set of key-value pairs. Let us look at these parameters in detail.

The first parameter, name, is simply the package that we want to install. The parameter update_cache instructs apt to update the package information before installing the package, which is the equivalent of apt-get update. The last parameter is the parameter state. This defines the target state that we want to achieve. In our case, we want the package to be present.
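Other target states are possible as well. As an illustration (a sketch – see the apt module documentation for the full list of states), declaring state=absent would remove the package again, while state=latest would also upgrade it if a newer version is available.

ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m apt \
  -a 'name=docker.io state=absent'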

When we run this command for the first time against a newly provisioned host, chances are that the host does not yet have Docker installed, and the apt module will install it. We will receive a rather lengthy JSON-formatted response, starting as follows.

206.189.56.178 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "cache_update_time": 1569866674,
    "cache_updated": true,
    "changed": true,
    "stderr": "",
    "stderr_lines": [],
    ...

In the output, we see the line "changed": true, which indicates that Ansible has actually changed the state of the target system. This is what we expect – the package was not yet installed, we want the latest version to be installed, and this requires a change.

Now let us run this command a second time. This time, the output will be considerably shorter.

206.189.56.178 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "cache_update_time": 1569866855,
    "cache_updated": true,
    "changed": false
}

Now the attribute changed in the JSON output is set to false. In fact, the module has first determined the current state and found that the package is already installed. It has then looked at the target state and found that current state and target state are equal. Therefore, no action is necessary and the module does not perform any actual change.

This example highlights two very important and very general (and related) properties of Ansible modules. First, they are idempotent, i.e. executing them more than once results in the same state as executing them only once. This is very useful if you manage groups of hosts. If the command fails for one host, you can simply execute it again for all hosts, and it will not do any harm on the hosts where the first run worked.

Second, an Ansible task does not (in general – there are exceptions like the command module) specify an action, but a target state. The module is then supposed to determine the current state, compare it to the target state, and take only those actions necessary to move the system from the current state to the target state.
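Because modules work against a declared target state, Ansible can also tell us what it would change without actually changing anything. As a quick sketch (the exact output format may vary with your Ansible version), adding the --check flag runs the module in dry-run mode:

# Dry run: report whether a change would be made, without touching the hosts
ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m apt \
  -a 'name=docker.io state=present' \
  --check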

These are very general principles, and the exact way in which they are implemented is specific to the module in question. Let us look at a second example to see how this works in practice.

Working with files

Next, we will look at some modules that allow us to work with files. First, there is a module called copy that can be used to copy a file from the control host (i.e. the host on which Ansible is running) to all nodes. The following commands will create a local file called test.asc and distribute it to all nodes in the servers group in the inventory, using /tmp/test.asc as target location.

export ANSIBLE_HOST_KEY_CHECKING=False
echo "Hello World!" > test.asc
ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m copy \
  -a 'src=test.asc dest=/tmp/test.asc'

After we have executed this command, let us look at the first host in the inventory to verify that this has worked. We first use grep to strip the group headers from the inventory file, then awk to get the first IP address, and finally scp to manually copy the file back from the target machine to a local file test.asc.check. We can then look at this file to see that it contains what we expect.

ip=$(grep -v "\[" hosts.ini | awk 'NR == 1')
scp -i ~/.ssh/do_k8s \
    root@$ip:/tmp/test.asc test.asc.check
cat test.asc.check
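Alternatively, we could let Ansible do the verification for us. As a sketch, the stat module gathers information about a file on the remote hosts – including a checksum – across all nodes in one go:

ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m stat \
  -a 'path=/tmp/test.asc'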

Again, it is instructive to run the copy command twice. If we do this, we see that as long as the local version of the file is unchanged, Ansible will not repeat the operation and will leave the destination file alone. Again, this follows the state principle – as the node is already in the target state (file present with the expected content), no action is necessary. If, however, we change the local file and resubmit the command, Ansible will again detect a deviation from the target state and copy the changed file again.

It is also possible to fetch files from the nodes. This is done with the fetch module. As this will in general produce more than one file (one per node), the destination that we specify is a directory and Ansible will create one subdirectory for each node on the local machine and place the files there.

export ANSIBLE_HOST_KEY_CHECKING=False
ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m fetch \
  -a 'src=/tmp/test.asc dest=.'
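This per-host subdirectory layout can be changed with the flat parameter. A hedged sketch – with flat=yes, Ansible skips the subdirectories, so the destination name must be made unique per host, for example via the inventory_hostname variable:

# Only safe with flat=yes if the destination name is unique per host
ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m fetch \
  -a 'src=/tmp/test.asc dest=fetched/{{ inventory_hostname }}.asc flat=yes'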

Creating users and ssh keys

When provisioning a new host, maybe the first thing that you want to do is to create a new user on the host and provide SSH keys to be able to use this user going forward. This is especially important when using a platform like DigitalOcean which by default only creates a root account.

The first step is to actually create the user on the hosts, and of course there is an Ansible module for this – the user module. The following command will create a user called ansible_user on all hosts in the servers group.

export ANSIBLE_HOST_KEY_CHECKING=False
ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m user \
  -a 'name=ansible_user'
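The user module accepts many more parameters than just the name. For example (a sketch – see the module documentation for the full list), we could assign the new user a login shell at creation time:

ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m user \
  -a 'name=ansible_user shell=/bin/bash state=present'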

To use this user via Ansible, we need to create an SSH key pair and distribute it. Recall that when SSH keys are used to log in to host A (“server”) from host B (“client”), the public key needs to be present on host A, whereas the private key stays on host B. On host A, the key needs to be appended to the file authorized_keys in the .ssh directory of the user's home directory.

So let us first create our key pair. Assuming that you have OpenSSH installed on your local machine, the following command will create a private / public RSA key pair with 2048 bit key length and no passphrase and store it in ~/.keys in two files – the private key will be in ansible_key, the public key in ansible_key.pub.

ssh-keygen -b 2048 -f ~/.keys/ansible_key -t rsa -P ""

Next, we want to distribute the public key to all hosts in our inventory, i.e. for each host, we want to append the key to the file authorized_keys in the user's SSH directory. In our case, instead of appending the key, we can simply overwrite the authorized_keys file with our newly created public key. Of course, we can use Ansible for this task. First, we use the file module to create a directory .ssh in the new user's home directory. Note that we still use our initial user root and the corresponding private key.

ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m file \
  -a 'path=/home/ansible_user/.ssh owner=ansible_user mode=0700 state=directory group=ansible_user'

Next, we use the copy module to copy the public key into the newly created directory under the target file name authorized_keys (there is also a dedicated module called authorized_key that we could use for this purpose).

ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m copy \
  -a 'src=~/.keys/ansible_key.pub dest=/home/ansible_user/.ssh/authorized_keys group=ansible_user owner=ansible_user mode=0700'
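As an aside, the authorized_key module mentioned above would append the key instead of overwriting the file, which is safer if the user already has other keys installed. A hedged sketch – here the shell expands the file content on the control machine before Ansible parses the module arguments:

ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m authorized_key \
  -a "user=ansible_user key='$(cat ~/.keys/ansible_key.pub)'"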

We can verify that this worked by again using the ping module. This time, we use the newly created user and the corresponding private key.

ansible servers \
  -i hosts.ini \
  -u ansible_user \
  --private-key ~/.keys/ansible_key \
  -m ping

However, there is still a problem. For many purposes, our user is only useful if it can run sudo without being prompted for a password. There are several ways to achieve this, the easiest being to add the following line to the file /etc/sudoers, which will allow the user ansible_user to run any command using sudo without a password.

ansible_user ALL = NOPASSWD: ALL

How do we add this line? Again, there is an Ansible module that comes to our rescue – lineinfile. This module can be used to enforce the presence of defined lines in a file, at specified locations. The documentation describes all the options, but for our purpose, the usage is rather simple.

ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m lineinfile \
  -a 'path=/etc/sudoers state=present line="ansible_user      ALL = NOPASSWD: ALL"'

Again, this module follows the principles of state and idempotency – if you execute this command twice, the output will indicate that the second execution does not result in any changes on the target system.
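A word of caution: a syntax error in /etc/sudoers can lock you out of sudo entirely. The lineinfile module supports a validate parameter that runs a check command against a temporary copy of the file before replacing the original – something like the following sketch:

ansible servers \
  -i hosts.ini \
  -u root \
  --private-key ~/.ssh/do_k8s \
  -m lineinfile \
  -a 'path=/etc/sudoers state=present line="ansible_user      ALL = NOPASSWD: ALL" validate="visudo -cf %s"'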

So far, we have used the ansible command line client to execute commands ad hoc. This, however, is not the usual way of using Ansible. Instead, you would typically compose a playbook that contains the necessary commands and use Ansible to run this playbook. In the next post, we will learn how to create and work with playbooks.
