When building more sophisticated Ansible playbooks, you will find that a linear execution is not always sufficient, and the need for more advanced control structures and ways to organize your playbooks arises. In this post, we will cover the corresponding mechanisms that Ansible has up its sleeve.
Loops
Ansible allows you to build loops that execute a task more than once with different parameters. As a simple example, consider the following playbook.
```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Loop with a static list of items
      debug:
        msg: "{{ item }}"
      loop:
        - A
        - B
```
This playbook contains only one play. In the play, we first limit the execution to localhost and then use the connection keyword to instruct Ansible not to use ssh but execute commands directly to speed up execution (which of course only works for localhost). Note that more generally, the connection keyword specifies a so-called connection plugin of which Ansible offers a few more than just ssh and local.
The next task has an additional keyword loop. The value of this attribute is a list in YAML format, having the elements A and B. The loop keyword instructs Ansible to run this task twice per host. For each execution, it will assign the corresponding value of the loop variable to the built-in variable item so that it can be evaluated within the loop.
If you run this playbook, you will get the following output.
```
PLAY [localhost] *************************************

TASK [Gathering Facts] *******************************
ok: [localhost]

TASK [Loop with a static list of items] **************
ok: [localhost] => (item=A) => {
    "msg": "A"
}
ok: [localhost] => (item=B) => {
    "msg": "B"
}
```
So we see that even though there is only one host, the loop body has been executed twice, and within each iteration, the expression {{ item }} evaluates to the corresponding item in the list over which we loop.
Loops are most useful in combination with Jinja2 expressions. In fact, you can iterate over everything that evaluates to a list. The following example demonstrates this. Here, we loop over all environment variables. To do this, we use Jinja2 and Ansible facts to get a dictionary with all environment variables, then convert this to a list using the items() method and use this list as argument to loop.
```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Loop over all environment variables
      debug:
        msg: "{{ item.0 }}"
      loop: "{{ ansible_facts.env.items() | list }}"
      loop_control:
        label: "{{ item.0 }}"
```
This task also demonstrates loop controls. This is just a set of attributes that we can specify to control the behaviour of the loop. In this case, we set the label attribute which specifies the output that Ansible prints out for each loop iteration. The default is to print the entire item, but this can quickly get messy if your item is a complex data structure.
You might ask yourself why we use the list filter in the Jinja2 expression – after all, the items() method should return a list. In fact, this is only true for Python 2. I found that in Python 3, this method returns something called a dictionary view, which we then need to convert into a list using the filter. This version will work with both Python 2 and Python 3.
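If you prefer to avoid calling a Python method on the dictionary altogether, Ansible (since version 2.6) also offers the dict2items filter, which sidesteps the Python 2 / Python 3 difference. A sketch of the same loop using this filter:

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    # dict2items turns a dictionary into a list of entries of the
    # form {key: ..., value: ...}, so we access item.key instead of item.0
    - name: Loop over all environment variables via dict2items
      debug:
        msg: "{{ item.key }}"
      loop: "{{ ansible_facts.env | dict2items }}"
      loop_control:
        label: "{{ item.key }}"
```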
Also note that when you use loops in combination with register, the registered variable will be populated with the results of all loop iterations. To this end, Ansible will populate the variable to which the register refers with a list called results. This list will contain one entry for each loop iteration, and each entry will contain the item and the module-specific output. To see this in action, run my example playbook and have a look at its output.
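As a minimal sketch of this behaviour, the following play registers the output of a loop and then accesses one iteration via the results list (the variable name loopOutput is just an example):

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Loop and register the output
      debug:
        msg: "{{ item }}"
      loop:
        - A
        - B
      register: loopOutput
    # loopOutput.results is a list with one entry per iteration; each
    # entry contains the item and the module-specific output
    - name: Print the item of the first iteration
      debug:
        msg: "{{ loopOutput.results[0].item }}"
```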
Conditions
In addition to loops, Ansible allows you to execute specific tasks only if a certain condition is met. You could, for instance, have a task that populates a variable, and then execute a subsequent task only if this variable has a certain value.
The below example is a bit artificial, but demonstrates this nicely. In a first task, we create a random value, either 0 or 1, using Jinja2 templating. Then, we execute a second task only if this value is equal to one.
```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Populate variable with random value
      set_fact:
        randomVar: "{{ range(0, 2) | random }}"
    - name: Print value
      debug:
        var: randomVar
    - name: Execute additional statement if randomVar is 1
      debug:
        msg: "Variable is equal to one"
      when: randomVar == "1"
```
To create our random value, we combine the Jinja2 range method with the random filter which picks a random element from a list. Note that the result will be a string, either “0” or “1”. In the last task within this play, we then use the when keyword to restrict execution based on a condition. This condition is in Jinja2 syntax (without the curly braces, which Ansible will add automatically) and can be any valid expression which evaluates to either true or false. If the condition evaluates to false, then the task will be skipped (for this host).
Care needs to be taken when combining loops with conditional execution. Let us take a look at the following example.
```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Combine loop with when
      debug:
        msg: "{{ item }}"
      loop: "{{ range(0, 3) | list }}"
      when: item == 1
      register: loopOutput
    - debug:
        var: loopOutput
```
Here, the condition specified with when is evaluated once for each loop iteration, and if it evaluates to false, the iteration is skipped. However, the results array in the output still contains an item for this iteration. When working with the output, you might want to evaluate the skipped attribute for each item which will tell you whether the loop iteration was actually executed (be careful when accessing this, as it is not present for those items that were not skipped).
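One way to deal with this safely is to filter the results list before processing it, for instance with the rejectattr filter, which we can use to drop all entries that carry a skipped attribute. A sketch, reusing the play from above:

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Combine loop with when
      debug:
        msg: "{{ item }}"
      loop: "{{ range(0, 3) | list }}"
      when: item == 1
      register: loopOutput
    # rejectattr('skipped', 'defined') removes all result entries that
    # have a skipped attribute, leaving only the iterations that ran
    - name: Print the items of the iterations that actually ran
      debug:
        msg: "{{ loopOutput.results | rejectattr('skipped', 'defined') | map(attribute='item') | list }}"
```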
Handlers
Handlers provide a different approach to conditional execution. Suppose, for instance, you have a sequence of tasks that each update the configuration file of a web server. When this actually resulted in a change, you want to restart the web server. To achieve this, you can define a handler.
Handlers are similar to tasks, with the difference that in order to be executed, they need to be notified. The actual execution is queued until the end of the playbook, and even if the handler is notified more than once, it is only executed once (per host). To instruct Ansible to notify a handler, we can use the notify attribute for the respective task. Ansible will, however, only notify the handler if the task results in a change. To illustrate this, let us take a look at the following playbook.
```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Create empty file
      copy:
        content: ""
        dest: test
        force: no
      notify: handle new file
  handlers:
    - name: handle new file
      debug:
        msg: "Handler invoked"
```
This playbook defines one task and one handler. Handlers are defined in a separate section of the playbook which contains a list of handlers, similarly to tasks. They are structured as tasks, having a name and an action. To notify a handler, we add the notify attribute to a task, using exactly the name of the handler as argument.
In our example, the first (and only) task of the play will create an empty file test, but only if the file is not yet present. Thus if you run this playbook for the first time (or manually remove the file), this task will result in a change. If you run the playbook once more, it will not result in a change.
Correspondingly, upon the first execution, the handler will be notified and execute at the end of the play. For subsequent executions, the handler will not run.
Handler names should be unique, so you cannot use handler names to make a handler listen for different events. There is, however, a way to decouple handlers and notifications using topics. To make a handler listen for a topic, we can add the listen keyword to the handler, and use the name of the topic instead of the handler name when defining the notification, see the documentation for details.
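As a sketch of how this looks (the handler and topic names here are made up for illustration), two handlers can subscribe to the same topic, and a task notifies the topic rather than an individual handler:

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Create empty file
      copy:
        content: ""
        dest: test
        force: no
      # notify the topic, not an individual handler
      notify: new file created
  handlers:
    # both handlers listen for the same topic and will run
    # if any task notifies that topic and results in a change
    - name: first handler
      debug:
        msg: "First handler invoked"
      listen: new file created
    - name: second handler
      debug:
        msg: "Second handler invoked"
      listen: new file created
```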
Handlers can be problematic, as they are change triggered, not state triggered. If, for example, a task results in a change and notifies a handler, but the playbook fails in a later task, the trigger is lost. When you run the playbook again, the task will most likely not result in a change any more, and no additional notification is created. Effectively, the handler never gets executed. There is, however, a flag --force-handlers to force execution of handlers even if the playbook fails.
Using roles and includes
Playbooks are a great way to automate provisioning steps, but for larger projects, they easily get a bit messy and difficult to maintain. To better organize larger projects, Ansible has the concept of a role.
Essentially, roles are snippets of tasks, variables and handlers that are organized as reusable units. Suppose, for instance, that in all your playbooks, you do a similar initial setup for all machines – create a user, distribute SSH keys and so forth. Instead of repeating the same tasks and the variables to which they refer in all your playbooks, you can organize them as a role.
What makes roles a bit more complex to use initially is that roles are backed by a defined directory structure that we need to understand first. Specifically, Ansible will look for roles in a subdirectory called (surprise) roles in the directory in which the playbook is located (it is also possible to maintain system-wide roles in /etc/ansible/roles, but it is obviously much harder to put these roles under version control). In this subdirectory, Ansible expects a directory for each role, named after the role.
Within this directory, each type of object defined by the role is placed in a separate subdirectory. Each of these subdirectories contains a file main.yml that defines these objects. Not all of these directories need to be present. The most important ones, which most roles will include, are (see the Ansible documentation for a full list):
- tasks, holding the tasks that make up the role
- defaults that define default values for the variables referenced by these tasks
- vars containing additional variable definitions
- meta containing information on the role itself, see below
To initialize this structure, you can either create these directories manually, or you run
```shell
cd roles
ansible-galaxy init <role-name>
```
to let Ansible create a skeleton for you. The tool that we use here – ansible-galaxy – is actually part of an infrastructure that is used by the community to maintain a set of reusable roles hosted centrally. We will not use Ansible Galaxy here, but you might want to take a look at the Galaxy website to get an idea.
So suppose that we want to restructure the example playbook that we used in one of my last posts to bring up a Docker container as an Ansible test environment using roles. This playbook essentially has two parts. In the first part, we create the Docker container and add it to the inventory. In the second part, we create a default user within the container with a given SSH key.
Thus it makes sense to re-structure the playbook using two roles. The first role would be called createContainer, the second defaultUser. Our directory structure would then look as follows.
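Sketching only the directories that we actually use (the exact set of subdirectories may vary), we arrive at the following layout:

```
partVI
├── docker.yaml
└── roles
    ├── createContainer
    │   ├── defaults
    │   │   └── main.yml
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── vars
    │       └── main.yml
    └── defaultUser
        ├── defaults
        │   └── main.yml
        ├── meta
        │   └── main.yml
        └── tasks
            └── main.yml
```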
Here, docker.yaml is the main playbook that will use the roles, and we have skipped the vars directory for the second role as it is not needed. Let us go through these files one by one.
The first file, docker.yaml, is now much shorter as it only refers to the two roles.
```yaml
---
- name: Bring up a Docker container
  hosts: localhost
  roles:
    - createContainer

- name: Provision hosts
  hosts: docker_nodes
  become: yes
  roles:
    - defaultUser
```
Again, we define two plays in this playbook. However, there are no tasks defined in any of these plays. Instead, we reference roles. When running such a play, Ansible will go through the roles and for each role:
- Load the tasks defined in the main.yml file in the tasks-folder of the role and add them to the play
- Load the default variables defined in the main.yml file in the defaults-folder of the role
- Merge these variable definitions with those defined in the vars-folder, so that these variables will overwrite the defaults (see also the precedence rules for variables in the documentation)
Thus all tasks imported from roles will be run before any tasks defined in the playbook directly are executed.
Let us now take a look at the files for each of the roles. The tasks are simply a list of tasks that you would also put into a play directly, for instance
```yaml
---
- name: Run Docker container
  docker_container:
    auto_remove: yes
    detach: yes
    name: myTestNode
    image: node:latest
    network_mode: bridge
    state: started
  register: dockerData
```
Note that this is a perfectly valid YAML list, as you would expect. Similarly, the variables you define either as default or as override are simple dictionaries.
```yaml
---
# defaults file for defaultUser
userName: chr
userPrivateKeyFile: ~/.ssh/ansible-default-user-key_rsa
```
The only exception is the role meta data main.yml file. This is a specific YAML-based syntax that defines some attributes of the role itself. Most of the information within this file is meant to be used when you decide to publish your role on Galaxy and not needed if you work locally. There is one important exception, though – you can define dependencies between roles that will be evaluated and make sure that roles that are required for a role to execute are automatically pulled into the playbook when needed.
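As a sketch of how such a dependency declaration looks in a role's meta/main.yml (the dependency itself is made up for illustration, our two example roles are independent of each other):

```yaml
---
# meta/main.yml of a hypothetical role that requires another role;
# Ansible will pull the listed roles into the playbook automatically
dependencies:
  - role: createContainer
```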
The full example, including the definitions of these two roles and the related variables, can be found in my GitHub repository. To run the example, use the following steps.
```shell
# Clone my repository and cd into partVI
git clone https://github.com/christianb93/ansible-samples
cd ansible-samples/partVI
# Create two new SSH keys. The first key is used for the initial
# container setup; the public key is baked into the container
ssh-keygen -f ansible -b 2048 -t rsa -P ""
# The second key is the key for the default user that our
# playbook will create within the container
ssh-keygen -f default-user-key -b 2048 -t rsa -P ""
# Get the Dockerfile from my earlier post and build the container,
# using the new key we just created
cp ../partV/Dockerfile .
docker build -t node .
# Put location of user SSH key into vars/main.yml for
# the defaultUser role
mkdir roles/defaultUser/vars
echo "---" > roles/defaultUser/vars/main.yml
echo "userPrivateKeyFile: $(pwd)/default-user-key" >> roles/defaultUser/vars/main.yml
# Run playbook
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook docker.yaml
```
If everything worked, you can now log into your newly provisioned container with the new default user using
```shell
ssh -i default-user-key chr@172.17.0.2
```
where of course you need to replace the IP address by the IP address assigned to the Docker container (which the playbook will print for you). Note that we have demonstrated variable overriding in action: we have created a variable file for the role defaultUser and put the specific value, namely the location of the default user's SSH key, into it, overriding the default coming with the role. This ability to bundle all required variables as part of a role while still allowing a user to override them is what makes roles suitable for actual code reuse. We could also have overridden the variable in the playbook that uses the role, making the use of roles a bit similar to a function call in a procedural programming language.
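To sketch what such an override directly in the playbook could look like (the values here are made up for illustration), we can pass parameters to the role when referencing it:

```yaml
---
- name: Provision hosts
  hosts: docker_nodes
  become: yes
  roles:
    # parameters passed to a role override its defaults, much like
    # arguments passed to a function call
    - role: defaultUser
      userName: someOtherUser
      userPrivateKeyFile: /some/other/key
```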
This completes our post on roles. In the next few posts, we will now tackle some more complex projects – provisioning infrastructure in a cloud environment with Ansible.