For my projects, I often need a clean Linux box with a defined state which I can use to play around and, if I make a mistake, simply dump it again. Of course, I use a cloud environment for this purpose. However, I often find myself logging into one of these newly created machines to carry out installations manually. Time to learn how to automate this.
Ansible – the basics
Ansible is a platform designed to automate the provisioning and installation of virtual machines (or, more generally, any type of server, including physical machines and even containers). Ansible is agent-less, meaning that in order to manage a virtual machine, it does not require any special agent installed on that machine. Instead, it uses SSH to access the machine (in the Linux / Unix world; with Windows, you would use something like WinRM).
When you use Ansible, you run the Ansible engine on a dedicated machine, on which Ansible itself needs to be installed. This machine is called the control machine. From this control machine, you manage one or more remote machines, also called hosts. Ansible will connect to these nodes via SSH and execute the required commands there. Obviously, Ansible needs a list of all hosts that it needs to manage, which is called the inventory. Inventories can be static, i.e. an inventory is simply a file listing all nodes, or dynamic, implemented either as a script that supplies a JSON-formatted list of nodes and is invoked by Ansible, or as a plugin written in Python. This is especially useful when using Ansible with cloud environments where the list of nodes changes over time as nodes are being brought up or are shut down.
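To make the dynamic case a bit more concrete, here is a minimal sketch of what such an inventory script could look like. The group name servers matches the examples later in this post, while the host addresses are made up for illustration; a real script would of course query the cloud provider's API instead. When Ansible invokes the script with --list, it expects a JSON-formatted inventory on standard output.

```python
#!/usr/bin/env python3
# Minimal dynamic inventory sketch. A real script would fetch the
# host list from a cloud provider API instead of hardcoding it.
import json
import sys


def build_inventory():
    # Hypothetical host addresses, for illustration only
    hosts = ["10.0.0.1", "10.0.0.2"]
    return {
        # One group called "servers" containing our hosts
        "servers": {"hosts": hosts},
        # Per-host variables; empty in this sketch
        "_meta": {"hostvars": {h: {} for h in hosts}},
    }


if __name__ == "__main__":
    # Ansible calls the script with --list to retrieve the inventory
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
```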
The “thing that is actually executed” on a node is, at the end of the day, a Python script. These scripts are called Ansible modules. You might ask yourself how the module can be invoked on the node. The answer is simple: Ansible copies the module to the node, executes it and then removes it again. When you use Ansible, then, essentially, you execute a list of tasks for each node or group of nodes, and a task is a call to a module with specific arguments.
A sequence of tasks to be applied to a specific group of nodes is called a play, and the file that specifies a set of plays is called a playbook. Thus a playbook allows you to apply a defined sequence of module calls to a set of nodes in a repeatable fashion. Playbooks are just plain files in YAML format, and as such can be kept under version control and can be treated according to the principles of infrastructure-as-code.
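As a sketch of what this looks like in practice, here is a small playbook with one play. It targets a group called servers and uses the standard yum and service modules; the group name and package are assumptions chosen to match the ad-hoc examples later in this post.

```yaml
# A minimal playbook: one play applying two tasks to the group "servers"
- hosts: servers
  become: yes
  tasks:
    - name: Make sure Docker is installed
      yum:
        name: docker
        state: present
    - name: Make sure Docker is running
      service:
        name: docker
        state: started
```

You would run this with ansible-playbook instead of the plain ansible command used for ad-hoc tasks.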
Installation and first steps
Being agentless, Ansible is rather simple to install. In fact, Ansible is written in Python (with the source code of the community version being available on GitHub), and can be installed with
pip3 install --user ansible
Next, we need a host. For this post, I simply spun up a host on DigitalOcean. In my example, the host has been assigned the IP address 18.104.22.168. We need to put this IP address into an inventory file to make it visible to Ansible. A simple inventory file (in INI format; YAML is also supported) looks as follows.
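Reconstructed from the description above, using the IP address just mentioned and a group label servers, the file might look like this:

```ini
[servers]
18.104.22.168
```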
Here, servers is a label that we can use to group nodes if we need to manage nodes with different profiles (for instance webservers, application servers or database servers).
Of course, you would in general not create this file manually. We will look at dynamic inventories in a later post, but for the time being, you can use a version of the script that I used in an earlier post on DigitalOcean to automate the provisioning of droplets. This script brings up a machine and adds it to an inventory file hosts.ini.
With this inventory file in place, we can already use Ansible to execute ad-hoc commands on this node. Here is a simple example.
export ANSIBLE_HOST_KEY_CHECKING=False
ansible -i hosts.ini \
    servers \
    -u root \
    --private-key ~/.ssh/do_k8s \
    -m ping
Let us go through this example step by step. First, we set an environment variable which prevents the SSH host key check when first connecting to an unknown host. Next, we invoke the ansible command line client. The first parameter is the name of the inventory file, hosts.ini in our case. The second argument (servers) instructs Ansible to run our command only for those hosts that carry the label servers in our inventory file. This allows us to use different node profiles and apply different commands to them. Next, the -u switch asks Ansible to use the root user to connect to our hosts (which is the default SSH user on DigitalOcean). The next switch specifies the file in which the private key that Ansible will use to connect to the node is stored.
Finally, the switch -m specifies the module to use. For this example, we use the ping module which tries to establish a connection to the host (you can look at the source code of this module here).
Modules can also take arguments. A very versatile module is the command module (which is actually the default if no module is specified). This module accepts an argument which it will simply execute as a command on the node. For instance, the following code snippet executes uname -a on all nodes.
export ANSIBLE_HOST_KEY_CHECKING=False
ansible -i hosts.ini \
    servers \
    -u root \
    --private-key ~/.ssh/do_k8s \
    -m command \
    -a "uname -a"
Our examples so far have assumed that the user that Ansible uses to SSH into the machines has all the required privileges, for instance to run apt. On DigitalOcean, the standard user is root, so this simple approach works, but in other cloud environments, like EC2, the situation is different.
Suppose, for instance, that we have created an EC2 instance using the Amazon Linux AMI (which has a default user called ec2-user) and added its IP address to our hosts.ini file (of course, I have written a script to automate this). Let us also assume that the SSH key used is called ansibleTest and stored in the directory ~/.keys. Then, a ping would look as follows.
export ANSIBLE_HOST_KEY_CHECKING=False
ansible -i hosts.ini \
    servers \
    -u ec2-user \
    --private-key ~/.keys/ansibleTest.pem \
    -m ping
This actually works, but things change when we want to install a package like Docker. We will learn more about package installation and state in the next post, but naively, the command to actually install the package would be as follows.
export ANSIBLE_HOST_KEY_CHECKING=False
ansible -i hosts.ini \
    servers \
    -u ec2-user \
    --private-key ~/.keys/ansibleTest.pem \
    -m yum -a 'name=docker state=present'
Here, we use the yum module, which is able to leverage the YUM package manager used by Amazon Linux to install, upgrade, remove and manage packages. However, this will fail, and you will receive a message like ‘You need to be root to perform this command’. This happens because Ansible will SSH into the machine as ec2-user, and this user, by default, does not have the necessary privileges to run yum.
On the command line, you would of course use sudo to fix this. To instruct Ansible to do the same, we have to add the switch -b to our command. Ansible will then use sudo to execute the modules as root.
export ANSIBLE_HOST_KEY_CHECKING=False
ansible -i hosts.ini \
    servers \
    -b \
    -u ec2-user \
    --private-key ~/.keys/ansibleTest.pem \
    -m yum -a 'name=docker state=present'
An additional parameter, --become-user, can be used to control which user Ansible will use to run the operations. The default is root, but any other user can be used as well (assuming, of course, that the user has the required privileges for the respective command).
The examples above still look like something that could easily be done with a simple shell script, looping over all hosts in an inventory file and invoking an ssh command. The real power of Ansible, however, becomes apparent when we start to talk about modules in the next post in this series.