In the last few posts on the bitcoin blockchain, I have made extensive use of Docker containers to quickly set up test environments. However, it turned out to be a bit tiresome to run the containers, attach to them, execute commands and so forth to get into a defined state. Time to learn how this can be automated easily using our beloved Python – thanks to the wonderful Docker Python SDK.
This package uses the Docker REST API and offers an intuitive object model to represent containers, images, networks and so on. The API can be made available via a TCP port in the Docker configuration, but be very careful if you do this – everybody who has access to that port will have full control over your Docker engine. Fortunately, the Python package can also connect via the standard Unix domain socket on the file system, which is not exposed to the outside world.
As always, you need to install the package first using
$ pip install docker
Let us now go through some objects and methods in the API one by one. At the end of the post, I will show you a complete Python notebook that orchestrates the bitcoind containers that we have used in our tests in the bitcoin series.
The first object we have to discuss is the client. Essentially, a client encapsulates a connection to the Docker engine. Creating a client with the default configuration is very easy.
import docker
client = docker.client.from_env()
The client object has only very few methods, like client.version(), that return global status information. The more interesting parts of this object are the collections attached to it. Let us start with images, which can be accessed via
client.images. To retrieve specific instances, we can use client.images.list(), passing a name or a filter as an argument. Note that this returns a list of matching images. For instance, when we know that there is exactly one image called “alice:latest”, we can get a reference to it as follows.

alice = client.images.list("alice:latest")[0]
Other commands, like push, are the equivalents of the corresponding Docker CLI commands.
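As a small illustration of working with this collection, here is a sketch of a helper that collects the tags of all images matching a name – image_tags is my own name, not part of the SDK, and it simply builds on client.images.list():

```python
def image_tags(client, name=None):
    """Collect the tags of all images matching the given name.

    A sketch on top of client.images.list(); each Image object
    exposes its repository tags via the .tags attribute.
    """
    return [tag for image in client.images.list(name) for tag in image.tags]
```

With a connected client, image_tags(client, "alice") would then return something like a list containing "alice:latest".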
Let us now turn to the client.containers collection. Maybe the most important method that this collection offers is the run method. For instance, to run a container and capture its output, use
output = client.containers.run("alpine", "ls", auto_remove=True)
This will run a container based on the alpine image and execute ls inside it (the command we pass overrides the image's default command) and return the output as a sequence of bytes. The container will be removed after execution is complete.
If you additionally pass detach=True, the container will run in detached mode and the call will return immediately. In this case, the returned object is not the output, but a reference to the created container, which you can use later to work with the container. If, for instance, you wanted to start an instance of the alice container, you could do that using
alice = client.containers.run("alice:latest", auto_remove=True, detach=True)
You could then use the returned handle to inspect the logs (alice.logs()), to commit, to execute commands in the container similar to the docker exec command (alice.exec_run) and so forth.
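To make the exec_run part concrete, here is a minimal sketch of a helper wrapping it. It assumes the SDK's behaviour of returning an (exit_code, output) tuple, and run_in_container is my own name, not part of the SDK:

```python
def run_in_container(container, cmd):
    """Execute a command inside a running container, similar to
    docker exec, and return its decoded output.

    Raises if the command exits with a non-zero status.
    """
    exit_code, output = container.exec_run(cmd)
    if exit_code != 0:
        raise RuntimeError("command %r failed with exit code %d" % (cmd, exit_code))
    return output.decode("utf-8")
```

In the bitcoin setup below, this could be used to run, say, run_in_container(alice, "bitcoin-cli getblockcount").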
To demonstrate the possibilities that you have, let us look at an example. The following notebook will start two instances (alice, bob) of the bitcoin-alpine image that you hopefully have built when following my series on bitcoin. It then uses the collection client.networks to figure out to which IP address on the bridge network bob is connected. Then we attach to the alice container and run bitcoin-cli in this container to instruct the bitcoind instance there to connect to the instance running in container bob.
We then use the bitcoin-cli running inside the container alice to move the blockchain into a defined state – we mine a few blocks, import a known private key into the wallet, transfer a certain amount to a defined address and mine a few additional blocks to confirm the transfer. Here is the notebook.
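The IP lookup that the notebook performs can be sketched as follows. Here bridge_ip is my own helper name; the nested dictionary layout is the one that the container attributes (the data behind docker inspect) expose:

```python
def bridge_ip(container, network="bridge"):
    """Return the IP address of a container on the given Docker network,
    read from the container's attributes (the data behind docker inspect)."""
    container.reload()  # refresh the cached attributes from the daemon
    return container.attrs["NetworkSettings"]["Networks"][network]["IPAddress"]
```

With bob's IP address in hand, we can then run bitcoin-cli inside alice to point her bitcoind at that address.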
Make sure to stop all containers again when you are done – it is comparatively easy to produce a large number of stopped containers if you are not careful and use this for automated tests. I usually run the containers with the --rm flag on the command line or the auto_remove=True flag in Python to make sure that they are removed by the Docker engine automatically when they stop.
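If stopped containers have piled up anyway, a sweep like the following removes all exited containers – essentially the SDK counterpart of docker container prune; remove_stopped is my own name:

```python
def remove_stopped(client):
    """Remove all containers in state 'exited', similar to
    docker container prune."""
    for container in client.containers.list(all=True, filters={"status": "exited"}):
        container.remove()
```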
Of course nobody would use this just to run a few Docker containers with a defined network setup – there are much better tools like Docker Swarm or other container management solutions for that. However, the advantage of using the Python SDK is that we can interact with the containers, run commands, perform tests and so forth. All of this can be integrated nicely into automated integration tests using test fixtures, for instance those provided by pytest. A fixture could bring up the environment, could be defined on module level or test level depending on the context, and can add a finalizer to shut down the environment again after the tests have been executed. This allows for a very flexible test setup and offers a wide range of options for automated testing.
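As a sketch of how such a fixture could look – assuming the bitcoin-alpine image from the series and glossing over port mappings and readiness checks – consider:

```python
import pytest


@pytest.fixture(scope="module")
def alice():
    """Bring up a throwaway bitcoind container for the tests in this
    module and tear it down again afterwards (a sketch; image name
    and options are assumptions).
    """
    import docker  # requires the Docker SDK and a running daemon
    client = docker.from_env()
    container = client.containers.run("bitcoin-alpine", detach=True, auto_remove=True)
    yield container   # tests run here, receiving the container handle
    container.stop()  # auto_remove=True lets the engine clean it up
```

A test function taking alice as a parameter would then automatically get a fresh container, shared by all tests in the module thanks to the module scope.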
This post could only give a very brief introduction into the Python Docker SDK, and we did not discuss pytest and fixtures in any detail – but I invite you to browse the Docker SDK documentation and the pytest fixtures documentation, and I hope you enjoy playing with this!
Do you have any idea why, when I run client.info(), I get =>
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
I can only guess, but it looks like your Python library is not able to establish a connection to the Docker daemon. Are you able to run "docker info" or "docker ps" manually? And is your Linux user part of the docker group, so that it can connect to the Unix domain socket that Docker uses?