When we are playing with bitcoin transactions, we need some playground where making a mistake does not cost us real bitcoins and therefore money. In addition, we might want to play around with more than one bitcoin server to see how networking works and how messages are exchanged in the bitcoin peer-to-peer network.
There are several ways to do this. First, we could run more than one bitcoin server locally, using different data directories, configuration files and ports to separate them. However, there is a different option – we can use Docker containers. This makes it easy to spin up as many instances as needed while only having to put together the configuration once, supports networking, and allows us to easily clean up after we have tried something and restart our scenarios from a well-defined state. In this post, I will guide you through the steps needed to build and install the bitcoin core software inside a container. This post is not meant to be a general introduction into container technology and Docker – but do not worry, I will not assume that you are familiar with the concepts; we will start from scratch. However, if you want to understand the basics, there are many good posts out there, like this introduction or the Docker overview page which is part of the official documentation.
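To illustrate the first option: with a few command line switches, we could run two regtest nodes side by side on one machine – roughly as in the following sketch (the data directories and port numbers are arbitrary examples and are not used anywhere else in this post).

$ # Two local regtest nodes, separated by data directory and ports
$ mkdir -p ~/bitcoin-node1 ~/bitcoin-node2
$ bitcoind -regtest -datadir=$HOME/bitcoin-node1 -port=18445 -rpcport=18446 -daemon
$ bitcoind -regtest -datadir=$HOME/bitcoin-node2 -port=18447 -rpcport=18448 -daemon
$ # ask the second node to connect to the first one
$ bitcoin-cli -regtest -datadir=$HOME/bitcoin-node2 -rpcport=18448 addnode "127.0.0.1:18445" "add"

This works, but cleaning up and recreating such a setup quickly becomes tedious, which is why we go for the container based approach.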
First, we will need to make sure that we have the latest Docker CE installed. On Ubuntu 16.04, this requires the following steps. First, we need to remove any potentially existing docker versions.
$ sudo apt-get remove docker docker-engine docker.io
Next we will add the Docker repository to the repository list maintained by APT to be able to install from there. So we first use curl to get the public key from the docker page, feed this into the apt-key tool and then add the repository.
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
We can then install docker from there and make sure that it is started at boot time using
$ sudo apt-get update
$ sudo apt-get install docker-ce
$ sudo systemctl enable docker
You should also make sure that there is a docker group to which your user has been added – use the command groups to verify that this is the case and follow the instructions on the Docker page if not.
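If the group does not exist or your user is not yet a member, the usual post-installation steps look as follows (these are the standard commands from the Docker documentation; you will have to log out and back in again before the new group membership becomes visible).

$ groups
$ sudo groupadd docker
$ sudo usermod -aG docker $USER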
When everything is done, let us test our installation. We will ask docker to download and run the hello-world minimal image.
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:97ce6fa4b6cdc0790cda65fe7290b74cfebd9fa0c9b8c38e979330d547d22ce1
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Now we can start with the actual installation. At the end, of course, there will be a Dockerfile – but I will guide you through the individual steps one by one and we will create the final file iteratively. If you would only like to see the result, you can skip to the last paragraph of this post that explains how to retrieve this file from my GitHub repository and how to use it … but of course you are more than welcome to join me on the way there.
Actually, we will even do a bit more than we would need if we only wanted to have a running bitcoind, but this post is also meant as a refresher if you have built docker images before and as a hands-on introduction if you have never done this.
As a basis, we will use the Alpine Linux distribution. From there, we have several options to proceed. Of course, we could simply install a precompiled bitcoin package or binary, but that would not give us the flexibility that we need – it is, for instance, extremely useful to be able to add debugging output to the source code if something is not as expected. Besides, it is more fun to compile from scratch. So our target will be to pull the source code, compile it and install it in a container.
Based on the Alpine Linux container, we will first create several container images with separate docker files that are based on each other, which will make it easier to iterate and check things manually as we go. In the end, we will assemble everything into one docker file.
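Before we dive in, here is a short overview of the files that we will create along the way (the names match those used in the build commands further down in this post).

Dockerfile.dev        # development environment: compiler, build tools and libraries
Dockerfile.build      # pulls the bitcoin source code into the image
Dockerfile.install    # compiles and installs bitcoind and the command line tools
Dockerfile.run        # runtime image: configuration, exposed port and entry point
bitcoin.conf          # configuration file for the bitcoin daemon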
As a first step, we add a few libraries to the Alpine image that we need to be able to fetch and compile the bitcoin source code. We call the target image alpine-dev as it can be reused as a development environment for other purposes as well. Here is the docker file Dockerfile.dev
FROM alpine
RUN apk update && apk add git \
    make \
    file \
    autoconf \
    automake \
    build-base \
    libtool \
    db-c++ \
    db-dev \
    boost-system \
    boost-program_options \
    boost-filesystem \
    boost-dev \
    libressl-dev \
    libevent-dev
Save this file somewhere in a local directory (ideally, this directory should otherwise be empty – during the build, its entire content is transferred to the Docker daemon as the build context, so we better keep it small) and then build the image using
docker build -f Dockerfile.dev -t alpine-dev .
If you now check your local repository with docker images, you should see that the image has been added successfully (it is big, but do not worry, we will fix that later).
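If you only want to see the size of this particular image, you could, for instance, use one of the following commands (docker image inspect prints the size in bytes).

$ docker images alpine-dev
$ docker image inspect alpine-dev --format '{{.Size}}'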
The next image that we will be building is based on alpine-dev but contains the specific version of the bitcoin source code that we want, i.e. it pulls the code from the bitcoin repository on GitHub. So its docker file is very short.
FROM alpine-dev
RUN git clone https://github.com/bitcoin/bitcoin --branch v0.15.0 --single-branch
You can build this image with
docker build -f Dockerfile.build -t bitcoin-alpine-build .
Next we write the docker file that performs the actual build. This is again not difficult (but here it comes in handy that we can run a container from the image that we have just generated and play around with the configuration to arrive at the final set of options below).
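If you want to try this yourself, a simple way to experiment is to start a throw-away container from the image that we have just built and to run the build steps manually – just a sketch, the exact commands inside the container are of course up to you.

$ docker run --rm -it bitcoin-alpine-build /bin/sh
# then, inside the container:
cd bitcoin
./autogen.sh
./configure --help

Once we are happy with the set of options, we can capture them in the actual docker file.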
FROM bitcoin-alpine-build
RUN (cd bitcoin && ./autogen.sh && \
    ./configure --disable-tests \
        --disable-bench --disable-static \
        --without-gui --disable-zmq \
        --with-incompatible-bdb \
        CFLAGS='-w' CXXFLAGS='-w' && \
    make -j 4 && \
    strip src/bitcoind && \
    strip src/bitcoin-cli && \
    strip src/bitcoin-tx && \
    make install )
Again, we build the image and place it into our local repository using
docker build -f Dockerfile.install -t bitcoin-alpine-bin .
Let us now test our installation. We first bring up an instance of our new container.
docker run -it bitcoin-alpine-bin
Within the container, we can now run the bitcoin daemon.
bitcoind -server=1 -rest=1 -regtest -txindex=1 -daemon
Once the daemon has started, we can verify that it is up and running and ready to accept commands. We can for instance – from within the container – run bitcoin-cli -regtest getinfo and should see some status information.
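Still inside the container, we can also play a bit with the regtest network – for instance mine a few blocks and check the wallet balance. Note that the generate call used here works with the 0.15 release that we are building; in newer versions of bitcoin core it has been replaced by generatetoaddress.

bitcoin-cli -regtest generate 101
bitcoin-cli -regtest getbalance

As a coinbase only matures after 100 further blocks, the balance should now show the reward for the first block, i.e. 50 (regtest) bitcoin.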
So let us now move on to write the docker file that will create the runtime environment for our daemon. There are still a couple of things we need to think about. First, we will have to communicate with our bitcoin server using JSON-RPC, so we need to expose this port towards the host.
Second, we need a configuration file. We could generate this on the fly, but the easiest approach is to place it in the build context and copy it into the image when we build it.
Finally, we have to think about the RPC authorization mechanism. Typically, the bitcoin server writes a cookie file into the configuration directory which is then picked up by the client, but this does not work if the server is running inside the container and the client locally on our host system. Probably the safest way would be to use the option to provide a hashed password to the server and keep the actual password secure. We will use a different approach which is of course not recommended for production use as it could allow the world access to your wallet – we specify the username and password in the configuration file. Our full configuration file is
regtest=1
server=1
rpcbind=0.0.0.0:18332
rpcuser=user
rpcpassword=password
rpcallowip=0.0.0.0/0
rpcport=18332
Note that this opens the RPC port to the world and is probably not secure, but for use in a test setup this will do. Our docker file for the last stage is now
FROM bitcoin-alpine-bin
#
# Copy the bitcoin.conf file from
# the build context into the container
#
COPY bitcoin.conf /bitcoin.conf
#
# Expose the port for the RPC interface
#
EXPOSE 18332/tcp
#
# Start the bitcoin server
#
ENTRYPOINT ["/usr/local/bin/bitcoind"]
CMD ["-conf=/bitcoin.conf", "-regtest", "-rest=1", "-server=1", "-printtoconsole", "-txindex=1"]
We can now test our client. First, we build and run the container.
$ docker build -f Dockerfile.run -t bitcoin-alpine-run .
$ docker run --rm -it -p 18332:18332 bitcoin-alpine-run
Then, in a second terminal, let us connect. We assume that you have an installation of the bitcoin-cli client on the host machine as well. Run
$ bitcoin-cli -regtest -rpcuser=user -rpcpassword=password getinfo
You should now see the usual output of the getinfo command similar to the test that we have done before directly in the container.
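Since the interface is plain JSON-RPC over HTTP, we can also talk to the server without the bitcoin-cli client, for instance using curl – getblockchaininfo below is just one example of an RPC method that we could call.

$ curl --user user:password \
    --data-binary '{"jsonrpc": "1.0", "id": "test", "method": "getblockchaininfo", "params": []}' \
    -H 'content-type: text/plain;' http://localhost:18332/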
That is nice, but there is a serious issue. If you look at the image that we have just created using docker images, you will most likely be shocked – our image is more than 800 MB in size. This is a problem. The reason is that our image contains everything that we have left behind during the build process – the development environment, header files, the source code, object files and so on. It would be much nicer if we could build a clean image that only contains the executables and libraries needed at runtime.
Fortunately, current versions of Docker offer a feature called multi-stage build which is exactly what we need. Let us take a look at the following docker file to explain this.
FROM bitcoin-alpine-bin as build
RUN echo "In build stage"

FROM alpine
#
# Copy the binaries from the build to our new container
#
COPY --from=build /usr/local/bin/bitcoind /usr/local/bin
#
# Install all dependencies
#
RUN apk update && apk add boost boost-filesystem \
    boost-program_options \
    boost-system boost-thread busybox db-c++ \
    libevent libgcc libressl2.6-libcrypto \
    libstdc++ musl
#
# Copy the bitcoin.conf file from
# the build context into the container
#
COPY bitcoin.conf /bitcoin.conf
#
# Expose the port for the RPC interface
#
EXPOSE 18332/tcp
#
# Start the bitcoin server
#
ENTRYPOINT ["/usr/local/bin/bitcoind"]
CMD ["-conf=/bitcoin.conf", "-regtest", "-rest=1", "-server=1", "-printtoconsole", "-txindex=1"]
We see that this docker file has two FROM statements. When it is executed, it starts with the image specified by the first FROM statement. In our case, this is the bin image that already contains the executable binaries. In our simple example, we do nothing in this stage except print a message. Then, with the second FROM statement, Docker will start a new image based on the raw Alpine image from scratch. However – and this is the magic – we still have access to the files from the first image, and we can copy them to our new image using the --from specifier. This is what we do here – we copy the executable into our new container. Then we add only the runtime libraries that we really need and the configuration file.
This gives us a nice and small container – when I checked it was below 30MB!
In further posts, I will assume that you have built this container and made it available in your local Docker repository as bitcoin-alpine. If you have not followed all the steps of this post, you can simply pull a docker file that merges all the files explained above into one file and use it to build this container as follows.
$ git clone https://github.com/christianb93/bitcoin
$ cd bitcoin/docker
$ docker build --rm -f Dockerfile -t bitcoin-alpine .
In the version of the dockerfile on Github, I have also added a startup script that overwrites the default credentials if the environment variables BC_RPC_USER and BC_RPC_PASSWORD are set – this makes it easier to use custom credentials, for instance in a Kubernetes environment.
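Running the container with custom credentials could then look roughly like this (myuser and mysecret are of course only placeholders).

$ docker run -it -p 18332:18332 \
    -e BC_RPC_USER=myuser \
    -e BC_RPC_PASSWORD=mysecret \
    bitcoin-alpine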
That was it for today. Starting with the next post, we will use this test installation to see how we can create and publish a full transaction in the bitcoin network.