Building a bitcoin controller for Kubernetes part VIII – creating a helm chart

Our bitcoin controller is more or less complete, and can be deployed into a Kubernetes cluster. However, the deployment process itself is still a bit cumbersome – we need to deploy secrets, RBAC profiles and the CRD, bring up a pod running the controller and make sure that we grab the right version of the involved images. In addition, doing all this manually with kubectl makes it difficult to reconstruct which version is running, and environment-specific configurations need to be applied by hand. Enter Helm….

Helm – an overview

Helm is (at the time of writing, this is supposed to change with the next major release) a combination of two different components. First, there is the helm binary itself, a command-line interface that typically runs locally on your workstation. Second, there is Tiller, a controller running in your Kubernetes cluster that carries out the actual deployments.

With Helm, packages are described by charts. A chart is essentially a directory with a certain layout and a defined set of files. Specifically, a Helm chart directory contains

  • A file called Chart.yaml that contains some basic information on the package, like its name, its version, information on its maintainers and a timestamp
  • A directory called templates. The files in this directory are Kubernetes manifests that need to be applied as part of the deployment, but there is a twist – these files are actually templates, so that parts of these files can be placeholders that are filled as part of the deployment process.
  • A file called values.yaml that contains the values to be used for the placeholders
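
To illustrate, a minimal chart directory could look like this – the chart name myapp and all of its contents are made up for this example:

myapp/
  Chart.yaml        # chart metadata
  values.yaml       # default values for the placeholders
  templates/
    deployment.yaml # a templated Kubernetes manifest
    service.yaml    # another templated manifest

and a matching Chart.yaml might contain something like

apiVersion: v1
name: myapp
version: 0.1.0
description: A hypothetical sample chart
maintainers:
  - name: Jane Doe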

To make this a bit more tangible, suppose that you have an application which consists of several pods. One of these pods runs a Docker image, say myproject/customer-service. Each time you build this project, you will most likely create a new Docker image and push it into some repository, and you will probably use a unique tag name to distinguish different builds from each other.

When you deploy this without Helm, you would have to put the tag number into the YAML manifest file that describes the deployment. With every new build, you would then have to update the manifest file as well. In addition, if this tag shows up in more than one place, you would have to do this several times.

With Helm, you would not put the actual tag into the deployment manifest, but use a placeholder instead; the actual tag then goes into the values.yaml file. These placeholders follow the Go template syntax. Thus the respective lines in your deployment would be something like

containers:
  - name: customer-service-ctr
    image: myproject/customer-service:{{.Values.customer_service_image_tag}}

and within the file values.yaml, you would provide a value for the tag

customer_service_image_tag: 0.7

When you want to install the package, you would run helm install . in the directory in which your package is located. This would trigger the process of joining the templates and the values to create actual Kubernetes manifest files and apply these files to your cluster.
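
Two related commands are worth knowing at this point: helm template renders the templates locally, so that you can inspect the resulting manifests without installing anything, and the --set flag lets you override individual values from values.yaml at install time. Sticking with our hypothetical image tag from above, this could look as follows.

# Render the manifests locally without deploying anything
$ helm template .
# Install the chart, overriding the tag defined in values.yaml
$ helm install . --set customer_service_image_tag=0.8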

Of course there is much more that you can customize in this way. As an example, let us take a look at the Apache Helm chart published by Bitnami. When you read the deployment manifest in the templates folder, you will find that almost every value is parametrized. Some of these parameters – those starting with .Values – are defined in values.yaml. Others, like those starting with .Release, refer to the current release and are auto-generated by Helm. Here, a release in the Helm terminology is simply the result of installing a chart. This allows you, for instance, to label pods with the release in which they were created.
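
A sketch of what such a label could look like inside a template – all surrounding fields of the deployment are omitted here:

metadata:
  labels:
    # .Release.Name is generated by Helm when the chart is installed
    release: {{ .Release.Name }}
    # .Chart.Name comes from the Chart.yaml of the chart itself
    chart: {{ .Chart.Name }}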

Helm repositories

So far, we have described Helm charts as directories. This is one way to implement Helm charts – those of you who have worked with J2EE and WAR files before might be tempted to call this the “exploded” view. However, you can also bundle all files that comprise a chart into a single archive. These archives can be published in a Helm repository.

Technically, a Helm repository is nothing but a URL that can serve two types of objects. First, there is a collection of archives (gzipped tar files with the suffix .tgz). These files contain the actual Helm charts – they are literally archived versions of an exploded chart directory, created using the command helm package. Second, there is an index, a file called index.yaml, that lists all archives contained in the repository and contains some metadata. An index file can be created using the command helm repo index.
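
Putting these two commands together, publishing a chart could look roughly like this – the directory name and the URL are placeholders for this example:

# Create an archive myapp-0.1.0.tgz from the chart directory
$ helm package myapp/
# Create or update index.yaml for all archives in the current directory
$ helm repo index . --url https://example.com/charts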

Helm maintains a list of known repositories, similar to a package manager like APT that maintains a list of known package sources. To add a new repository to this list, use the helm repo add command.

Let us again take a look at an example. GitLab maintains a Helm repository at the URL https://charts.gitlab.io. If you append index.yaml to this URL and submit a GET request, you will receive the index file. It contains several entries for different charts. Each chart has, among other fields, a name, a version and the URL of an archive file containing the actual chart. This archive can be on the same web server, or somewhere else. If you use Helm to install one of the charts from the repository, Helm will use the latest version that shows up in the index file, unless you specify a version explicitly.
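
Schematically, an entry in such an index file has the following shape – the chart name and all values below are illustrative and not taken from the actual GitLab index:

apiVersion: v1
entries:
  some-chart:
    - name: some-chart
      version: 1.2.3
      description: An example entry
      urls:
        - https://charts.example.com/some-chart-1.2.3.tgz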

Installing and running Helm

Let us go through an example to see how all this works. We assume that you have a running Kubernetes cluster and that kubectl is configured to point to this cluster.

First, let us install Helm. As explained earlier, this involves a local installation and the installation of Tiller in your cluster. The local installation depends, of course, on your operating system. Several options are available and are described in the official Helm documentation.
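
On Linux, for instance, a manual installation could look like this – the version number and the download URL pattern are only examples, so check the official release page for the current values:

# Download a release archive, unpack it and move the binary into the PATH
$ wget https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
$ tar xzf helm-v2.13.1-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/local/bin/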

Once the helm binary is in your path, let us install the Tiller daemon in your cluster. Before we do this, however, we need to think about service accounts. Tiller, being the component of Helm that is responsible for carrying out the actual deployment, requires the permissions to create, update and delete basically every Kubernetes resource. To make this possible, we need to define a service account that Tiller uses and map this account to a corresponding cluster role. For a test setup, we can simply use the pre-existing cluster-admin role.

$ kubectl create serviceaccount tiller -n kube-system
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin  --serviceaccount=kube-system:tiller

Once this has been done, we can now instruct Helm to set up its local configuration and to install Tiller in the cluster using this service account.

$ helm init --service-account tiller

If you now display all pods in your cluster, you will see that a new Tiller pod has been created in the namespace kube-system (this is why we created our service account in that namespace as well).
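
To verify this, you can list the pods in the kube-system namespace – the Tiller pod is created by a deployment called tiller-deploy, so its name carries a generated suffix:

$ kubectl get pods -n kube-system | grep tiller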

Now we can verify that this worked. You could, for instance, run helm list, which will list all releases in your cluster. This should not give you any output, as we have not yet installed anything. Let us change this – as an example, we will add the Bitnami repo referenced earlier and install a chart from it. As a starting point, we retrieve the list of repos that Helm is currently aware of.

$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts

These are the default repositories that Helm knows out of the box. Let us now add the Bitnami repo and then use the search function to see which charts it offers.

$ helm repo add bitnami https://charts.bitnami.com
$ helm search bitnami

This should give you a long list of charts that are now available for installation, among them the chart for the Apache HTTP server. Let us install this.

$ helm install bitnami/apache

Note how we refer to the chart – we use the combination of the repository name that we chose when doing our helm repo add and the name of the chart. You should now see a summary that lists the components that have been installed as part of this release – a pod, a load balancer service and a deployment. You can also track the status of the service using kubectl get svc. After a few seconds, the load balancer should have been brought up, and you should be able to curl it.
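
Assuming that your cluster can provision load balancers, this check could look as follows – the external IP is of course specific to your environment:

# Wait until the EXTERNAL-IP column of the service is populated
$ kubectl get svc -w
# Then talk to the Apache server, replacing the placeholder with the actual IP
$ curl http://<EXTERNAL-IP>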

You will also see that Helm has created a release name for you that you can use to refer to the release. You can now use this release name and the corresponding helm commands to delete or upgrade your release.
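
The basic life cycle commands all take this release name as an argument. Here is a quick sketch, with <release> as a placeholder for the generated name:

# Display all releases and their status
$ helm list
# Show details for one release
$ helm status <release>
# Upgrade a release, e.g. after the chart or its values have changed
$ helm upgrade <release> bitnami/apache
# Remove the release and the Kubernetes resources belonging to it
$ helm delete <release>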

A helm repository for our bitcoin controller

It is easy to set up a Helm repository for the bitcoin controller. As mentioned earlier, any HTTP server will do. I use GitHub for this purpose: the Helm repository is simply a GitHub repository served via raw.githubusercontent.com. It contains the archives that have been created with helm package and the index file which is the result of helm repo index. With that repository in place, it is very easy to install our controller – two lines will do.

$ helm repo add bitcoin-controller-repo https://raw.githubusercontent.com/christianb93/bitcoin-controller-helm/master
$ helm install bitcoin-controller-repo/bitcoin-controller
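
As a quick check that the installation has worked, you can list the releases again and watch the pods that the controller brings up – the exact pod names depend on the generated release name:

$ helm list
$ kubectl get pods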

An exploded view of this chart is available in a separate GitHub repository. This version of the Helm chart is automatically updated as part of my CI/CD pipeline (which I will describe in more detail in a later post), and when all tests succeed, the chart is packaged and copied into the Helm repository.

This completes our short introduction to Helm and, at the same time, is my last post in this short series on Kubernetes controllers. I highly recommend spending some time with the quite readable documentation at helm.sh to learn more about Helm templates, repositories and the life cycle of releases. If you want to learn more about controllers, my best advice is to read the source code of the controllers that are part of Kubernetes itself, which demonstrate all of the mechanisms discussed so far and much more. Happy hacking!
