Why you need statistics to understand neuronal networks

When I first tried to learn about neuronal networks, I did what most of us would probably do – I started to look for tutorials, blogs etc. on the web and was surprised by the vast amount of resources that I found. Almost every blog or webpage about neuronal networks has a section on training a simple neuronal network, maybe on the MNIST data set, using a framework like TensorFlow, Theano or MXNET. When you follow such a tutorial, a network is presented as a collection of units and weights. You see how the output of the network is calculated and then an error function – sometimes least squares, sometimes something else – is presented. Often, a regularization term is applied, and then you are told that the automatic gradient calculation features of the framework will run the gradient descent algorithm for you – you just have to decide on an optimizer, run the network and enjoy the results.

Sooner or later, however, you will probably start to ask a few questions. Why that particular choice of loss function? Where does the regularization term come from? What is a good initial value for the weights, and why? Where does the sigmoid function come from? And many, many other questions…

If you then decide to dig deeper, using one of the many excellent textbooks or even try to read some of the original research papers (and some are actually quite readable), you will very soon be confronted with terms like entropy, maximum likelihood, posterior distribution, Gaussian mixtures and so on, and you will realize that the mathematics of neuronal networks has a strong overlap with mathematical statistics. But why? Why is that a good language to discuss neuronal networks, and why should you take the time to refresh your statistics knowledge if you really want to understand neuronal networks? In this post, I will try to argue that statistical inference comes up very naturally when you try to study neuronal networks.

Many neuronal networks are designed to excel at classification tasks. As an example, suppose you wanted to design and train a neuronal network that, given data about an animal, classifies the animal as either a bird or some other animal (called a “non-bird” for convenience). So our starting point is a set modelling all possible objects that could be presented to the network. How exactly we model this set is not so important; what matters more is that in general, the network will not have access to all the data about the animal, but only to certain attributes of elements in the set, called features. So there could be a feature which we call X_1 and which is defined as

X_1 = \text{the animal can fly}

taking values in \{0,1\}. Another data point the network could get is

X_2 = \text{length of animal in cm}

taking values in the reals, and so forth. More generally, we assume that on the set of all possible objects, we have certain functions X_i taking values in, say, the real numbers. Based on these numbers, the network will then try to make a decision whether a given animal is a bird or not. Thus we do not have to deal directly with our space of objects, but can use the functions X_i as primary objects.

If the network had a chance to look at every possible animal, this would be easy, even though it would cost a lot of memory – it could simply remember all possible combinations of features and, for each combination, store the correct answer. In reality, however, this does not work. Instead, we have access to a small subset of the data – a sample – for which we can evaluate the X_i. Based on this subset, we then have to derive a model which gives the right answer in as many cases as possible. Thus we try to make a statement about the full space of things that could be presented to our network for classification, based on a small sample.

This is where probabilities come into play very naturally. We need to assume that our sample has been chosen randomly, but we still need to make assertions about the full set. This is exactly what inferential statistics does. The fact that our sample is chosen randomly turns our X_i into random variables. Similarly, the variable

Y = \text{is a bird}

taking values in \{0,1\} is a random variable, and we try to gain information on the distribution of Y across the full population based on its values on a given set of labelled samples, i.e. a set of samples where Y is known. Thus Y would represent the labels or targets in the language of neuronal networks. Applying the methods of statistical inference to this situation would typically start by choosing a statistical model and then using estimators or hypothesis testing to make deductions.

Apart from the fact that we have to derive information on the full population based on a sample, there is another reason why probabilities appear naturally in the theory of machine learning. In many cases, the available input – being a reduction of the full set of data – is not sufficient to classify the sample with full certainty. To see this, let us go back to our examples. How would you derive the property “bird” from the given data “can fly” and “length”? Not all animals that can fly are birds – and not all birds can fly. So we have to try to distinguish, for instance, a butterfly from a hummingbird based on the length. The smallest hummingbird – the bee hummingbird – is about 5 cm in length. The largest known butterfly – Queen Alexandra’s birdwing – can be as long as 8 cm (both figures taken from Wikipedia). Thus our data is not sufficient to clearly distinguish butterflies and birds in all cases!

However, very small birds and very large butterflies have one thing in common – they are rare. So chances are that a given animal that can fly and is larger than 5 cm is actually a bird (yes, I know, there are bats…). In other words, if again Y denotes the variable which is 1 on birds and 0 on all other animals, we can in general not hope that Y is a function of the X_i, but we can hope that the probability P(Y=1) of being a bird is a function of the values of the X_i. In other words, using the language of conditional probabilities,

P(Y=1 | X = x) = f(x)

with some unknown function f. In a Bayesian interpretation of probability, the certainty with which we can say “this animal is a bird” is a function of the values x_i of the observable variables X_i.

With these considerations, we now arrive at the following mathematical model for what a classification algorithm is about. We are given a probability space (\Omega, P) with a vector-valued random variable X. The attributes of a sample are described by the feature vector X, taking values in some subset of m-dimensional Euclidean space, where m is the number of different features. In our example, m=2, as we try to classify animals based on two properties. The result of the classification is described by a random variable Y taking – for the simple case of a binary classification problem – values in \{0,1\}. We then assume that

P(Y =1 | X=x) = f(x;w_0)

where f(\cdot;w) is a function parametrized by some parameter w that we call the weights of the model. The actual value w_0 of w is unknown. Based on a sample for X and Y, we then try to fit the model, i.e. we try to find a value for w such that f(\cdot; w) models the actual conditional distribution of Y as well as possible. Once the fitting phase is completed, we can then use the model to derive predictions about objects which are not in our initial sample set.
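
To make this a bit more concrete, here is a small sketch in Python of how such a fit could look for a very simple choice of f – a logistic model f(x;w) = \sigma(w^T x + b) – trained by gradient descent on the negative log-likelihood. The sample data is of course made up and only serves as an illustration of the two features from our example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# made up sample: two features per row (can fly, length in cm), labels: is a bird
X = np.array([[1.0, 10.0], [1.0, 6.0], [0.0, 70.0], [1.0, 25.0], [0.0, 4.0]])
Y = np.array([1.0, 0.0, 0.0, 1.0, 0.0])

w = np.zeros(2)                            # weights, one per feature
b = 0.0                                    # bias
eta = 0.001                                # learning rate

for epoch in range(20000):
    p = sigmoid(X @ w + b)                 # model estimate for P(Y=1 | X=x)
    grad_w = X.T @ (p - Y) / len(Y)        # gradient of the average negative log-likelihood
    grad_b = np.mean(p - Y)
    w = w - eta * grad_w
    b = b - eta * grad_b

print(w, b)
print(sigmoid(X @ w + b))                  # fitted probabilities for the sample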

This model sounds a bit abstract, but many feed-forward neuronal networks can be described with this or similar models. And we can now apply the full machinery of mathematical statistics – we can calculate cross entropies and maximum likelihood estimates, we can analyse convergence and variance, we can apply the framework of Bayesian statistics and Monte Carlo methods. This is the reason why statistics is so essential when it comes to describing and analyzing neuronal networks. So on the next rainy Sunday afternoon, you might want to grab a steaming hot cup of coffee, head towards your armchair and spend some time with one of the many good expositions on this topic, like chapter IV in MacKay’s book on Machine Learning, or Bishop’s “Pattern recognition and machine learning” or chapter 3 of the deep learning book by Goodfellow, Bengio and Courville.

Mining bitcoins with Python

In this post, we will learn to build a very simple miner in Python. Of course this miner will be comparatively slow and limited and only be useful in our test network, but it will hopefully help to explain the principles behind mining.

When we want to mine a block, we first need some information on the current state of the blockchain, like the hash of the current last block, the current value of the difficulty or the coinbase value, i.e. the number of BTC that we earn when mining the block. When we are done building the block, we need a way to submit it to the bitcoin network so that it is accepted by all nodes and permanently added to the chain.

If we were members of a mining pool, there would be a mining server that would provide us with the required information. As we want to build a solo mining script, we need to communicate with bitcoin core to get that information and to submit our final block. Fortunately, the RPC interface of bitcoin core offers methods to facilitate that communication that were introduced with BIP22.

First, there is the method getblocktemplate. It will deliver all the information that we need to build a valid block and even propose transactions that we should include in the block. These transactions are taken from the so-called mempool, which is the collection of transactions that the bitcoin server knows about but that have not yet been added to a block (see miner.cpp/BlockAssembler::addPackageTxs in the bitcoin core source code for details on how the selection process works).
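
To get a feeling for what this call returns, here is a small sketch of how we could invoke it via JSON-RPC from Python. The host, port and the credentials user/password match the regtest docker setup that we will use later in this post – adapt them to your environment. Depending on the version, bitcoin core may also require {"rules": ["segwit"]} as an argument to getblocktemplate.

import requests

def rpc(method, params=None):
    # minimal JSON-RPC helper for a bitcoin core server listening on port 18332
    payload = {"jsonrpc": "1.0", "id": "miner", "method": method, "params": params or []}
    r = requests.post("http://127.0.0.1:18332", json=payload, auth=("user", "password"))
    return r.json()["result"]

template = rpc("getblocktemplate")
print(template["previousblockhash"])    # hash of the current chain tip
print(template["coinbasevalue"])        # value of the coinbase in satoshi
print(len(template["transactions"]))    # transactions proposed from the mempool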

Once the client is done building the block, it can submit the final block using the method submitblock. This method will run a couple of checks on the block, for instance that it can be correctly decoded, that the first transaction – and only the first – is a coinbase transaction, that it is not a duplicate and that the proof-of-work is valid. If all checks pass, it will add the block to the local copy of the blockchain. If a check fails, it will return a corresponding error code to the caller.

With that understanding, let us now write down the processing logic for our simple miner, assuming that we only want to mine one additional block. First, we will use getblocktemplate to get the basic parameters that we need and the transactions that we can include. Then we will create a new block and a new coinbase transaction. We then add the coinbase transaction, followed by all the other transactions, to the block.

Once we have that block, we enter a loop. Within the loop, we calculate the hash of the block and compare this against the target. If we can meet the target, we are done and submit the newly created block using submitblock. Otherwise, we increment the nonce and try again.
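
Expressed in heavily simplified Python, this logic could look roughly as follows. The helper functions create_coinbase_tx, create_block, block_header_hash and serialize_block are placeholders for this sketch – the actual miner that we will run below uses the utility functions from my btc package – and rpc is the JSON-RPC helper sketched above.

template = rpc("getblocktemplate")
coinbase = create_coinbase_tx(template["coinbasevalue"], template["height"])
block = create_block(previous=template["previousblockhash"],
                     bits=template["bits"],
                     transactions=[coinbase] + template["transactions"])

target = int(template["target"], 16)
nonce = 0
while True:
    block.nonce = nonce
    h = block_header_hash(block)                # double SHA256 of the 80 byte block header
    if int.from_bytes(h, "little") <= target:   # puzzle solved - submit the block
        print(rpc("submitblock", [serialize_block(block)]))
        break
    nonce += 1                                  # otherwise try the next nonce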

We have discussed most of these steps in some detail in the previous posts, with one exception – coinbase transactions. A coinbase transaction is, by definition, a transaction which generates bitcoin because it has valid outputs, but does not spend any UTXOs. Technically, a coinbase transaction is a transaction which (see CTransaction::IsCoinBase())

  • has exactly one transaction input
  • the previous transaction ID in this input is zero (i.e. 32 zero bytes, displayed as a hexadecimal string consisting of 32 times “00”)
  • the index of the previous output is -1 (encoded as  0xFFFFFFFF)

As it does not refer to any previous output, the signature script of a coinbase transaction will never be executed. It can therefore essentially contain an arbitrary value. The only restriction defined by the protocol is described in BIP34, which defines that the first bytes of the signature script should be a valid script that consists of the height of the new block as pushed data. The remainder of the coinbase signature script (which is limited to 100 bytes in total) can be used by the miner at will.
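
As a simplified illustration, here is how such a coinbase signature script could be assembled in Python. The helper and the tag are made up for this example, and the BIP34 height push ignores the corner case where an additional zero byte is required.

def coinbase_script(height, extra_nonce, tag=b"/my little miner/"):
    # BIP34: push the block height as the first element of the script
    height_bytes = height.to_bytes((height.bit_length() + 7) // 8, "little")
    script = bytes([len(height_bytes)]) + height_bytes
    # push an eight byte extra nonce, followed by arbitrary data, e.g. a pool name
    extra = extra_nonce.to_bytes(8, "little")
    script = script + bytes([len(extra)]) + extra + tag
    assert len(script) <= 100          # the protocol limits the script to 100 bytes
    return script

print(coinbase_script(400020, 0).hex())   # starts with 03941a06, the height push that we will see in the example below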

Many miners use this freedom to solve a problem with the length of the nonce in the block header. Here, the nonce is a 32 bit value, which implies that a miner can try 2^32, i.e. roughly 4 billion, different combinations. Modern mining hardware based on ASICs can search that range within fractions of a second, and the current difficulty is so high that it is rather likely that no solution can be found by just changing the nonce. So we have to change other fields in the block header to try out more hashes.

What are good candidates for this? We could of course use the block creation time, but the bitcoin server validates this field and will reject the block if it deviates significantly from the current time. Instead, miners typically use the coinbase signature script as an extra nonce that they modify to increase the range of possible hashes. The fields after the height are therefore often combinations of an extra nonce and additional data, like the name of the mining pool. Incrementing the extra nonce is a bit less efficient than incrementing the nonce in the block header, as it changes the hash of the coinbase transaction and therefore forces us to recalculate the Merkle root; this is why it is most often implemented as an outer loop, with the inner loop being the increments of the nonce in the block header.

To illustrate this, let us look at an example. The coinbase signature script of the coinbase transaction in block #400020 is:

03941a060004c75ccf5604a070c007089dcd1424000202660a636b706f6f6c102f426974667572792f4249503130302f

If we decode this, we find that the first part is in fact a valid script and corresponds to the following sequence of instructions (keep in mind that all integers are encoded as little endian within the script):

OP_PUSH 400020
OP_0
OP_PUSH 1456430279
OP_PUSH 130052256
OP_PUSH 7350439741450669469

As specified by BIP34, the first pushed data is the height of that block as expected. After the OP_0, we see another push instruction, pushing the Unix system time corresponding to Thu Feb 25 20:57:59 2016, which is the creation time of the block.

The next pushed data is a bit less obvious. After looking at the source code of the used mining software, I assume that this is the nanoseconds within the second returned by the Unix system call clock_gettime. This is then followed by an eight byte integer (7350439741450669469) which is the extra nonce.

The next part of the signature script is not actually a valid script, but a string – a newline character (0xa), followed by the string “ckpool”. This is a fixed sequence of bytes that indicates the mining software used.

Finally, there is one last push operation which pushes the string “/Bitfury/BIP100/”, which tells us that the block has been mined by the Bitfury pool and that this pool supports BIP100.

Enough theory – let us put this to work! Using the utility functions in my btc Python package, it is now not difficult to write a short program that performs the actual mining.

However, we need some preparations to set up our test environment that are related to our plan to use the getblocktemplate RPC call. This call performs a few validations that can be a bit tricky in a test environment. First, it will verify that the server is connected, i.e. we need at least one peer. So we need to start two docker containers, let us call them alice and bob again, and find out the IP address of the container bob in the Docker bridge network. The following three statements should do this for you.

$ docker run --rm -d -p 18332:18332 --name="alice" alice
$ docker run --rm -d  --name="bob" bitcoin-alpine
$ docker network inspect bridge | grep  -A 4  "bob" - | grep "IPv4" -

Assuming that this gives you 172.17.0.3 (replace this with whatever the result is in your case), we can now again use the addnode RPC call to connect the two nodes.

$ bitcoin-cli --rpcuser=user --rpcpassword=password -regtest addnode "172.17.0.3" add

The next validation that the bitcoin server performs when we ask for a block template is that the local copy of the blockchain is up to date. It does this by verifying that the time stamp of the last block in the chain is less than 24 hours in the past. As it is likely that a bit more time has passed since you created the alice container, we therefore need to use the mining functionality built into bitcoin core to create at least one new block.

$ bitcoin-cli --rpcuser=user --rpcpassword=password -regtest generate 1

Now we are ready to run our test. The next few lines will download the code from GitHub and create one transaction that will then be included in the block. We will create this transaction using the script SendMoney.py that we have already used in an earlier post.

$ git clone https://github.com/christianb93/bitcoin.git
$ cd bitcoin
$ python SendMoney.py
$ python Miner.py

You should then see an output telling you that a block with two transactions (one coinbase transaction and the transaction that we have generated) was mined, along with the previous height of the blockchain and the new height which should be higher by one.

Let us now verify that everything works. First, let us get the hash of the current last block.

$ bitcoin-cli --rpcuser=user --rpcpassword=password -regtest getchaintips
[
  {
    "height": 109,
    "hash": "07849d5c8ddcdc609d7acc3090bc48bbe4403c36008d46b5a291185334efe1bf",
    "branchlen": 0,
    "status": "active"
  }
]

Take the value from the hash field in the output and feed it into a call to getblock:

$ bitcoin-cli --rpcuser=user --rpcpassword=password -regtest getblock "07849d5c8ddcdc609d7acc3090bc48bbe4403c36008d46b5a291185334efe1bf"
{
  "hash": "07849d5c8ddcdc609d7acc3090bc48bbe4403c36008d46b5a291185334efe1bf",
  "confirmations": 1,
  "strippedsize": 367,
  "size": 367,
  "weight": 1468,
  "height": 109,
  "version": 536870912,
  "versionHex": "20000000",
  "merkleroot": "8769987458af75adc80d6792848e5cd5cb8178a9584157bb4be79b77cda95909",
  "tx": [
    "7ba3186ca8e5aae750614a3211422423a0a217f5999d0de6dfeb8968aeb01900", 
    "1094360149626a421b4ddbc7eb58a815762700316c36407770b96ffc36a7735b"
  ],
  "time": 1522952768,
  "mediantime": 1521904060,
  "nonce": 1,
  "bits": "207fffff",
  "difficulty": 4.656542373906925e-10,
  "chainwork": "00000000000000000000000000000000000000000000000000000000000000dc",
  "previousblockhash": "2277e40bf4c0ebde3fb5f38fcbd384e39df3471ad192cc46f66ea8d8d96327e7"
}

The second entry in the list tx should now match the ID of the newly created transaction which was displayed when executing the SendMoney.py script. This proves that our new transaction has been included in the block.

Congratulations, you have just mined 50 BTC – unfortunately only in your local installation, not in the production network. Of course, real miners work differently, using mining pools to split the work between many different nodes and modern ASICs to achieve the hash rates that you need to be successful in the production network. But at least we have built a simple miner more or less from scratch, relying only on the content of this and the previous posts in this series and without using any of the many Python bitcoin libraries that are out there.

This concludes my current series on the bitcoin blockchain –  I hope you enjoyed the posts and had a bit of fun. If you want to learn more, here are a few excellent sources of information that I recommend.

  1. Of course the ultimate source of information is always the bitcoin core source code itself, which we have already consulted several times
  2. The Bitcoin wiki contains many excellent pages on most of what we have discussed
  3. There is of course the original bitcoin paper which you should now be able to read and understand
  4. and of course there are tons of good books out there; I personally liked Mastering Bitcoin by A. Antonopoulos, which is also available online


The Metropolis-Hastings algorithm

In this post, we will investigate the Metropolis-Hastings algorithm, which is still one of the most popular algorithms in the field of Markov chain Monte Carlo methods, even though its first appearance (see [1]) dates back to 1953, more than 60 years ago. It does for instance appear on the CiSE top ten list of the most important algorithms of the 20th century (I got this and the link from this post on WordPress).

Before we get into the algorithm, let us once more state the problem that the algorithm is trying to solve. Suppose you are given a probability distribution \pi on some state space X (most often this will be a real Euclidean space on which you can do floating point arithmetic). You might want to imagine the state space as describing possible states of a physical system, like spin configurations in a ferromagnetic medium, similar to what we looked at in my post on the Ising model. The distribution \pi then describes the probability for the system to be in a specific state. You then have some quantity, given as a function f on the state space. Theoretically, this is a quantity that you can calculate for each individual state. In most applications, however, you will never be able to observe an individual state. Instead, you will observe an average, weighted by the probability of occurrence. In other words, you observe the expectation value

\langle f \rangle = \int_X f d\pi

of the quantity f. Thus to make a prediction that can be verified or falsified by an observation, you will have to calculate integrals of this type.

Now, in practice, this can be very hard. One issue is that in order to naively calculate the integral, you would have to traverse the entire state space, which is not feasible for most realistic problems as this tends to be a very high dimensional space. Closely related to this is a second problem. Remember, for instance, that a typical distribution like the Boltzmann distribution is given by

\pi(x) = \frac{1}{Z} e^{-\beta E(x)}

The term in the numerator is comparatively easy to calculate. However, the term in the denominator is the partition function, and is itself an integral over the state space! This makes even the calculation of \pi(x) for a single point in the state space intractable.

But there is hope – even though calculating the value of \pi for a single point might be impossible, in a distribution like this, calculating ratios of probabilities is easy, as the partition function cancels out and we are left with the exponential of an energy difference! The Metropolis-Hastings algorithm leverages this and also solves our state space problem by using a Markov chain to approximate the integral. So the idea is to build a Markov chain X_t that converges and has \pi as an invariant distribution, so that we can approximate the integral by

\langle f \rangle = \int_X f d\pi \approx \frac{1}{N} \sum_{t=1}^N f(X_t)

for large values of N.

But how do we construct a Markov chain that converges to a given distribution? The Metropolis-Hastings approach to solving this problem works as follows.

The first thing that we do is to choose a proposal density q on our state space X, i.e. a measurable function

q \colon X \times X \rightarrow [0,\infty)

such that for each x, \int q(x,y) dy = 1.

Then q defines a Markov chain, where the probability to transition into a measurable set A, being at a point x, is given by the integral

Q(x,A) = \int_{A} q(x,y) dy

Of course this is not yet the Markov chain that we want – it has nothing to do with \pi, so there is no reason it should converge to \pi. To fix this, we now adjust the kernel to incorporate the behaviour of \pi. For that purpose, define

\alpha(x,y) =  \begin{cases} \min \{ 1, \frac{\pi(y)q(y,x)}{\pi(x)q(x,y)} \} & \text{if } \pi(x) q(x,y) > 0 \\ 1 & \text{if } \pi(x) q(x,y) = 0  \end{cases}

This number is called the acceptance probability, and, as promised, it only contains ratios of probabilities, so that factors like the partition function cancel and do not have to be computed.

The Metropolis-Hastings algorithm now proceeds as follows. We start with some arbitrary point x_0. When the chain has arrived at x_n, we first draw a candidate y for the next location from the proposal distribution q(x_n, \cdot). We now calculate \alpha according to the formula above. We then accept the proposal with probability \alpha, i.e. we draw a random sample U from a uniform distribution and accept if U \leq \alpha. If the proposal is accepted, we set x_{n+1} = y, otherwise we set x_{n+1} = x_n, i.e. we stay where we are.

Clearly, the x_n are samples from a Markov chain, as the position at step n only depends on the position at step n-1. But it still appears to be a bit mysterious why this should work. To shed light on this, let us consider a case where the expressions above simplify a bit. So let us assume that the proposal density q is symmetric, i.e. that

q(x,y) = q(y, x)

This is the original Metropolis algorithm as proposed in [1]. If we also assume that \pi and q are nowhere zero, the acceptance probability simplifies to

\alpha(x,y) =  \min \{ 1, \frac{\pi(y)}{\pi(x)} \}

Thus, if \pi(y) \geq \pi(x), we accept the proposal with probability one. This is very similar to a random search for a global maximum – we start at some point x, choose a candidate for a point with a higher value of \pi at random and proceed to this point. The major difference is that we also accept candidates with \pi(y) < \pi(x) with a non-zero probability. This makes it much easier for the algorithm to escape from a local maximum. Intuitively, the algorithm will still try to spend more time in regions with large values of \pi, as we would expect from an attempt to sample from the distribution \pi.

[Figure: SymmetricMetropolisHastings – the acceptance step for a symmetric proposal]

The image above illustrates this procedure. The red graph displays the distribution \pi. If our algorithm is currently at step x_n, the purpose is to move “up-hill”, i.e. to the left in our example. If we draw a point like y from q which already goes in the right direction, we will always accept this proposal and move to y. If, however, we draw a point like y’, at which \pi is smaller, we would still accept this point with a non-zero probability. Thus if we have reached a local maximum like the one on the right hand side of the diagram, there is still a chance that we can escape from there and move towards the real maximum to the left.

In this form, the algorithm is extremely easy to implement. All we need is a function propose that creates the next proposal, and a function p that calculates the value of the probability density \pi at some point. Then an implementation in Python is as follows.

import numpy as np

# propose and p have to be supplied, as described above; for the Cauchy example
# below one could use p = lambda x: 1.0 / (1.0 + x**2) and
# propose = lambda x: x + np.random.normal(scale=0.5)
N = 500000                     # number of steps (the original script reads this from args.steps)

chain = []
X = 0
chain.append(X)
for n in range(N):
    Y = propose(X)             # draw a candidate from the proposal distribution
    U = np.random.uniform()
    alpha = p(Y) / p(X)        # acceptance probability (values above one simply mean "always accept")
    if U <= alpha:             # accept with probability alpha
        X = Y
    chain.append(X)            # if not accepted, X is unchanged, i.e. we stay where we are

In the diagram below, this algorithm has been applied to a Cauchy distribution with mode zero and scale one, using a normal distribution with mean x and standard deviation 0.5 as a proposal for the next location. The chain was calculated for 500,000 steps. The diagram in the upper part shows the values of the chain during the simulation.

Then the first 100,000 steps were discarded and considered as "burn-in" time for the chain to stabilize. Out of the remaining 400,000 sample points, points were chosen with a distance of 500 time steps to obtain a sample which is approximately independent and identically distributed. This is called subsampling and is typically not necessary for Monte Carlo integration (see [2], chapter 1, for a short discussion of the need for subsampling), but is done here for the sake of illustration. The resulting subsample is plotted as a histogram in the lower left corner of the diagram. The yellow line is the actual probability density.

[Figure: MetropolisCauchySample – chain values, integral estimates, autocorrelation and sampled histogram]

We see that after a few thousand steps, the chain converges, but continues to have spikes. However, the sampled distribution is very close to the sample generated by the Python standard method (which is to take the quotient of two independent samples from a standard normal distribution).

In the diagram at the bottom, I have displayed how the estimates for the integrals of two functions (\sin(x) and \cos(x)), approximated using the partial sums, develop over time. We see that even though we still have huge spikes, the integral remains comparatively stable and converges already after a few thousand iterations. Even if we run the simulation only for 1000 steps, we already get close to the actual values zero (for \sin(x), for symmetry reasons) and \approx 0.3678 (for \cos(x), obtained using the scipy.integrate.quad integration routine).
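
For completeness, here is a small sketch of how these partial sums can be obtained with numpy, assuming that chain is the list built by the sampler above.

import numpy as np

samples = np.array(chain)
n = np.arange(1, len(samples) + 1)
partial_sin = np.cumsum(np.sin(samples)) / n     # running estimate for the expectation value of sin
partial_cos = np.cumsum(np.cos(samples)) / n     # running estimate for the expectation value of cos
print(partial_sin[-1], partial_cos[-1])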

In the second diagram in the middle row, I have plotted the autocorrelation versus the lag, as an indicator for the failure of the sample points to be independent. Recall that for two samples X and Y, the Pearson correlation coefficient is the number

\frac{E\left[(X-\bar{X})(Y-\bar{Y})\right]}{\sigma_X \sigma_Y}

where \sigma_X and \sigma_Y are the standard deviations of X and Y. In our case, given a lag, i.e. a number l less than the length of the chain, we can form two samples, one consisting of the points X_0, X_1, X_2, \dots and the second one consisting of the points of the shifted series X_l, X_{l+1}, X_{l+2}, \dots. The autocorrelation with lag l is then defined to be the correlation coefficient between these two series. In the diagram, we can see how the autocorrelation depends on the lag. We see that for a large lag, the autocorrelation becomes small, supporting our intuition that the series and the shifted series become almost independent. However, if we execute several simulation runs, we will also find that in some cases, the convergence of the autocorrelation is very slow, so care needs to be taken when trying to obtain a nearly independent sample from the chain.
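
Here is a sketch of how this autocorrelation can be computed with numpy, again assuming that chain holds the sampled values.

import numpy as np

def autocorrelation(chain, lag):
    # Pearson correlation coefficient between the series and the series shifted by lag
    x = np.asarray(chain, dtype=float)
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

print(autocorrelation(chain, 500))   # e.g. for the subsampling distance of 500 used above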

In practice, the autocorrelation is probably not a good measure for the convergence of a Markov chain. It is important to keep in mind that obtaining an independent sample is not the point of the Markov chain – the real point is that even though the sample is autocorrelated, we can approximate expectation values fairly well. However, I have included the autocorrelation here for the sake of illustration.

This form of proposal distributions is not the only one that is commonly used. Another choice that appears often is called an independence sampler. Here the proposal distribution is chosen to be independent of the current location x of the chain. This gives us an algorithm that resembles the importance sampling method and also shares some of the difficulties associated with it – in my notes on Markov chain Monte Carlo methods, I have included a short discussion and a few examples. These notes also contain further references and a short discussion of why and when the Markov chain underlying a Metropolis-Hastings sampler converges.

Other variants of the algorithm work by updating – in a high-dimensional space – either only one variable at a time or entire blocks of variables that are known to be independent.

Finally, if we are dealing with a state space that can be split as a product X_1 \times X_2, we can use the conditional probability given either x_1 or x_2 as a proposal distribution. Thus, we first fix x_2 and draw a new value for x_1 from the conditional probability of x_1 given the current value of x_2. Then we move to this new coordinate, fix x_1, draw from the conditional distribution of x_2 given x_1 and set the new value of x_2 accordingly. It can be shown (see for example [5]) that the acceptance probability is one in this case. So we end up with the Gibbs sampling algorithm that we have already used in the previous post on Ising models.

Monte Carlo sampling methods are a broad field, and even though this has already been a long post, we have only scratched the surface. I invite you to consult some of the references below and / or my notes for more details. As always, you will also find the sample code on GitHub and might want to play with this to reproduce the examples above and see how different settings impact the result.

In a certain sense, this post is the last post in the series on restricted Boltzmann machines, as it provides (at least some of) the mathematical background behind the Gibbs sampling approach that we used there. Boltzmann machines are examples of stochastic neuronal networks that can be applied to unsupervised learning, i.e. they allow a model to learn from a sample distribution without the need for labeled data. In the next few posts on machine learning, I will take a closer look at some other algorithms that can be used for unsupervised learning.

References

1. N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller, Equation of state calculations by fast computing machines, J. Chem. Phys. Vol. 21, No. 6 (1953), pp. 1087-1092
2. S. Brooks, A. Gelman, G.L. Jones, X.L. Meng (ed.), Handbook of Markov Chain Monte Carlo, Chapman & Hall / CRC Press, Boca Raton 2011
3. W.K. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika, Vol. 57, No. 1 (1970), pp. 97-109
4. R.M. Neal, Probabilistic inference using Markov chain Monte Carlo methods, Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993
5. C.P. Robert, G. Casella, Monte Carlo Statistical Methods, Springer, New York 1999

The difficulty in the bitcoin protocol

Mining is a challenge – a miner has to solve a mathematical puzzle to create a valid block and is rewarded with a certain bitcoin amount (12.5 at the time of writing) for investing electricity and computing power. But technology evolves, and the whole mining process would be pointless if the challenge could not be adapted dynamically to make sure that mining power and the difficulty of the challenge stay in balance. In this post, we will look in a bit more detail into this mechanism.

We have already seen that a block header contains – at character 144 – a four byte long number that is called bits in the source code and which seems to be related to the difficulty. As an example, let us take the block with the hash

0000000000000000000595a828f927ec02763d9d24220e9767fe723b4876b91e

You can get a human readable description of the block at this URL and will see the following output.

[Figure: BitcoinBlockExample – block summary as displayed by blockchain.info]

This output has a field bits that has the value 391129783. If you convert this into a hexadecimal string, you obtain 0x17502ab7, using for instance Python’s hex function. Now this function will give you the big-endian representation of the integer value. To convert to little endian as it can be found in the serialized format of a bitcoin block, you have to reverse the order byte-wise and get b72a5017. If you now display the raw format of the block using this link and search for that string, you will actually find it at character 144, as expected.
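
In Python, these two representations can be reproduced with a few lines, the value 391129783 being the bits field displayed above.

bits = 391129783
print(hex(bits))                          # 0x17502ab7 - big-endian representation
print(bits.to_bytes(4, "little").hex())   # b72a5017 - the byte order used in the serialized block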

Now on the screenshot above, you also see the difficulty displayed, which is a floating point number and is 3511060552899.72 in this case. How is that difficulty related to the bits?

As so often, we can find the answer in the bitcoin core source code, starting at the JSON-RPC interface. So let us look at the function blockheaderToJSON in rpc/blockchain.cpp which is responsible for transforming a block header from the internal representation into a human readable JSON format. Here we find the line

result.push_back(Pair("difficulty", GetDifficulty(blockindex)));

So let us visit the function GetDifficulty that we find in the same file, actually just a few lines further up. Here we see that the bits are actually split into two parts. The first part, let me call this the exponent, is just the value of the top 8 bits. The second part, the value of the remaining 24 bits, is called the coefficient. The difficulty is then calculated by dividing the constant 0xFFFF by the coefficient and shifting the result by 29 minus the exponent bytes, i.e. according to the formula

\text{difficulty} = 256^{(29 - \text{exponent})} \cdot (65535.0 / \text{coefficient})

In Python, we can therefore use the following two lines to extract the difficulty from a block header in raw format.

# raw contains the serialized block in hex, the bits field starts at character 144
nBits = int.from_bytes(bytes.fromhex(raw[144:152]), "little")
difficulty = 256**(29 - (nBits >> 24)) * (65535.0 / float(nBits & 0xFFFFFF))

If you try this out for the block above, you will get 3511060552899.7197. Up to rounding after the second digit after the decimal point, this is exactly what we also see on the screenshot of blockchain.info. So far this is quite nice…

However, we are not yet done – this is not the only way to represent the difficulty. The second quantity we need to understand is the target. Recall that the mathematical puzzle that a miner needs to solve is to produce a block with a hash that does not exceed a given number. This number is the target. The lower the target, the smaller the range of 256 bit numbers that do not exceed the target, and the more difficult it is to solve the puzzle. So a low target implies a high difficulty and vice versa.

In fact, there are two target values. One target is part of every block and stored in the bits field – we will see in a minute how this works. The second target is a global target that all nodes in the bitcoin network share – this value is updated independently by each node, but based on the same data (the blockchain) and the same rules, so that they all arrive at the same conclusion. A block is considered valid if it solves the mathematical puzzle according to the target stored in itself and if, in addition, this target matches the global target.

To understand how this works and the exact relation between target and difficulty, it is again useful to take a look at the source code. The relevant function is the function CheckProofOfWork in pow.cpp which is surprisingly short.

bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params& params)
{
    bool fNegative;
    bool fOverflow;
    arith_uint256 bnTarget;

    bnTarget.SetCompact(nBits, &fNegative, &fOverflow);

    // Check range
    if (fNegative || bnTarget == 0 || fOverflow || bnTarget > UintToArith256(params.powLimit))
        return false;

    // Check proof of work matches claimed amount
    if (UintToArith256(hash) > bnTarget)
        return false;

    return true;
}

Let us try to understand what is going on here. First, a new 256 bit integer bnTarget is instantiated and populated from the bits (that are taken from the block) using the method SetCompact. In addition to a range check, it is then verified that the hash of the block does not exceed the target – this is the mathematical puzzle that we need to solve.

The second check – verifying that the target in the block matches the global target – happens in validation.cpp in the function ContextualCheckBlockHeader. At this point, the function GetNextWorkRequired is called, which determines the correct value for the global target. This is then compared to the bits field in the block and the validation fails if they do not match. So for a block to be accepted as valid, the miner needs to:

  • Actually solve the puzzle, i.e. provide a block with a hash value equal to or smaller than the current global target
  • Include this target as bits in the generated block

This leaves a couple of questions open. First, we still have to understand how the second step – encoding the target in the bits field – works. And then we have to understand how the global target is determined.

Let us start with the first question. We have seen that the target is determined from the bits using the SetCompact method of the class arith_uint256. This function is in fact quite similar to the GetDifficulty function studied before. Ignoring overflows and signs for a moment, we find that the target is determined from the bits as

\text{target} = \text{coefficient}\cdot 2^{8(\text{exponent}-3)}

If we compare this to the formula for the difficulty that we have obtained earlier, we find that there is a simple relation between them:

\text{target} = \text{difficulty}^{-1}\cdot \text{std\_target}

where std_target is the target corresponding to difficulty 1.0, given by 0xFFFF shifted by 208 bits to the left, i.e. 0xFFFF followed by 52 hexadecimal zeros. So up to a constant, target and difficulty are simply inverse to each other.
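
We can quickly check this relation in Python for the example block from above, using the bits value 0x17502ab7 that we have already extracted.

nBits = 0x17502ab7
exponent = nBits >> 24
coefficient = nBits & 0xFFFFFF

target = coefficient * 256**(exponent - 3)
difficulty = 256**(29 - exponent) * (65535.0 / coefficient)
std_target = 0xFFFF << 208                  # target corresponding to difficulty 1.0

print(difficulty)                           # 3511060552899.7197
print(target * difficulty / std_target)     # approximately 1.0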

Now let us turn to the second question – how can the nodes agree on the correct global target? To see this, we need to look at the function GetNextWorkRequired. In most cases, this function will simply return the bits field of the current last block in the blockchain. However, every 2016 blocks, it will adjust the difficulty according to the following rule.

First, it will use the time stamps in the blocks to calculate how much time has passed since the block which is 2016 blocks back in the chain was created – this value is called the actual time span. Then this is compared with the target time span, which is two weeks. The new target is then obtained by the following formula (ignoring some range checks that are done to avoid too massive jumps).

\text{target}_{new} = \text{target}_{old} \cdot \frac{\text{actual time span}}{\text{target time span}}

Thus if the actual time span is smaller than the target time span, indicating that the mining power is too high and therefore the rate of block creation is too high, the target decreases and consequently the difficulty increases. This should make it more difficult for the miners to create new blocks and the block creation rate should go down again. Similarly, if the difficulty is too high, this mechanism will make sure that it is adjusted downwards. Thus the target rate is 2016 blocks being created in two weeks, i.e. 336 hours, which is a rate of six blocks per hour, i.e. 10 minutes per block.
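
As a sketch (ignoring the range checks and the integer arithmetic details of the actual C++ implementation), the retargeting rule can be written down in a few lines of Python.

TARGET_TIMESPAN = 14 * 24 * 60 * 60          # two weeks in seconds

def retarget(old_target, actual_timespan):
    return old_target * actual_timespan // TARGET_TIMESPAN

# if the last 2016 blocks were found in only one week, the target is cut in half,
# i.e. the difficulty doubles
new_target = retarget(0xFFFF << 208, 7 * 24 * 60 * 60)
print(new_target == (0xFFFF << 208) // 2)    # True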

Finally, it is worth mentioning that there is a lower limit for the difficulty, corresponding to an upper limit for the target. This limit is (as most of the other parameters that we have discussed) contained in chainparams.cpp and is specific to the three networks main, testnet and regtest. For instance, for the main network, this is

consensus.powLimit = uint256S("00000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff");

whereas the same parameter for the regression testing network is

consensus.powLimit = uint256S("7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");

This, together with the target of the genesis block (which determines the target for the first 2016 blocks, bits = 0x207fffff), determines the initial difficulty in a regression test network. Let us verify this (I knew I would somehow find a reason to add a notebook to this post…)
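
In case the embedded notebook is not displayed, the check only needs the difficulty formula that we derived above.

nBits = 0x207fffff                     # bits of the regtest genesis block
difficulty = 256**(29 - (nBits >> 24)) * (65535.0 / float(nBits & 0xFFFFFF))
print(difficulty)                      # approximately 4.656542373906925e-10, the difficulty that getblock reports on regtest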


We are now well equipped to move towards our final project in this series – building a Python based miner. The last thing that we need to understand is how the miner operates and communicates with the bitcoin server, and this will be the topic of the next post.