The EM algorithm and Gaussian mixture models – part I

In the last few posts on machine learning, we have looked in detail at restricted Boltzmann machines. RBMs are a prime example of unsupervised learning - they learn a given distribution and are able to extract features from a data set without the need to label the data upfront. However, there are of course many …
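
To give a concrete preview of the algorithm named in the title, here is a minimal sketch of a single EM iteration for a one-dimensional Gaussian mixture in numpy. The function name, the initialization and the one-dimensional restriction are my own illustrative assumptions, not code from the post.

```python
import numpy as np

def em_step(x, pi, mu, sigma):
    """One EM iteration for a 1D Gaussian mixture (illustrative sketch).

    x: (N,) data points; pi, mu, sigma: (K,) mixture weights, means,
    standard deviations of the K components.
    """
    # E-step: responsibilities gamma[n, k] = P(component k | x_n)
    d = x[:, None] - mu[None, :]
    dens = np.exp(-0.5 * (d / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    gamma = pi * dens
    gamma /= gamma.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from responsibilities
    Nk = gamma.sum(axis=0)
    pi = Nk / len(x)
    mu = (gamma * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    return pi, mu, sigma
```

Iterating em_step until the parameters stabilize is the core of the EM procedure the post develops.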

Why you need statistics to understand neuronal networks

When I first tried to learn about neuronal networks, I did what probably most of us would do - I started to look for tutorials, blogs etc. on the web and was surprised by the vast amount of resources that I found. Almost every blog or webpage about neuronal networks has a section on training …

Training a restricted Boltzmann machine on a GPU with TensorFlow

During the second half of the last decade, researchers have started to exploit the impressive capabilities of graphics processing units (GPUs) to speed up the execution of various machine learning algorithms (see for instance [1] and [2] and the references therein). Compared to a standard CPU, modern GPUs offer a breathtaking degree of parallelization - …
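
As a small illustration of what running a computation on a GPU looks like in practice, the snippet below pins a large matrix multiplication to the first GPU. It uses the current TensorFlow 2.x API as an assumption; the post itself may rely on an older API, and the device string and matrix sizes are purely illustrative.

```python
import tensorflow as tf

# Check which GPUs TensorFlow can see (TensorFlow 2.x API).
print(tf.config.list_physical_devices('GPU'))

# Pin a large matrix multiplication to the first GPU; the individual
# dot products of the matrix product are then computed in parallel
# across the GPU's many cores.
with tf.device('/GPU:0'):
    a = tf.random.normal([4096, 4096])
    b = tf.random.normal([4096, 4096])
    c = tf.matmul(a, b)
print(c.device)
```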

Training restricted Boltzmann machines with persistent contrastive divergence

In the last post, we looked at the contrastive divergence algorithm to train a restricted Boltzmann machine. Even though this algorithm continues to be very popular, it is by no means the only available algorithm. In this post, we will look at a different algorithm known as persistent contrastive divergence and apply it to …
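
The key difference to ordinary contrastive divergence is that the Gibbs chain used for the negative phase is not restarted at the data in every gradient step, but kept alive across parameter updates. Below is a minimal sketch of such a persistent chain; the class and variable names are my own assumptions, not the code developed in the post.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PersistentChain:
    """Gibbs chain for the negative phase that survives parameter updates."""

    def __init__(self, n_chains, n_visible):
        # start from random visible states; never reset to the data again
        self.V = (np.random.rand(n_chains, n_visible) < 0.5).astype(float)

    def sample(self, W, b, c, steps=1):
        # advance the chain a few Gibbs steps under the *current* weights
        for _ in range(steps):
            ph = sigmoid(self.V @ W + c)                      # hidden given visible
            h = (np.random.rand(*ph.shape) < ph).astype(float)
            pv = sigmoid(h @ W.T + b)                         # visible given hidden
            self.V = (np.random.rand(*pv.shape) < pv).astype(float)
        return self.V
```

The negative-phase statistics are then computed from self.V instead of from a reconstruction of the current batch.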

Learning algorithms for restricted Boltzmann machines – contrastive divergence

In the previous post on RBMs, we derived the following gradient descent update rule for the weights:

$\Delta W_{ij} = \beta \left[ \langle v_i \sigma(\beta a_j) \rangle_{\mathcal{D}} - \langle v_i \sigma(\beta a_j) \rangle_{P(v)} \right]$

In this post, we will see how this update rule can be efficiently implemented. The first thing …
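
As a hedged preview of such an implementation, here is one way to estimate this update with a single Gibbs step (the CD-1 scheme) in numpy. The function signature and variable names are my own assumptions, not the code developed in the post.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(V, W, b, c, beta=1.0, step=0.05):
    """One CD-1 estimate of Delta W for a batch V of binary visible vectors.

    V: (batch, n_visible), W: (n_visible, n_hidden),
    b: visible bias, c: hidden bias. Illustrative sketch only.
    """
    # positive phase: <v_i sigma(beta a_j)>_D with the hidden
    # activations a_j driven by the data
    ph = sigmoid(beta * (V @ W + c))
    pos = V.T @ ph
    # one Gibbs step: sample hidden units, then reconstruct the visible layer
    h = (np.random.rand(*ph.shape) < ph).astype(V.dtype)
    pv = sigmoid(beta * (h @ W.T + b))
    Vr = (np.random.rand(*pv.shape) < pv).astype(V.dtype)
    # negative phase: the same expression driven by the reconstruction,
    # approximating the model average <...>_{P(v)}
    nh = sigmoid(beta * (Vr @ W + c))
    neg = Vr.T @ nh
    return step * beta * (pos - neg) / V.shape[0]
```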

Boltzmann machines, spin, Markov chains and all that

The image above displays a set of handwritten digits on the left. They look a bit as if they had been sketched on paper by someone in a hurry and then scanned and digitized - not very accurate, but still mostly readable. Yet they are artificial, produced by a neuronal network, more precisely a so-called restricted Boltzmann …