Tag: AI
The EM algorithm and Gaussian mixture models – part I
In the last few posts on machine learning, we have looked in detail at restricted Boltzmann machines. RBMs are a prime example of unsupervised learning - they learn a given distribution and are able to extract features from a data set without the need to label the data upfront. However, there are of course many …
Why you need statistics to understand neuronal networks
When I first tried to learn about neuronal networks, I did what probably most of us would do - I started to look for tutorials, blogs and other resources on the web and was surprised by the vast number that I found. Almost every blog or webpage about neuronal networks has a section on training …
Training a restricted Boltzmann machine on a GPU with TensorFlow
During the second half of the last decade, researchers started to exploit the impressive capabilities of graphics processing units (GPUs) to speed up the execution of various machine learning algorithms (see for instance [1] and [2] and the references therein). Compared to a standard CPU, modern GPUs offer a breathtaking degree of parallelization - …
Training restricted Boltzmann machines with persistent contrastive divergence
In the last post, we looked at the contrastive divergence algorithm to train a restricted Boltzmann machine. Even though this algorithm continues to be very popular, it is by no means the only available algorithm. In this post, we will look at a different algorithm known as persistent contrastive divergence and apply it to …
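The defining feature of persistent contrastive divergence is that the Gibbs chain used for the negative phase is carried over from one parameter update to the next instead of being restarted at the data. The following is a rough NumPy sketch of such a persistent chain, not the code from the post; the names W, b, c and fantasy_v are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def advance_fantasy_chain(W, b, c, fantasy_v, gibbs_steps=1, beta=1.0):
    """Advance the persistent ("fantasy") chain by a few Gibbs steps.

    Unlike ordinary contrastive divergence, the chain is not restarted at the
    training data - fantasy_v carries the visible states over from the previous
    parameter update (all names here are assumptions for illustration).
    """
    v = fantasy_v
    for _ in range(gibbs_steps):
        p_h = sigmoid(beta * (c + v @ W))                      # P(h = 1 | v)
        h = (np.random.rand(*p_h.shape) < p_h).astype(float)   # sample hidden units
        p_v = sigmoid(beta * (b + h @ W.T))                     # P(v = 1 | h)
        v = (np.random.rand(*p_v.shape) < p_v).astype(float)    # sample visible units
    return v  # reused as the starting point of the chain in the next update
```

The returned visible states would then be used to estimate the negative phase of the weight update and kept as the starting point for the next iteration.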
Learning algorithms for restricted Boltzmann machines – contrastive divergence
In the previous post on RBMs, we derived the following gradient descent update rule for the weights:

$$\Delta W_{ij} = \beta \left[ \langle v_i \sigma(\beta a_j) \rangle_{\mathcal{D}} - \langle v_i \sigma(\beta a_j) \rangle_{P(v)} \right]$$

In this post, we will see how this update rule can be efficiently implemented. The first thing …
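To make the rule concrete, here is a minimal NumPy sketch of one such update step in which the model expectation over P(v) is approximated by a single Gibbs step (CD-1); the names W, b, c, the mini-batch V and the learning rate are assumptions for illustration, not the implementation discussed in the post.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_weight_update(W, b, c, V, beta=1.0, learning_rate=0.05):
    """One CD-1 update of the weight matrix W.

    V : mini-batch of training samples, shape (batch_size, visible_units)
    W : weights, shape (visible_units, hidden_units)
    b, c : visible and hidden bias vectors (names are assumptions)
    """
    # Positive phase: activations a_j = c_j + sum_i v_i W_ij driven by the data
    a_data = c + V @ W
    pos = V.T @ sigmoid(beta * a_data)           # <v_i sigma(beta a_j)> over the data

    # Negative phase: one Gibbs step to approximate the model distribution P(v)
    h_sample = (np.random.rand(*a_data.shape) < sigmoid(beta * a_data)).astype(float)
    v_model = sigmoid(beta * (b + h_sample @ W.T))
    a_model = c + v_model @ W
    neg = v_model.T @ sigmoid(beta * a_model)    # <v_i sigma(beta a_j)> under P(v)

    # Apply the update rule, averaged over the mini-batch
    return W + learning_rate * beta * (pos - neg) / V.shape[0]
```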
Restricted Boltzmann machines
In the previous post, we saw that the Boltzmann machine as studied so far suffers from two deficiencies. First, training is very slow, as we have to run a Gibbs sampler until convergence for every iteration of the gradient descent algorithm. Second, we can only capture the second moments of the data distribution and …
Hopfield networks: practice
Having discussed Hopfield networks from a more theoretical point of view, let us now see how we can implement a Hopfield network in Python. First, let us take a look at the data structures. We will store the weights and the state of the units in a class HopfieldNetwork. The weights are stored in …
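As a rough illustration of what such a data structure could look like, here is a minimal sketch of a HopfieldNetwork class with a weight matrix and a state vector; the attribute names, the Hebbian training rule and the asynchronous update loop are assumptions for illustration, not the actual code developed in the post.

```python
import numpy as np

class HopfieldNetwork:
    """Minimal sketch of a Hopfield network with N binary units (states +1/-1)."""

    def __init__(self, N):
        self.N = N
        self.W = np.zeros((N, N))   # symmetric weight matrix with zero diagonal
        self.s = np.ones(N)         # current state of the units

    def train(self, patterns):
        """Store a list of +1/-1 patterns using the Hebbian learning rule."""
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)
        self.W /= len(patterns)

    def run(self, state, steps=100):
        """Asynchronous updates: starting from state, flip one randomly chosen unit at a time."""
        self.s = np.array(state, dtype=float)
        for _ in range(steps):
            i = np.random.randint(self.N)
            self.s[i] = 1.0 if self.W[i] @ self.s >= 0 else -1.0
        return self.s
```

A stored pattern would then be recovered by calling run with a noisy version of that pattern as the initial state.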
Hopfield networks: theory
Having looked in some detail at the Ising model, we are now well equipped to tackle a class of neuronal networks that was studied by several authors in the sixties, seventies and early eighties of the last century, but was popularized by an article [1] published by J. Hopfield in 1982. The idea …
Boltzmann machines, spin, Markov chains and all that
The image above displays a set of handwritten digits on the left. They look a bit as if they had been sketched on paper by someone in a hurry and then scanned and digitized - not very accurate, but still mostly readable. Yet they are artificial, produced by a neuronal network, more precisely a so-called restricted Boltzmann …