The EM algorithm and Gaussian mixture models – part II

In this post, I will discuss the general form of the EM algorithm to obtain a maximum likelihood estimator for a model with latent variables. First, let us describe our model. We suppose that we are given some joint distribution of a random variable X (the observed variables) and a random variable Z (the latent …
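The full post works out the E-step and M-step in detail; as a rough preview, the following is a minimal sketch of one EM iteration for a two-component, one-dimensional Gaussian mixture. The toy data, the initialization, and the number of components are illustrative assumptions and are not taken from the post itself.

```python
import numpy as np

def em_step(x, pi, mu, sigma2):
    """One EM iteration for a K-component 1D Gaussian mixture (illustrative sketch)."""
    # E-step: responsibilities r[n, k] = P(Z = k | X = x_n) under the current parameters
    densities = np.exp(-0.5 * (x[:, None] - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    r = pi * densities
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate weights, means and variances from the responsibilities
    Nk = r.sum(axis=0)
    pi_new = Nk / len(x)
    mu_new = (r * x[:, None]).sum(axis=0) / Nk
    sigma2_new = (r * (x[:, None] - mu_new) ** 2).sum(axis=0) / Nk
    return pi_new, mu_new, sigma2_new

# Toy data: two well-separated clusters (an assumed example, not the post's data set)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 1.0, 300)])
pi, mu, sigma2 = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    pi, mu, sigma2 = em_step(x, pi, mu, sigma2)
print(pi, mu, sigma2)
```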

The EM algorithm and Gaussian mixture models – part I

In the last few posts on machine learning, we have looked in detail at restricted Boltzmann machines. RBMs are a prime example of unsupervised learning - they learn a given distribution and are able to extract features from a data set, without the need to label the data upfront. However, there are of course many …
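To make the feature-extraction claim a bit more concrete, here is a minimal sketch of how the hidden-unit activation probabilities of a trained binary RBM can serve as learned features. The weight matrix, bias vector, and input shapes are placeholder assumptions; the actual training code is covered in the earlier RBM posts.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_features(v, W, b):
    """Hidden-unit activation probabilities p(h = 1 | v) of a binary RBM.

    After training, these probabilities are commonly used as learned features of the
    visible vector v. Here W (visible-by-hidden weights) and b (hidden bias) are
    assumed to come from a previously trained model."""
    return sigmoid(v @ W + b)

# Illustrative shapes only: 784 visible units (e.g. MNIST pixels), 128 hidden units
rng = np.random.default_rng(1)
W = rng.normal(scale=0.01, size=(784, 128))
b = np.zeros(128)
v = rng.integers(0, 2, size=(10, 784))   # a small batch of binary "images"
features = rbm_features(v, W, b)         # shape (10, 128)
print(features.shape)
```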

Why you need statistics to understand neural networks

When I first tried to learn about neural networks, I did what most of us would probably do - I started to look for tutorials, blogs, etc. on the web and was surprised by the vast amount of resources that I found. Almost every blog or webpage about neural networks has a section on training …

The Metropolis-Hastings algorithm

In this post, we will investigate the Metropolis-Hastings algorithm, which is still one of the most popular algorithms in the field of Markov chain Monte Carlo methods, even though its first appearance (see [1]) dates back to 1953, more than 60 years ago. It appears, for instance, on the CiSE top ten list …
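As a quick illustration of the algorithm the post discusses, the following is a minimal random-walk Metropolis-Hastings sampler with a symmetric Gaussian proposal; the target density (a standard normal), the step size, and the sample count are illustrative assumptions, not the post's example.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.

    log_target is the log of the (possibly unnormalized) target density. Because the
    proposal is symmetric, the Hastings correction cancels and the acceptance
    probability reduces to the Metropolis ratio."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + proposal_scale * rng.standard_normal()
        # Accept with probability min(1, p(proposal) / p(x))
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Example target: a standard normal distribution (illustrative choice)
samples = metropolis_hastings(lambda x: -0.5 * x ** 2, x0=0.0, n_samples=10_000)
print(samples.mean(), samples.std())
```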

Recurrent and ergodic Markov chains

Today, we will look in more detail at the convergence of Markov chains - what it actually means and how we can tell, given the transition matrix of a Markov chain on a finite state space, whether the chain actually converges. So suppose that we are given a Markov chain on a finite state space, with …
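To give a flavour of the finite-state-space case discussed in the post, here is a minimal sketch that checks a transition matrix for regularity (equivalently, irreducibility plus aperiodicity) and computes the stationary distribution; the example matrix is an illustrative assumption, not one taken from the post.

```python
import numpy as np

def is_regular(P, max_power=None):
    """Check whether the stochastic matrix P is regular, i.e. some power of P has
    strictly positive entries. For a finite chain this is equivalent to being
    irreducible and aperiodic, which guarantees convergence to a unique
    stationary distribution."""
    n = P.shape[0]
    max_power = max_power or (n - 1) ** 2 + 1   # Wielandt's bound for primitive matrices
    Q = np.eye(n)
    for _ in range(max_power):
        Q = Q @ P
        if np.all(Q > 0):
            return True
    return False

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to one."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

# A small example chain on three states (illustrative transition matrix)
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
print(is_regular(P))                 # True: the chain converges
print(stationary_distribution(P))    # the limiting distribution
```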