What is the EM Algorithm in Machine Learning? [Explained with Examples]

The EM algorithm, or Expectation-Maximization algorithm, is a method for estimating the parameters of latent variable models. It was proposed by Arthur Dempster, Nan Laird, and Donald Rubin in 1977.

A latent variable model comprises observable variables and unobservable variables. Observed variables are those that can be measured whereas unobserved (latent/hidden) variables are inferred from observed variables. 

As explained by the trio, the EM algorithm can be used to find local maximum likelihood (MLE) or maximum a posteriori (MAP) estimates of the parameters in a statistical model with latent variables (unobservable variables that need to be inferred from observable variables). It can be used to predict these values or to fill in data that is missing or incomplete, provided that you know the general form of the probability distribution associated with these latent variables.

To put it simply, the general principle behind the EM algorithm in machine learning is to use the observed data to estimate the values of the unobservable (latent) variables, and then use those estimates to update the model parameters. This is repeated until the values converge.

The algorithm is a rather powerful tool in machine learning and underlies many unsupervised algorithms, including the k-means clustering algorithm, among other EM algorithm variants.

The Expectation-Maximization Algorithm

Let’s explore the mechanism of the Expectation-Maximization algorithm in Machine Learning:


  • Step 1: We start with a set of incomplete or missing data and a set of starting parameters. We assume that the observed data and the initial parameter values are generated from a specific model.
  • Step 2: Based on the observed values in the observable instances of the available data, we predict or estimate the values in the unobservable instances or the missing data. This is known as the Expectation step (E-step).
  • Step 3: Using the data generated in the E-step, we update the parameters and complete the data set. This is known as the Maximization step (M-step), which is used to update the hypothesis.

Steps 2 and 3 are repeated until convergence; that is, if the values have not yet converged, we repeat the E-step and M-step.
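To make this loop concrete, here is a minimal sketch of the generic structure in Python. The names e_step, m_step, and log_likelihood are hypothetical callables supplied by the caller for whatever model you are fitting; they are not part of the algorithm's definition.

def run_em(x, theta, e_step, m_step, log_likelihood, tol=1e-4, max_iter=100):
    ''' Generic EM loop. e_step, m_step, and log_likelihood are
        model-specific functions passed in by the caller.
    '''
    prev_ll = float("-inf")
    for _ in range(max_iter):
        expectations = e_step(x, theta)      # Step 2: estimate the hidden / missing values
        theta = m_step(x, expectations)      # Step 3: re-fit the parameters to the completed data
        curr_ll = log_likelihood(x, theta)
        if abs(curr_ll - prev_ll) < tol:     # stop once the values converge
            break
        prev_ll = curr_ll
    return theta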



Advantages and Disadvantages of the EM Algorithm

Advantages of the EM Algorithm
1. Every iteration of the EM algorithm results in a guaranteed increase in the likelihood.
2. The Expectation step and Maximization step are rather easy, and the solution for the latter mostly exists in closed form.
Disadvantages of the EM Algorithm
1. The Expectation-Maximization algorithm takes both forward and backward probabilities into account, in contrast with numerical optimization, which takes only the forward probabilities into account.
2. EM algorithm convergence is very slow, and it converges only to a local optimum.

Applications of the EM Algorithm 

The latent variable model has plenty of real-world applications in machine learning.

  1. It is used in unsupervised data clustering and psychometric analysis.
  2. It is also used to estimate the parameters of Gaussian density functions.
  3. The EM algorithm finds extensive use in estimating the parameters of Hidden Markov Models (HMMs) and other mixture models.
  4. The EM algorithm finds plenty of use in natural language processing (NLP), computer vision, and quantitative genetics.
  5. Other important applications of the EM algorithm include image reconstruction in the fields of medicine and structural engineering.

Let us understand the EM algorithm using a Gaussian Mixture Model.

EM Algorithm For Gaussian Mixture Model

To estimate the parameters of a Gaussian Mixture Model, we will need some observed variables generated by two separate processes whose probability distributions are known. However, the data points of the two processes are combined and we do not know which distribution they belong to. 

We aim to estimate the parameters of these distributions using the Maximum Likelihood estimation of the EM algorithm as explained above. 

Here is the code we will use: 

# Given a function G(x_i, mu, sigma) that computes the density of a
# Gaussian at point x_i given mu, sigma; and
# another function L(x, mu, sigma, pi) that computes the log-likelihood

import numpy as np
from numpy.random import rand

def estimate_gmm(x, K, tol=0.001, max_iter=100):
    ''' Estimate GMM parameters.
        :param x: list of observed real-valued variables
        :param K: integer for number of Gaussians
        :param tol: tolerated change in log-likelihood between iterations
        :return: mu, sigma, pi parameters
    '''
    # 0. Initialize theta = (mu, sigma, pi)
    N = len(x)
    mu, sigma = [rand() for _ in range(K)], [rand() for _ in range(K)]
    pi = [rand() for _ in range(K)]
    curr_L = np.inf

    for j in range(max_iter):
        prev_L = curr_L

        # 1. E-step: responsibility r[(i, k)] = p(z_i = k | x_i, theta^(t-1))
        r = {}
        for i in range(N):
            parts = [pi[k] * G(x[i], mu[k], sigma[k]) for k in range(K)]
            total = sum(parts)
            for k in range(K):
                r[(i, k)] = parts[k] / total

        # 2. M-step: update mu, sigma, pi using the responsibilities
        rk = [sum(r[(i, k)] for i in range(N)) for k in range(K)]
        for k in range(K):
            pi[k] = rk[k] / N
            mu[k] = sum(r[(i, k)] * x[i] for i in range(N)) / rk[k]
            sigma[k] = sum(r[(i, k)] * (x[i] - mu[k]) ** 2 for i in range(N)) / rk[k]

        # 3. Check exit condition: stop once the log-likelihood stops changing
        curr_L = L(x, mu, sigma, pi)
        if abs(prev_L - curr_L) < tol:
            break

    return mu, sigma, pi

In the E-step, we use Bayes' theorem to compute, for each data point, the probability that it was generated by each component, given the parameters from the previous iteration. In the M-step, we hold these responsibilities fixed and re-estimate the parameters using maximum likelihood: weighted versions of the standard mean and variance formulas give the parameters of the Gaussian mixture model.
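For context, the comments above refer to two helper functions, G and L, that the snippet assumes already exist. The sketch below shows one minimal way these helpers could be defined and how estimate_gmm might be called on synthetic data; the two generating distributions and all numeric values here are illustrative assumptions, not part of the original example.

import math
import numpy as np

def G(x_i, mu, sigma):
    # Univariate Gaussian density at x_i; sigma is treated as the variance,
    # matching the M-step update above. The small floor guards against
    # numerical underflow for poorly initialized components.
    d = math.exp(-(x_i - mu) ** 2 / (2 * sigma)) / math.sqrt(2 * math.pi * sigma)
    return max(d, 1e-300)

def L(x, mu, sigma, pi):
    # Log-likelihood of the data under the current mixture parameters
    K = len(pi)
    return sum(math.log(sum(pi[k] * G(x_i, mu[k], sigma[k]) for k in range(K)))
               for x_i in x)

# Synthetic data: two Gaussian processes mixed together (illustrative values)
np.random.seed(0)
x = np.concatenate([np.random.normal(-2, 1.0, 200),
                    np.random.normal(3, 1.5, 200)]).tolist()

mu, sigma, pi = estimate_gmm(x, K=2)
print(mu, sigma, pi)

With data like this, the estimated means should settle near the two generating centers, the sigma values near the corresponding variances, and the mixing weights pi near 0.5 each.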

Conclusion

This brings us to the end of the article. For more information on Machine Learning concepts, get in touch with the top faculty of IIIT Bangalore and Liverpool John Moores University through upGrad's Master of Science in Machine Learning & AI program.

It is an 18-month course that offers 450+ hours of learning content, 12+ industry projects, 10 Capstone project options, and 10+ coding assignments. You also enjoy personalised mentorship from industry experts and career guidance counselling through live sessions. The next batch begins on Feb 28, 2021!

