Understanding Maximum Likelihood

An Interactive Visualization

Created by Kristoffer Magnusson


The maximum likelihood method is used to fit many models in statistics. In this post I will present some interactive visualizations to try to explain maximum likelihood estimation and some common hypothesis tests (the likelihood ratio test, Wald test, and score test).

We will use a simple model with only two unknown parameters: the mean and variance. Our primary focus will be on the mean and we'll treat the variance as a nuisance parameter.

Likelihood Calculation

Before we do any calculations, we need some data. So, here are 10 random observations from a normal distribution with unknown mean (μ) and variance (σ²).

Y = [1.0, 2.0, …] (the remaining values are drawn randomly by the app)

We also need to assume a model. We'll go with the model that we know generated this data: $y \sim \mathcal N(\mu, \sigma^2)$. The challenge now is to find the combination of values for μ and σ² that maximizes the likelihood of observing this data (given our assumed model). Try moving the sliders around to see what happens.
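To make this concrete, here's a minimal Python sketch of how such data could be simulated. The "true" values μ = 100 and σ² = 225 and the seed are made-up choices for illustration, not the ones behind the app's data:

```python
import numpy as np

# Hypothetical "true" parameter values -- the app draws its own random data.
rng = np.random.default_rng(seed=1)
mu_true, sigma2_true = 100.0, 15.0 ** 2

# Draw n = 10 observations from y ~ N(mu, sigma^2).
y = rng.normal(loc=mu_true, scale=np.sqrt(sigma2_true), size=10)
print(np.round(y, 1))
```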

[Sliders: mean (μ) and variance (σ²).]

We can calculate the joint likelihood by multiplying the densities for all observations. However, often we calculate the log-likelihood instead, which is

$\ell(\mu, \sigma^2) = \sum_i^n \ln f_y(y_i)$
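In code, this is just the sum of the log densities. Here's a minimal sketch using SciPy, with the two observations shown above and arbitrary candidate values for μ and σ²:

```python
import numpy as np
from scipy import stats

y = np.array([1.0, 2.0])   # the observations shown above (truncated)
mu, sigma2 = 1.5, 0.25     # arbitrary candidate parameter values

# Log-likelihood: sum the log densities of y under N(mu, sigma^2).
loglik = stats.norm.logpdf(y, loc=mu, scale=np.sqrt(sigma2)).sum()
print(loglik)
```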

The parameter values that give the largest log-likelihood are the maximum likelihood estimates (MLEs).

Finding the Maximum Likelihood Estimates

Since we're using a very simple model, there are a couple of ways to find the MLEs. If we repeat the above calculation for a wide range of parameter values, we get the plots below. The joint MLEs can be found at the top of the contour plot, which shows the likelihood function for a grid of parameter values. We can also find the MLEs analytically using some calculus. We find the top of the hill by using the partial derivatives with respect to μ and σ², which is generally called the score function (U). Solving the score equations means finding the combination of μ and σ² that makes both partial derivatives zero.
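For the normal model, the score function and the solution to the score equations have well-known closed forms:

$$
U(\mu, \sigma^2) =
\begin{pmatrix}
\dfrac{\partial \ell}{\partial \mu} \\[2ex]
\dfrac{\partial \ell}{\partial \sigma^2}
\end{pmatrix}
=
\begin{pmatrix}
\dfrac{1}{\sigma^2} \sum_{i=1}^{n} (y_i - \mu) \\[2ex]
-\dfrac{n}{2\sigma^2} + \dfrac{1}{2\sigma^4} \sum_{i=1}^{n} (y_i - \mu)^2
\end{pmatrix}
$$

Setting both components to zero and solving gives $\hat\mu = \bar y$ and $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} (y_i - \bar y)^2$.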

[Plot: the log-likelihood as a function of the mean.]

For more challenging models, we often need to use some optimization algorithm. Basically, we let the computer iteratively climb towards the top of the hill. You can use the controls below to see how a gradient ascent or Newton-Raphson algorithm finds its way to the maximum likelihood estimate.

[Interactive controls: choose gradient ascent or Newton-Raphson and step through the iterations.]

[Plot: the log-likelihood as a function of the variance.]

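As a rough sketch of what the app is doing, here are both algorithms applied to the mean in Python, holding σ² fixed at its MLE. The data, starting value, and step size are all arbitrary choices:

```python
import numpy as np

y = np.array([1.0, 2.0])                  # the observations from above
n = len(y)
sigma2 = np.mean((y - y.mean()) ** 2)     # variance held fixed at its MLE

def score(mu):
    # First derivative of the log-likelihood with respect to mu.
    return np.sum(y - mu) / sigma2

def information(mu):
    # Negative second derivative (observed information); constant here.
    return n / sigma2

# Gradient ascent: repeatedly step uphill along the score.
mu = 0.0                                  # arbitrary starting value
for _ in range(200):
    mu += 0.01 * score(mu)                # 0.01 is an arbitrary step size
print("gradient ascent:", round(mu, 4))

# Newton-Raphson: scale each step by the inverse curvature.
mu = 0.0
for _ in range(10):
    mu += score(mu) / information(mu)
print("Newton-Raphson:", round(mu, 4))    # both converge to the sample mean
```

Because the log-likelihood is exactly quadratic in μ here, Newton-Raphson lands on the sample mean in a single step, while gradient ascent approaches it gradually.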

Inference

After we've found the MLEs we usually want to make some inferences, so let's focus on three common hypothesis tests. Use the sliders below to change the null hypothesis and the sample size.

[Interactive illustration with sliders for the sample size (n) and the null value (μ₀).]

The score function evaluated at the null is,

$U(\mu_0, \hat\sigma_0^2) = \frac{\partial}{\partial \mu_0}\ell(\mu_0, \hat\sigma_0^2)$

The observed Fisher information is the negative of the second derivative. It is related to the curvature of the likelihood function: try increasing the sample size and note that the peak gets narrower around the MLE and that the information increases. The inverse of I is also the asymptotic variance of the MLE.

$I(\mu_0, \hat\sigma_0^2) = -\frac{\partial^2}{\partial \mu_0^2}\ell(\mu_0, \hat\sigma_0^2)$
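For the normal model, with $\hat\sigma_0^2 = \frac{1}{n}\sum_i (y_i - \mu_0)^2$ denoting the variance MLE under the null, these reduce to:

$$U(\mu_0, \hat\sigma_0^2) = \frac{n(\bar y - \mu_0)}{\hat\sigma_0^2}, \qquad I(\mu_0, \hat\sigma_0^2) = \frac{n}{\hat\sigma_0^2}$$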

Hypothesis Tests

We have the following null and alternative hypotheses,

$H_0: \mu = 80 \quad \text{versus} \quad H_1: \mu \ne 80$

The likelihood ratio test compares the likelihoods of two models; in this example, the likelihood evaluated at the MLE and at the null. This is illustrated in the plot by the vertical distance between the two horizontal lines. If we multiply the difference in log-likelihoods by -2 we get the statistic,

$\text{LR} = -2[\ell(\mu_0, \hat\sigma_0^2) - \ell(\hat\mu, \hat\sigma^2)]$

Asymptotically, LR follows a $\chi^2$ distribution with 1 degree of freedom, from which we can calculate a p-value.

Note: The figure is simplified and does not account for the fact that each likelihood is based on different variance estimates.
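As a minimal sketch, here is the LR statistic computed in Python on hypothetical data, together with the Wald and score statistics mentioned in the introduction; all three are compared against a χ²(1) distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
y = rng.normal(loc=100, scale=15, size=10)   # hypothetical sample
n, mu0 = len(y), 80.0                        # null value from above

def loglik(mu, sigma2):
    # Normal log-likelihood of the sample.
    return stats.norm.logpdf(y, loc=mu, scale=np.sqrt(sigma2)).sum()

# MLEs under the alternative, and the variance MLE under the null.
mu_hat = y.mean()
s2_hat = np.mean((y - mu_hat) ** 2)
s2_null = np.mean((y - mu0) ** 2)

# Likelihood ratio: -2 times the difference in maximized log-likelihoods.
LR = -2 * (loglik(mu0, s2_null) - loglik(mu_hat, s2_hat))

# Wald: squared distance from the null, weighted by the information at the MLE.
wald = (mu_hat - mu0) ** 2 * n / s2_hat

# Score: squared score at the null, divided by the information at the null.
U = n * (mu_hat - mu0) / s2_null
score = U ** 2 / (n / s2_null)

for name, stat in [("LR", LR), ("Wald", wald), ("Score", score)]:
    p = stats.chi2.sf(stat, df=1)            # asymptotic chi-square(1) p-value
    print(f"{name}: statistic = {stat:.2f}, p = {p:.4f}")
```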

Written by Kristoffer Magnusson, a researcher in clinical psychology. You should follow him on Twitter and come hang out on the open science discord Git Gud Science.

FAQ

This page is still under construction; formulas will be added later. Pull requests are welcome!

Cite this page according to your favorite style guide. The references below are automatically generated and contain the correct information.

APA 7

Magnusson, K. (2020). Understanding Maximum Likelihood: An interactive visualization (Version 0.1.2) [Web App]. R Psychologist. https://rpsychologist.com/likelihood/


Please report errors or suggestions by opening an issue on GitHub.

Will it overload the server if I share this with a large audience? No, it will be fine. The app runs in your browser, so the server only needs to serve the files.

Can I use the visualizations in my own material? Yes, go ahead! The design of the visualizations on this page is dedicated to the public domain, which means "you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission" (see Creative Commons' CC0 license). Although attribution is not required, it is always appreciated!

Contribute/Donate

There are many ways to contribute to free and open software. If you like my work and want to support it, there are a few options below.

Pull requests are also welcome, or you can contribute by suggesting new features, adding useful references, or helping fix typos. Just open an issue on GitHub.

Sponsors

You can sponsor my open source work using GitHub Sponsors and have your name shown here.


More Visualizations


Cohen's d

An interactive app to visualize and understand standardized effect sizes.

Statistical Power and Significance Testing

An interactive version of the traditional Type I and II error illustration.

Confidence Intervals

An interactive simulation of confidence intervals.

Bayesian Inference

An interactive illustration of prior, likelihood, and posterior.

Correlations

An interactive scatterplot that lets you visualize correlations of various magnitudes.

Equivalence and Non-Inferiority Testing

Explore how superiority, non-inferiority, and equivalence testing relate to a confidence interval.

P-value distribution

Explore the expected distribution of p-values under varying alternative hypotheses.

t-distribution

Interactively compare the t- and normal distribution.