Understanding Maximum Likelihood
An Interactive Visualization
Created by Kristoffer Magnusson
The maximum likelihood method is used to fit many models in statistics. In this post I will present some interactive visualizations to try to explain maximum likelihood estimation and some common hypothesis tests (the likelihood ratio test, Wald test, and score test).
We will use a simple model with only two unknown parameters: the mean and variance. Our primary focus will be on the mean and we'll treat the variance as a nuisance parameter.
Before we do any calculations, we need some data. So, here are 10 random observations from a normal distribution with unknown mean (μ) and variance (σ²).
[Interactive plot: the 10 observations Y, shown as draggable points]
We also need to assume a model, and we're gonna go with the model that we know generated this data: Yᵢ ∼ N(μ, σ²). The challenge now is to find the combination of values for μ and σ² that maximizes the likelihood of observing this data (given our assumed model). Try moving the sliders around to see what happens.
We can calculate the joint likelihood by multiplying the densities for all observations. However, we often calculate the log-likelihood instead, which turns the product into a sum:

ℓ(μ, σ²) = ∑ᵢ ln f(yᵢ; μ, σ²)
The combination of parameter values that gives the largest log-likelihood yields the maximum likelihood estimates (MLEs).
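To make this concrete, here is a small sketch in Python of how the log-likelihood of a normal sample can be computed (the data and parameter values are simulated for illustration, not the ones from the plot above):

```python
import numpy as np

def log_lik(y, mu, sigma2):
    """Normal log-likelihood: the sum of the log densities of all observations."""
    n = len(y)
    return -n / 2 * np.log(2 * np.pi * sigma2) - np.sum((y - mu) ** 2) / (2 * sigma2)

# Hypothetical data standing in for the 10 observations in the plot
rng = np.random.default_rng(seed=1)
y = rng.normal(loc=100, scale=15, size=10)

# The log-likelihood is maximized when mu equals the sample mean:
# moving mu away from it can only lower the value.
ll_at_mean = log_lik(y, mu=y.mean(), sigma2=y.var())
ll_shifted = log_lik(y, mu=y.mean() + 1, sigma2=y.var())
```

Evaluating `log_lik` over a grid of (μ, σ²) values is exactly what produces the contour plot discussed below.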
Finding the Maximum Likelihood Estimates
Since we are using a very simple model, there are a couple of ways to find the MLEs. If we repeat the above calculation for a wide range of parameter values, we get the plots below. The joint MLEs can be found at the top of the contour plot, which shows the likelihood function over a grid of parameter values. We can also find the MLEs analytically by using some calculus. We find the top of the hill by using the partial derivatives with respect to μ and σ², which together are generally called the score function (U). Solving the score equations means finding the combination of μ and σ² at which both partial derivatives are zero.
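For reference, the standard derivation (not worked out on this page) goes as follows: write down the log-likelihood, set each partial derivative to zero, and solve.

```latex
\ell(\mu, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mu)^2

\frac{\partial \ell}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(y_i - \mu) = 0
\quad\Rightarrow\quad \hat{\mu} = \bar{y}

\frac{\partial \ell}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(y_i - \mu)^2 = 0
\quad\Rightarrow\quad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{\mu})^2
```

Note that the MLE of σ² divides by n, not n − 1, so it is biased downward in small samples.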
For more challenging models, we often need to use some optimization algorithm. Basically, we let the computer iteratively climb towards the top of the hill. You can use the controls below to see how a gradient ascent or Newton-Raphson algorithm finds its way to the maximum likelihood estimate.
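As a sketch of the idea (not the code behind the visualization), here is a minimal Newton-Raphson climb for μ in Python. To keep the example one-dimensional, σ² is treated as fixed at its sample value, which is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
y = rng.normal(loc=10, scale=2, size=50)  # hypothetical data
sigma2 = y.var()                          # held fixed for simplicity

def score(mu):
    """First derivative of the log-likelihood with respect to mu."""
    return np.sum(y - mu) / sigma2

def observed_info(mu):
    """Negative second derivative; constant in mu for this model."""
    return len(y) / sigma2

mu = 0.0  # deliberately poor starting value
for _ in range(25):
    step = score(mu) / observed_info(mu)  # Newton-Raphson update
    mu += step
    if abs(step) < 1e-10:                 # stop once the steps become tiny
        break
# mu has now climbed to the sample mean, the MLE of the normal mean
```

Because the log-likelihood is exactly quadratic in μ here, Newton-Raphson lands on the MLE in a single step; for less well-behaved models it takes several iterations, which is what the animation above shows.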
After we've found the MLEs we usually want to make some inferences, so let's focus on three common hypothesis tests. Use the sliders below to change the null hypothesis and the sample size.
The score function evaluated at the null is U(μ₀), i.e., the slope of the log-likelihood at μ = μ₀.
The observed Fisher information (I) is the negative of the second derivative of the log-likelihood. It is related to the curvature of the likelihood function: try increasing the sample size and note that the peak gets narrower around the MLE and that the information increases. The inverse of I is also the variance of the MLE.
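A quick numerical sketch of this relationship, with simulated data (for the normal mean the observed information works out to n/σ², so it grows linearly with the sample size, and the variance of the MLE shrinks accordingly):

```python
import numpy as np

def log_lik(y, mu, sigma2):
    n = len(y)
    return -n / 2 * np.log(2 * np.pi * sigma2) - np.sum((y - mu) ** 2) / (2 * sigma2)

rng = np.random.default_rng(seed=3)
y = rng.normal(loc=0, scale=1, size=200)
mu_hat, s2 = y.mean(), y.var()

# Observed information: negative second derivative of the log-likelihood
# at the MLE, approximated with a central finite difference.
h = 1e-4
I_obs = -(log_lik(y, mu_hat + h, s2)
          - 2 * log_lik(y, mu_hat, s2)
          + log_lik(y, mu_hat - h, s2)) / h**2

var_mle = 1 / I_obs  # approximate variance of the MLE of mu (i.e. s2 / n)
```

Doubling the sample size doubles `I_obs` and halves `var_mle`, which is the narrowing of the peak you see when you drag the sample-size slider.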
We have the following null and alternative hypotheses: H₀: μ = μ₀ versus H₁: μ ≠ μ₀.
The likelihood ratio test compares the likelihoods of two models: in this example, the likelihood evaluated at the MLE and the likelihood evaluated at the null. This is illustrated in the plot by the vertical distance between the two horizontal lines. If we multiply the difference in log-likelihoods by −2 we get the statistic

LR = −2[ℓ(μ₀) − ℓ(μ̂)]
Asymptotically, LR follows a χ² distribution with 1 degree of freedom, which gives the p value shown in the visualization.
Note: The figure is simplified and does not account for the fact that each likelihood is based on different variance estimates.
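Putting the pieces together, here is a sketch of the full computation with simulated data. Unlike the simplified figure, the code below does re-estimate the variance under the null:

```python
import math
import numpy as np

def log_lik(y, mu, sigma2):
    n = len(y)
    return -n / 2 * math.log(2 * math.pi * sigma2) - np.sum((y - mu) ** 2) / (2 * sigma2)

rng = np.random.default_rng(seed=4)
y = rng.normal(loc=0.5, scale=1, size=30)  # hypothetical data

mu0 = 0.0                          # null hypothesis value
mu_hat = y.mean()                  # unrestricted MLE of mu
s2_hat = y.var()                   # MLE of sigma^2 at the unrestricted MLE
s2_null = np.mean((y - mu0) ** 2)  # MLE of sigma^2 with mu fixed at mu0

LR = -2 * (log_lik(y, mu0, s2_null) - log_lik(y, mu_hat, s2_hat))

# Chi-square(1) upper tail: if Z ~ N(0,1) then Z^2 ~ chi^2(1),
# so P(chi^2(1) > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(LR / 2))
```

Because the restricted model can never fit better than the unrestricted one, LR is always nonnegative; large values indicate that the data are implausible under the null.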
What are the formulas?
This page is still under construction, formulas will be added later. Pull requests are welcome!
How do I cite this page?
Cite this page according to your favorite style guide. The references below are automatically generated and contain the correct information.
Magnusson, K. (2020). Understanding Maximum Likelihood: An interactive visualization (Version 0.1.2) [Web App]. R Psychologist. https://rpsychologist.com/likelihood/
I found a bug/error/typo or want to make a suggestion!
I'm gonna ask a large number of students to visit this site. Will it crash your server?
No, it will be fine. The app runs in your browser so the server only needs to serve the files.
Can I include this visualization in my book/article/etc?
Yes, go ahead! The design of the visualizations on this page is dedicated to the public domain, which means "you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission" (see Creative Commons' CC0 license). Although attribution is not required, it is always appreciated!
There are many ways to contribute to free and open software. If you like my work and want to support it you can:
Pull requests are also welcome, or you can contribute by suggesting new features, adding useful references, or helping fix typos. Just open an issue on GitHub.
Statistical Power and Significance Testing
An interactive version of the traditional Type I and II error illustration.
Equivalence and Non-Inferiority Testing
Explore how superiority, non-inferiority, and equivalence testing relate to a confidence interval
Explore the expected distribution of p-values under varying alternative hypotheses.