
Understanding Maximum Likelihood

An Interactive Visualization

Created by Kristoffer Magnusson

The maximum likelihood method is used to fit many models in statistics. In this post I will present some interactive visualizations to try to explain maximum likelihood estimation and some common hypothesis tests (the likelihood ratio test, Wald test, and score test).

We will use a simple model with only two unknown parameters: the mean and variance. Our primary focus will be on the mean and we'll treat the variance as a nuisance parameter.

Likelihood Calculation

Before we do any calculations, we need some data. So, here are 10 random observations from a normal distribution with unknown mean and variance.

Y = [1.0, 2.0]

Now we need to find the combination of parameter values that maximizes the likelihood of observing this data. Try moving the sliders around.

Mean (μ)
SD (σ)

We can calculate the joint likelihood by multiplying the densities for all observations. However, often we calculate the log-likelihood instead, which is

$$\ell(\mu, \sigma^2) = \sum_i^n \ln f_y(y_i) = -34.4 + (-33.6) = -68.1$$

The parameter values that give the largest log-likelihood are the maximum likelihood estimates (MLEs).
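As a rough sketch, the log-likelihood calculation above can be written in a few lines of Python (the data and parameter values here are placeholders, not the observations from the interactive plot):

```python
import math

def normal_loglik(y, mu, sigma):
    """Sum of log normal densities: ell(mu, sigma^2) = sum_i ln f(y_i)."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma**2) - (yi - mu)**2 / (2 * sigma**2)
        for yi in y
    )

# Changing mu and sigma changes the log-likelihood, just like the sliders
print(normal_loglik([1.0, 2.0], mu=1.5, sigma=1.0))  # about -2.09
```

Each observation contributes one log-density term, so the sum plays the same role as the product of densities on the likelihood scale.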

Finding the Maximum Likelihood Estimates

If we repeat the above calculation for a range of parameter values, we get the plots below. (The function could also be plotted as a three-dimensional hill.) We can find the top of each curve by using the partial derivatives with respect to the mean and variance, which together are generally called the score function (U). In this case we can solve the score equation analytically (i.e., set it to zero and solve for the mean and variance). We can also solve it by brute force, simply by moving the sliders around until both partial derivatives are zero (hint: find the MLE for the mean first).
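For the normal model the score equations have a well-known closed-form solution: the sample mean, and the sample variance with a divide-by-n denominator. A minimal sketch:

```python
def mle_normal(y):
    """Closed-form MLEs obtained by setting the score function to zero."""
    n = len(y)
    mu_hat = sum(y) / n
    # The ML variance estimate divides by n (not n - 1), so it is biased downward
    sigma2_hat = sum((yi - mu_hat)**2 for yi in y) / n
    return mu_hat, sigma2_hat

mu_hat, sigma2_hat = mle_normal([1.0, 2.0, 3.0])  # 2.0, 0.666...
```

Solving for the mean first, as the hint suggests, works because the MLE of the mean does not depend on the variance.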




After we've found the MLEs we usually want to make some inferences, so let's focus on three common hypothesis tests. Use the sliders below to change the null hypothesis and the sample size.


Sample Size (n)
Null (μ0)

The score function evaluated at the null is,

$$U(\mu_0, \hat\sigma_0^2) = \frac{\partial}{\partial \mu_0}\ell(\mu_0, \hat\sigma_0^2)$$

The observed Fisher information is the negative of the second derivative. This is related to the curvature of the likelihood function -- try increasing the sample size and note that the peak gets narrower around the MLE and that the information increases. The inverse of I is also the variance of the MLE.

$$I(\mu_0, \hat\sigma_0^2) = -\frac{\partial^2}{\partial \mu_0^2}\ell(\mu_0, \hat\sigma_0^2)$$
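For the normal model these derivatives have simple closed forms: the score is U(μ₀) = Σᵢ(yᵢ − μ₀)/σ² and the observed information is I(μ₀) = n/σ². A sketch, treating the variance as a plug-in value:

```python
def score_mean(y, mu0, sigma2):
    """U(mu0): first derivative of the log-likelihood w.r.t. the mean."""
    return sum(yi - mu0 for yi in y) / sigma2

def observed_info_mean(y, sigma2):
    """I(mu0): negative second derivative; grows linearly with n."""
    return len(y) / sigma2
```

The score is exactly zero at the MLE of the mean, and since I grows with n, its inverse (the variance of the MLE) shrinks as the sample size increases, which is why the peak gets narrower.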

Hypothesis Tests

We have the following null and alternative hypotheses,

$$H_0: \mu = 80 \quad \text{versus} \quad H_1: \mu \ne 80$$

The likelihood ratio test compares the likelihoods of two models; in this example, the likelihood evaluated at the MLE versus at the null. This is illustrated in the plot by the vertical distance between the two horizontal lines. If we multiply the difference in log-likelihoods by -2 we get the statistic,

$$\text{LR} = -2[\ell(\mu_{0}, \hat\sigma^2_{0}) - \ell(\hat\mu, \hat\sigma^2)]$$

Asymptotically, LR follows a $\chi^2$ distribution with 1 degree of freedom, from which we can calculate a p value.

Note: The figure is simplified and does not account for the fact that each likelihood is based on a different variance estimate.
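Putting the pieces together, here is a sketch of the likelihood ratio test for the mean. Unlike the simplified figure, it uses a separate variance estimate under each hypothesis, and it approximates the χ²₁ p value with the standard-library `erfc` function rather than a stats package:

```python
import math

def loglik(y, mu, sigma2):
    """Normal log-likelihood with variance parameterization."""
    n = len(y)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((yi - mu)**2 for yi in y) / (2 * sigma2))

def lr_test(y, mu0):
    """Likelihood ratio test of H0: mu = mu0 for a normal sample."""
    n = len(y)
    mu_hat = sum(y) / n
    sigma2_hat = sum((yi - mu_hat)**2 for yi in y) / n  # unrestricted MLE
    sigma2_0 = sum((yi - mu0)**2 for yi in y) / n       # variance MLE under H0
    lr = -2 * (loglik(y, mu0, sigma2_0) - loglik(y, mu_hat, sigma2_hat))
    # Survival function of chi-square with 1 df: P(X > lr) = erfc(sqrt(lr / 2))
    p = math.erfc(math.sqrt(max(lr, 0.0) / 2))
    return lr, p
```

When the null equals the MLE the statistic is 0 and p = 1; the further the null moves from the MLE, the larger LR becomes and the smaller the p value.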


This page is still under construction; more formulas will be added later.

Cite this page according to your favorite style guide. The page is created by Kristoffer Magnusson, and you can find the current version number and the date of the last update in the footer.

Please report errors or suggestions by opening an issue on GitHub.

No, it will be fine. The app runs in your browser so the server only needs to serve the files.

Yes, go ahead! The design of the visualizations on this page is dedicated to the public domain, which means "you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission" (see Creative Commons' CC0 license). Although attribution is not required, it is always appreciated!


There are many ways to contribute to free and open software. If you like my work and want to support it you can:

Financial support is not the only way to contribute. Other ways to contribute are to suggest new features, contribute useful references, or help fix typos. Just open an issue on GitHub.


I've created some posters inspired by my interactive visualizations. You can find them on my Etsy shop.
