Likelihood function


In statistics, the likelihood function measures the goodness of fit of a statistical model to a sample of data for given values of the unknown parameters. It is formed from the joint probability distribution of the sample, but viewed and used as a function of the parameters only, thus treating the random variables as fixed at the observed values.
The likelihood function describes a hypersurface whose peak, if it exists, represents the combination of model parameter values that maximizes the probability of drawing the sample obtained. The procedure for obtaining these arguments of the maximum of the likelihood function is known as maximum likelihood estimation, which for computational convenience is usually done using the natural logarithm of the likelihood, known as the log-likelihood function. Additionally, the shape and curvature of the likelihood surface represent information about the stability of the estimates, which is why the likelihood function is often plotted as part of a statistical analysis.
The case for using likelihood was first made by R. A. Fisher, who believed it to be a self-contained framework for statistical modelling and inference. Later, Barnard and Birnbaum led a school of thought that advocated the likelihood principle, postulating that all relevant information for inference is contained in the likelihood function. But even in frequentist and Bayesian statistics, the likelihood function plays a fundamental role.

Definition

The likelihood function is usually defined differently for discrete and continuous probability distributions. A general definition is also possible, as discussed below.

Discrete probability distribution

Let X be a discrete random variable with probability mass function p depending on a parameter θ. Then the function
$$\mathcal{L}(\theta \mid x) = p_\theta(x) = P_\theta(X = x),$$
considered as a function of θ, is the likelihood function, given the outcome x of the random variable X. Sometimes the probability of "the value x of X for the parameter value θ" is written as P(X = x | θ) or P(X = x; θ). The likelihood L(θ | x) should not be confused with P(θ | x); the likelihood is equal to the probability that a particular outcome x is observed when the true value of the parameter is θ, and hence it is equal to a probability density over the outcome x, not over the parameter θ.

Example

Consider a simple statistical model of a coin flip: a single parameter p_H that expresses the "fairness" of the coin. The parameter is the probability that a coin lands heads up ("H") when tossed. p_H can take on any value within the range 0.0 to 1.0. For a perfectly fair coin, p_H = 0.5.
Imagine flipping a fair coin twice, and observing the following data: two heads in two tosses ("HH"). Assuming that each successive coin flip is i.i.d., then the probability of observing HH is
$$P(\text{HH} \mid p_\text{H} = 0.5) = 0.5^2 = 0.25.$$
Hence, given the observed data HH, the likelihood that the model parameter p_H equals 0.5 is 0.25. Mathematically, this is written as
$$\mathcal{L}(p_\text{H} = 0.5 \mid \text{HH}) = 0.25.$$
This is not the same as saying that the probability that p_H = 0.5, given the observation HH, is 0.25.
Suppose that the coin is not a fair coin, but instead that p_H = 0.3. Then the probability of getting two heads is
$$P(\text{HH} \mid p_\text{H} = 0.3) = 0.3^2 = 0.09.$$
Hence
$$\mathcal{L}(p_\text{H} = 0.3 \mid \text{HH}) = 0.09.$$
More generally, for each value of p_H, we can calculate the corresponding likelihood. The result of such calculations is displayed in Figure 1.
In Figure 1, the integral of the likelihood over the interval [0, 1] is 1/3. That illustrates an important aspect of likelihoods: likelihoods do not have to integrate (or sum) to 1, unlike probabilities.
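As an illustration, the following Python sketch evaluates this likelihood curve on a grid of candidate values for p_H; the grid size, and the use of a simple average to approximate the integral, are illustrative choices rather than part of the example above.

```python
import numpy as np

# The likelihood of p_H given the observation HH is L(p_H | HH) = p_H**2.
# The grid of candidate values is an illustrative choice.
p_H = np.linspace(0.0, 1.0, 101)           # candidate parameter values
likelihood = p_H ** 2                      # probability of HH for each p_H

print(likelihood[np.isclose(p_H, 0.5)])    # [0.25], i.e. L(p_H = 0.5 | HH)
print(likelihood[np.isclose(p_H, 0.3)])    # [0.09], i.e. L(p_H = 0.3 | HH)

# The average over the unit interval approximates the integral of the likelihood,
# which is 1/3: likelihoods need not integrate to 1 over the parameter.
print(likelihood.mean())                   # ~0.335 on this grid
```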

Continuous probability distribution

Let X be a random variable following an absolutely continuous probability distribution with density function f (a function of x) depending on a parameter θ. Then the function
$$\mathcal{L}(\theta \mid x) = f_\theta(x),$$
considered as a function of θ, is the likelihood function, given the outcome x of X. Sometimes the density function for "the value x of X for the parameter value θ" is written as f(x | θ) or f(x; θ). The likelihood L(θ | x) should not be confused with P(θ | x); the likelihood is equal to the probability density at a particular outcome x when the true value of the parameter is θ, and hence it is equal to a probability density over the outcome x, not over the parameter θ.

In general

In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a common dominating measure. The likelihood function is that density interpreted as a function of the parameter, rather than the possible outcomes. This provides a likelihood function for any statistical model with all distributions, whether discrete, absolutely continuous, a mixture or something else.
The discussion above of likelihood with discrete probabilities is a special case of this using the counting measure, which makes the probability of any single outcome equal to the probability density for that outcome.
Given no event, the probability and thus likelihood is 1; any non-trivial event will have a lower likelihood.

Likelihood function of a parameterized model

Among many applications, we consider here one of broad theoretical and practical importance. Given a parameterized family of probability density functions (or probability mass functions in the discrete case)
$$x \mapsto f(x \mid \theta),$$
where θ is the parameter, the likelihood function is
$$\theta \mapsto f(x \mid \theta),$$
written
$$\mathcal{L}(\theta \mid x) = f(x \mid \theta),$$
where x is the observed outcome of an experiment. In other words, when f(x | θ) is viewed as a function of x with θ fixed, it is a probability density function, and when viewed as a function of θ with x fixed, it is a likelihood function.
This is not the same as the probability that those parameters are the right ones, given the observed sample. Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous consequences. See prosecutor's fallacy for an example of this.
From a geometric standpoint, if we consider f(x, θ) as a function of two variables, then the family of probability distributions can be viewed as a family of curves parallel to the x-axis, while the family of likelihood functions is the family of orthogonal curves parallel to the θ-axis.

Likelihoods for continuous distributions

The use of the probability density in specifying the likelihood function above is justified as follows. Given an observation x_j, the likelihood for the interval [x_j, x_j + h], where h > 0 is a constant, is given by L(θ | x ∈ [x_j, x_j + h]). Observe that
$$\operatorname*{arg\,max}_\theta \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \operatorname*{arg\,max}_\theta \frac{1}{h} \mathcal{L}(\theta \mid x \in [x_j, x_j + h]),$$
since h is positive and constant. Because
$$\operatorname*{arg\,max}_\theta \frac{1}{h} \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \operatorname*{arg\,max}_\theta \frac{1}{h} \Pr(x_j \le x \le x_j + h \mid \theta) = \operatorname*{arg\,max}_\theta \frac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta) \, \mathrm{d}x,$$
where f(x | θ) is the probability density function, it follows that
$$\operatorname*{arg\,max}_\theta \mathcal{L}(\theta \mid x \in [x_j, x_j + h]) = \operatorname*{arg\,max}_\theta \frac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta) \, \mathrm{d}x.$$
The first fundamental theorem of calculus and l'Hôpital's rule together provide that
$$\lim_{h \to 0^{+}} \frac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta) \, \mathrm{d}x = f(x_j \mid \theta).$$
Then
$$\operatorname*{arg\,max}_\theta \mathcal{L}(\theta \mid x_j) = \operatorname*{arg\,max}_\theta \left[ \lim_{h \to 0^{+}} \frac{1}{h} \int_{x_j}^{x_j + h} f(x \mid \theta) \, \mathrm{d}x \right] = \operatorname*{arg\,max}_\theta f(x_j \mid \theta).$$
Therefore,
$$\operatorname*{arg\,max}_\theta \mathcal{L}(\theta \mid x_j) = \operatorname*{arg\,max}_\theta f(x_j \mid \theta),$$
and so maximizing the probability density at x_j amounts to maximizing the likelihood of the specific observation x_j.
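The limiting argument can be checked numerically. The sketch below uses a normal density as a stand-in for f(x | θ); the distribution, its parameters, and the evaluation point are assumptions made purely for illustration. It shows (1/h)·Pr(x ≤ X ≤ x + h | θ) approaching the density at x as h shrinks.

```python
from scipy.stats import norm

# Numerical check of the limiting argument above, with a normal density standing
# in for f(x | theta): (1/h) * Pr(x <= X <= x + h | theta) approaches the density
# at x as h shrinks. The distribution, parameters, and point x are illustrative.
theta_mean, theta_sd = 2.0, 1.5
x = 1.0

for h in (1.0, 0.1, 0.01, 0.001):
    interval_prob = norm.cdf(x + h, theta_mean, theta_sd) - norm.cdf(x, theta_mean, theta_sd)
    print(h, interval_prob / h)

print("density at x:", norm.pdf(x, theta_mean, theta_sd))  # the limiting value
```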

Likelihoods for mixed continuous–discrete distributions

The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses p_k(θ) and a density f(x | θ), where the sum of all the p's added to the integral of f is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above. For an observation from the discrete component, the likelihood function is simply
$$\mathcal{L}(\theta \mid x) = p_k(\theta),$$
where k is the index of the discrete probability mass corresponding to observation x, because maximizing the probability mass at x amounts to maximizing the likelihood of the specific observation.
The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation, but not with the parameter.

Regularity conditions

In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions are assumed in various proofs involving likelihood functions, and need to be verified in each particular application. For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance. By the extreme value theorem, a continuous likelihood function on a compact parameter space suffices for the existence of a maximum likelihood estimator. While the continuity assumption is usually met, the compactness assumption about the parameter space is often not, as the bounds of the true parameter values are unknown. In that case, concavity of the likelihood function plays a key role.
More specifically, if the likelihood function is twice continuously differentiable on the k-dimensional parameter space Θ, assumed to be an open connected subset of R^k, there exists a unique maximum θ̂ ∈ Θ if the matrix of second partial derivatives
$$\mathbf{H}(\theta) \equiv \left[\frac{\partial^2 L}{\partial \theta_i \, \partial \theta_j}\right]_{i,j=1,\dots,k}$$
is negative definite at every θ ∈ Θ at which the gradient of L vanishes, and if the likelihood function approaches a constant on the boundary of the parameter space, i.e.
$$\lim_{\theta \to \partial\Theta} L(\theta) = 0,$$
which may include the points at infinity if Θ is unbounded.
Mäkeläinen et al. prove this result using Morse theory while informally appealing to a mountain pass property. Mascarenhas restates their proof using the mountain pass theorem.
In the proofs of consistency and asymptotic normality of the maximum likelihood estimator, additional assumptions are made about the probability densities that form the basis of a particular likelihood function. These conditions were first established by Chanda. In particular, for almost all x, and for all θ in the parameter space, the partial derivatives
$$\frac{\partial \log f}{\partial \theta_r}, \quad \frac{\partial^2 \log f}{\partial \theta_r \, \partial \theta_s}, \quad \frac{\partial^3 \log f}{\partial \theta_r \, \partial \theta_s \, \partial \theta_t}$$
exist for all r, s, t = 1, 2, …, k, in order to ensure the existence of a Taylor expansion. Second, for almost all x and for every θ it must be that
$$\left| \frac{\partial f}{\partial \theta_r} \right| < F_r(x), \quad \left| \frac{\partial^2 f}{\partial \theta_r \, \partial \theta_s} \right| < F_{rs}(x), \quad \left| \frac{\partial^3 \log f}{\partial \theta_r \, \partial \theta_s \, \partial \theta_t} \right| < H_{rst}(x),$$
where the bounding functions are integrable with finite integrals. This boundedness of the derivatives is needed to allow for differentiation under the integral sign. And lastly, it is assumed that the information matrix,
$$\mathbf{I}(\theta) = \int_{-\infty}^{\infty} \frac{\partial \log f}{\partial \theta_r} \, \frac{\partial \log f}{\partial \theta_s} \, f \, \mathrm{d}z,$$
is positive definite and its determinant is finite. This ensures that the score has a finite variance.
The above conditions are sufficient, but not necessary. That is, a model that does not meet these regularity conditions may or may not have a maximum likelihood estimator of the properties mentioned above. Further, in case of non-independently or non-identically distributed observations additional properties may need to be assumed.

Likelihood ratio and relative likelihood

Likelihood ratio

A likelihood ratio is the ratio of any two specified likelihoods, frequently written as:
$$\Lambda(\theta_1 : \theta_2 \mid x) = \frac{\mathcal{L}(\theta_1 \mid x)}{\mathcal{L}(\theta_2 \mid x)}.$$
The likelihood ratio is central to likelihoodist statistics: the law of likelihood states that the degree to which data support one parameter value versus another is measured by the likelihood ratio.
In frequentist inference, the likelihood ratio is the basis for a test statistic, the so-called likelihood-ratio test. By the Neyman–Pearson lemma, this is the most powerful test for comparing two simple hypotheses at a given significance level. Numerous other tests can be viewed as likelihood-ratio tests or approximations thereof. The asymptotic distribution of the log-likelihood ratio, considered as a test statistic, is given by Wilks' theorem.
The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. Stated in terms of odds, Bayes' rule says that the posterior odds of two alternatives, A₁ and A₂, given an event B, are the prior odds times the likelihood ratio. As an equation:
$$O(A_1 : A_2 \mid B) = O(A_1 : A_2) \cdot \Lambda(A_1 : A_2 \mid B).$$
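A minimal numeric sketch of Bayes' rule in odds form, reusing the coin-flip example with two simple hypotheses about p_H; the 1:1 prior odds are an illustrative assumption.

```python
# Bayes' rule in odds form, reusing the coin-flip example with two simple
# hypotheses about p_H. The 1:1 prior odds are an illustrative assumption.
def likelihood_hh(p_heads):
    """Probability of observing HH given the heads probability p_heads."""
    return p_heads ** 2

prior_odds = 1.0                                            # O(A1 : A2), assumed
likelihood_ratio = likelihood_hh(0.5) / likelihood_hh(0.3)  # 0.25 / 0.09 ~= 2.78
posterior_odds = prior_odds * likelihood_ratio              # O(A1 : A2 | HH)
print(likelihood_ratio, posterior_odds)
```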
The likelihood ratio is not directly used in AIC-based statistics. Instead, what is used is the relative likelihood of models.

Distinction to odds ratio

The likelihood ratio of two models, given the same event, may be contrasted with the odds of two events, given the same model. In terms of a parametrized probability mass function p_θ(x), the likelihood ratio of two values of the parameter, θ₁ and θ₂, given an outcome x, is:
$$\Lambda(\theta_1 : \theta_2 \mid x) = \frac{p_{\theta_1}(x)}{p_{\theta_2}(x)},$$
while the odds of two outcomes, x₁ and x₂, given a value of the parameter θ, are:
$$O(x_1 : x_2 \mid \theta) = \frac{p_\theta(x_1)}{p_\theta(x_2)}.$$
This highlights the difference between likelihood and odds: in likelihood, one compares models, holding data fixed; while in odds, one compares events, holding the model fixed.
The odds ratio is a ratio of two conditional odds. However, the odds ratio can also be interpreted as a ratio of two likelihood ratios, if one considers one of the events to be more easily observable than the other. See diagnostic odds ratio, where the result of a diagnostic test is more easily observable than the presence or absence of an underlying medical condition.

Relative likelihood function

Since the actual value of the likelihood function depends on the sample, it is often convenient to work with a standardized measure. Suppose that the maximum likelihood estimate for the parameter θ is θ̂. Relative plausibilities of other θ values may be found by comparing the likelihoods of those other values with the likelihood of θ̂. The relative likelihood of θ is defined to be
$$R(\theta) = \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat\theta \mid x)}.$$
Thus, the relative likelihood is the likelihood ratio with the fixed denominator L(θ̂ | x). This corresponds to standardizing the likelihood to have a maximum of 1.

Likelihood region

A likelihood region is the set of all values of θ whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a p% likelihood region for θ is defined to be
$$\left\{ \theta : R(\theta) \ge \frac{p}{100} \right\}.$$
If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval.
Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihoodist statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability or posterior probability.
Given a model, likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions a 14.65% likelihood interval for θ will be the same as a 95% confidence interval. In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods, and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom between the two models.
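A small sketch of this correspondence, for binomial data (7 heads in 10 tosses, an assumed example): the set of θ with relative likelihood at least 0.1465 is approximately a 95% confidence interval, since −2 log(0.1465) ≈ 3.84, the 95% point of the chi-squared distribution with one degree of freedom.

```python
import numpy as np

# For binomial data (7 heads in 10 tosses, an assumed example), the set
# {theta : R(theta) >= 0.1465} approximates a 95% confidence interval, since
# -2 * log(0.1465) ~= 3.84, the 95% point of the chi-squared(1) distribution.
heads, tosses = 7, 10
theta = np.linspace(0.001, 0.999, 999)
log_lik = heads * np.log(theta) + (tosses - heads) * np.log(1 - theta)
relative = np.exp(log_lik - log_lik.max())      # relative likelihood R(theta)
inside = theta[relative >= 0.1465]              # the 14.65% likelihood region
print(inside.min(), inside.max())               # approximately (0.39, 0.92)
```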

Likelihoods that eliminate nuisance parameters

In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter of interest: the main approaches are profile, conditional, and marginal likelihoods. These approaches are also useful when a high-dimensional likelihood surface needs to be reduced to one or two parameters of interest in order to allow a graph.

Profile likelihood

It is possible to reduce the dimensions by concentrating the likelihood function for a subset of parameters by expressing the nuisance parameters as functions of the parameters of interest and replacing them in the likelihood function. In general, for a likelihood function depending on the parameter vector θ that can be partitioned into θ = (θ₁ : θ₂), and where a correspondence θ̂₂ = θ̂₂(θ₁) can be determined explicitly, concentration reduces the computational burden of the original maximization problem.
For instance, in a linear regression with normally distributed errors, y = Xβ + u, the coefficient vector could be partitioned into β = (β₁ : β₂), with a corresponding partition of the design matrix X = [X₁ : X₂]. Maximizing with respect to β₂ yields an optimal value function β₂(β₁). Using this result, the maximum likelihood estimator for β₁ can then be derived as
$$\hat\beta_1 = \left( X_1^{\mathsf{T}} (I - P_2) X_1 \right)^{-1} X_1^{\mathsf{T}} (I - P_2) y,$$
where P₂ = X₂(X₂ᵀX₂)⁻¹X₂ᵀ is the projection matrix onto the column space of X₂. This result is known as the Frisch–Waugh–Lovell theorem.
Since graphically the procedure of concentration is equivalent to slicing the likelihood surface along the ridge of values of the nuisance parameter θ₂ that maximizes the likelihood function, creating an isometric profile of the likelihood function for a given θ₁, the result of this procedure is also known as the profile likelihood. In addition to being graphed, the profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood.
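As a sketch of the idea (not of any particular software's profiling routine), the following code profiles out the variance of a normal model, which has a closed-form optimum for each fixed mean; the simulated data are purely illustrative.

```python
import numpy as np

# Profile log-likelihood for a normal model: mean mu is the parameter of interest,
# variance sigma^2 is the nuisance. For each fixed mu the nuisance is maximized
# out in closed form: sigma_hat^2(mu) = mean((x - mu)^2). Data are simulated.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=50)
n = len(x)

def profile_loglik(mu):
    sigma2_hat = np.mean((x - mu) ** 2)          # optimal nuisance value for this mu
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)

mu_grid = np.linspace(x.mean() - 2, x.mean() + 2, 201)
profile = np.array([profile_loglik(m) for m in mu_grid])
print(mu_grid[profile.argmax()])                 # maximized at the sample mean
```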

Conditional likelihood

Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.
One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test.
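For illustration, SciPy's fisher_exact carries out this conditional test on a 2×2 table; the table entries below are made up.

```python
from scipy.stats import fisher_exact

# Conditioning on all four margins of a 2x2 table removes the nuisance parameters;
# SciPy's fisher_exact performs the resulting exact conditional test.
# The table entries are illustrative.
table = [[8, 2],
         [1, 5]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)
```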

Marginal likelihood

Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components.

Partial likelihood

A partial likelihood is an adaptation of the full likelihood such that only a part of the parameters occur in it. It is a key component of the proportional hazards model: using a restriction on the hazard function, the likelihood does not contain the shape of the hazard over time.

Products of likelihoods

The likelihood, given two or more independent events, is the product of the likelihoods of each of the individual events:
$$\Lambda(A \mid X_1 \land X_2) = \Lambda(A \mid X_1) \cdot \Lambda(A \mid X_2).$$
This follows from the definition of independence in probability: the probability of two independent events both happening, given a model, is the product of their probabilities.
This is particularly important when the events are from independent and identically distributed random variables, such as independent observations or sampling with replacement. In such a situation, the likelihood function factors into a product of individual likelihood functions.
The empty product has value 1, which corresponds to the likelihood, given no event, being 1: before any data, the likelihood is always 1. This is similar to a uniform prior in Bayesian statistics, but in likelihoodist statistics this is not an improper prior because likelihoods are not integrated.
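A short sketch of the product rule for i.i.d. Bernoulli observations (the data and parameter value are illustrative); the empty product at the end corresponds to the no-data case.

```python
import numpy as np

# Product rule for likelihoods with i.i.d. Bernoulli observations: the joint
# likelihood of theta is the product of the per-observation likelihoods.
# Data and theta value are illustrative.
observations = np.array([1, 0, 1, 1, 0])         # e.g. heads = 1, tails = 0
theta = 0.6

per_obs_likelihood = theta ** observations * (1 - theta) ** (1 - observations)
print(np.prod(per_obs_likelihood))               # joint likelihood of theta

# The empty product is 1: with no data, the likelihood is 1 for every theta.
print(np.prod(np.array([])))                     # 1.0
```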

Log-likelihood

The log-likelihood function is a logarithmic transformation of the likelihood function, often denoted by a lowercase l (or ℓ), to contrast with the uppercase L for the likelihood. Because logarithms are strictly increasing functions, maximizing the likelihood is equivalent to maximizing the log-likelihood. But for practical purposes it is more convenient to work with the log-likelihood function in maximum likelihood estimation, in particular since most common probability distributions (notably the exponential family) are only logarithmically concave, and concavity of the objective function plays a key role in the maximization.
Given the independence of each event, the overall log-likelihood of an intersection of events equals the sum of the log-likelihoods of the individual events. This is analogous to the fact that the overall log-probability is the sum of the log-probabilities of the individual events. In addition to the mathematical convenience of this, the adding process of log-likelihood has an intuitive interpretation, often expressed as "support" from the data. When the parameters are estimated using the log-likelihood for maximum likelihood estimation, each data point is used by being added to the total log-likelihood. As the data can be viewed as evidence that supports the estimated parameters, this process can be interpreted as "support from independent evidence adds", and the log-likelihood is the "weight of evidence". Interpreting negative log-probability as information content or surprisal, the support (log-likelihood) of a model, given an event, is the negative of the surprisal of the event, given the model: a model is supported by an event to the extent that the event is unsurprising, given the model.
A logarithm of a likelihood ratio is equal to the difference of the log-likelihoods:
$$\log \frac{\mathcal{L}(A)}{\mathcal{L}(B)} = \log \mathcal{L}(A) - \log \mathcal{L}(B) = \ell(A) - \ell(B).$$
Just as the likelihood, given no event, is 1, the log-likelihood, given no event, is 0, which corresponds to the value of the empty sum: without any data, there is no support for any model.

Likelihood equations

If the log-likelihood function is smooth, its gradient with respect to the parameter, known as the score and written s_n(θ), exists and allows for the application of differential calculus. The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than of the likelihood of independent events.
The equations defined by the stationary points of the score function serve as estimating equations for the maximum likelihood estimator:
$$s_n(\theta) = \mathbf{0}.$$
In that sense, the maximum likelihood estimator is implicitly defined by the value at 0 of the inverse function s_n⁻¹: E^d → Θ, where E^d is the d-dimensional Euclidean space and Θ is the parameter space. Using the inverse function theorem, it can be shown that s_n⁻¹ is well-defined in an open neighborhood about 0 with probability going to one, and θ̂_n = s_n⁻¹(0) is a consistent estimate of θ. As a consequence there exists a sequence {θ̂_n} such that s_n(θ̂_n) = 0 asymptotically almost surely, and θ̂_n converges in probability to the true parameter value. A similar result can be established using Rolle's theorem.
The second derivative evaluated at θ̂, known as the Fisher information, determines the curvature of the likelihood surface, and thus indicates the precision of the estimate.
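As a sketch of solving a likelihood equation numerically, the code below uses an exponential model with a known closed-form answer, so the root of the score can be checked; the simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Solving the likelihood equation s_n(theta) = 0 numerically for an exponential
# model with rate theta: the log-likelihood of an i.i.d. sample is
# n*log(theta) - theta*sum(x), so the score is n/theta - sum(x).
# The simulated data (true rate 2.5) are illustrative.
rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 2.5, size=200)
n, s = len(x), x.sum()

def score(theta):
    return n / theta - s                          # gradient of the log-likelihood

theta_hat = brentq(score, 1e-6, 100.0)            # root of the score function
print(theta_hat, 1 / x.mean())                    # matches the closed-form MLE

# Curvature at the maximum (observed Fisher information) indicates precision.
print(n / theta_hat ** 2)
```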

Exponential families

The log-likelihood is also particularly useful for exponential families of distributions, which include many of the common parametric probability distributions. The probability distribution function (and thus likelihood function) for exponential families contains products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function.
An exponential family is one whose probability density function is of the form (for some functions h, η, T, and A, writing ⟨·, ·⟩ for the inner product):
$$p(x \mid \boldsymbol\theta) = h(x) \exp\!\big( \langle \boldsymbol\eta(\boldsymbol\theta), \mathbf{T}(x) \rangle - A(\boldsymbol\theta) \big).$$
Each of these terms has an interpretation, but simply switching from probability to likelihood and taking logarithms yields the sum:
$$\ell(\boldsymbol\theta \mid x) = \langle \boldsymbol\eta(\boldsymbol\theta), \mathbf{T}(x) \rangle - A(\boldsymbol\theta) + \log h(x).$$
The η(θ) and h(x) each correspond to a change of coordinates, so in these coordinates, the log-likelihood of an exponential family is given by the simple formula:
$$\ell(\boldsymbol\eta \mid x) = \langle \boldsymbol\eta, \mathbf{T}(x) \rangle - A(\boldsymbol\eta).$$
In words, the log-likelihood of an exponential family is the inner product of the natural parameter η and the sufficient statistic T(x), minus the normalization factor (log-partition function) A(η). Thus for example the maximum likelihood estimate can be computed by taking derivatives of the sufficient statistic T and the log-partition function A.
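A concrete sketch using the Poisson distribution written in exponential-family form (an assumed example): the likelihood equation equates the derivative of the log-partition function with the observed average of the sufficient statistic.

```python
import numpy as np

# The Poisson distribution in exponential-family form (an assumed example):
# natural parameter eta = log(lambda), sufficient statistic T(x) = x, and
# log-partition A(eta) = exp(eta). The likelihood equation dA/deta = mean(T(x))
# gives exp(eta_hat) = x-bar, i.e. lambda_hat equals the sample mean.
rng = np.random.default_rng(2)
x = rng.poisson(lam=4.0, size=500)                # simulated data, illustrative

def log_likelihood(eta):
    # sum_i [eta * T(x_i) - A(eta)], dropping the log h(x_i) = -log(x_i!) terms
    return eta * x.sum() - len(x) * np.exp(eta)

eta_hat = np.log(x.mean())                        # solves the likelihood equation
grid = np.linspace(eta_hat - 1.0, eta_hat + 1.0, 201)
assert log_likelihood(eta_hat) >= max(log_likelihood(e) for e in grid)
print(np.exp(eta_hat))                            # lambda_hat = sample mean
```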

Example: the gamma distribution

The gamma distribution is an exponential family with two parameters, α and β. The likelihood function is
$$\mathcal{L}(\alpha, \beta \mid x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x}.$$
Finding the maximum likelihood estimate of β for a single observed value x looks rather daunting. Its logarithm is much simpler to work with:
$$\log \mathcal{L}(\alpha, \beta \mid x) = \alpha \log \beta - \log \Gamma(\alpha) + (\alpha - 1) \log x - \beta x.$$
To maximize the log-likelihood, we first take the partial derivative with respect to β:
$$\frac{\partial \log \mathcal{L}(\alpha, \beta \mid x)}{\partial \beta} = \frac{\alpha}{\beta} - x.$$
If there are a number of independent observations x₁, …, x_n, then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be a sum of derivatives of each individual log-likelihood:
$$\frac{\partial \log \mathcal{L}(\alpha, \beta \mid x_1, \ldots, x_n)}{\partial \beta} = \sum_{i=1}^n \frac{\partial \log \mathcal{L}(\alpha, \beta \mid x_i)}{\partial \beta} = \frac{n \alpha}{\beta} - \sum_{i=1}^n x_i.$$
To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved for β:
$$\hat\beta = \frac{\alpha}{\bar{x}},$$
where β̂ denotes the maximum-likelihood estimate, and \(\bar{x} = \tfrac{1}{n} \sum_{i=1}^n x_i\) is the sample mean of the observations.
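The closed-form result can be checked numerically; in the sketch below the shape α is treated as known and the joint log-likelihood is maximized over β on simulated data (both choices are illustrative).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

# Numerical check of the closed-form result above: with the shape alpha treated
# as known, the beta maximizing the joint log-likelihood equals alpha / x-bar.
# The alpha value and simulated data (true rate 2.0) are illustrative.
alpha = 3.0
rng = np.random.default_rng(3)
x = rng.gamma(shape=alpha, scale=1 / 2.0, size=100)

def neg_log_lik(beta):
    n = len(x)
    return -(n * alpha * np.log(beta) - n * gammaln(alpha)
             + (alpha - 1) * np.log(x).sum() - beta * x.sum())

result = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
print(result.x, alpha / x.mean())   # numerical maximizer vs. closed-form MLE
```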

Background and interpretation

Historical remarks

The term "likelihood" has been in use in English since at least late Middle English. Its formal use to refer to a specific function in mathematical statistics was proposed by Ronald Fisher, in two research papers published in 1921 and 1922. The 1921 paper introduced what is today called a "likelihood interval"; the 1922 paper introduced the term "method of maximum likelihood". Quoting Fisher:
The concept of likelihood should not be confused with probability as mentioned by Sir Ronald Fisher
Fisher's invention of statistical likelihood was in reaction against an earlier form of reasoning called inverse probability. His use of the term "likelihood" fixed the meaning of the term within mathematical statistics.
A. W. F. Edwards established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another. The support function is then the natural logarithm of the likelihood function. Both terms are used in phylogenetics, but were not adopted in a general treatment of the topic of statistical evidence.

Interpretations under different foundations

Among statisticians, there is no consensus about what the foundation of statistics should be. There are four main paradigms that have been proposed for the foundation: frequentism, Bayesianism, likelihoodism, and AIC-based. For each of the proposed foundations, the interpretation of likelihood is different. The four interpretations are described in the subsections below.

Frequentist interpretation

Bayesian interpretation

In Bayesian inference, although one can speak about the likelihood of any proposition or random variable given another random variable (for example, the likelihood of a parameter value or of a statistical model, given specified data or other evidence), the likelihood function remains the same entity, with the additional interpretations of (i) a conditional density of the data given the parameter and (ii) a measure or amount of information brought by the data about the parameter value or even the model. Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model has a large likelihood value for given data, and yet has a low probability, or vice versa. This is often the case in medical contexts. Following Bayes' rule, the likelihood, when seen as a conditional density, can be multiplied by the prior probability density of the parameter and then normalized, to give a posterior probability density. More generally, the likelihood of an unknown quantity X given another unknown quantity Y is proportional to the probability of Y given X.
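A minimal sketch of this prior-times-likelihood computation on a grid, for the coin-flip model with an assumed data set and a flat prior (both illustrative choices):

```python
import numpy as np

# Prior times likelihood, normalized on a grid, for the coin-flip model with an
# assumed data set (7 heads in 10 tosses) and a flat prior on p_H; both choices
# are purely illustrative.
p = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(p)                        # uniform prior density on (0, 1)
likelihood = p ** 7 * (1 - p) ** 3             # likelihood of p_H given the data
unnormalized = likelihood * prior
posterior = unnormalized / (unnormalized.sum() * (p[1] - p[0]))  # normalize to a density
print(p[posterior.argmax()])                   # posterior mode = 0.7 under the flat prior
```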

Likelihoodist interpretation

In frequentist statistics, the likelihood function is itself a statistic that summarizes a single sample from a population, whose calculated value depends on a choice of several parameters θ1... θp, where p is the count of parameters in some already-selected statistical model. The value of the likelihood serves as a figure of merit for the choice used for the parameters, and the parameter set with maximum likelihood is the best choice, given the data available.
The specific calculation of the likelihood is the probability that the observed sample would be assigned, assuming that the model chosen and the values of the several parameters θ give an accurate approximation of the frequency distribution of the population that the observed sample was drawn from. Heuristically, it makes sense that a good choice of parameters is one which renders the sample actually observed the maximum possible post-hoc probability of having happened. Wilks' theorem quantifies the heuristic rule by showing that the difference between the logarithm of the likelihood generated by the estimate's parameter values and the logarithm of the likelihood generated by the population's "true" parameter values is asymptotically χ² distributed.
Each independent sample's maximum likelihood estimate is a separate estimate of the "true" parameter set describing the population sampled. Successive estimates from many independent samples will cluster together with the population’s "true" set of parameter values hidden somewhere in their midst. The difference in the logarithms of the maximum likelihood and adjacent parameter sets’ likelihoods may be used to draw a confidence region on a plot whose co-ordinates are the parameters θ1... θp. The region surrounds the maximum-likelihood estimate, and all points within that region differ at most in log-likelihood by some fixed value. The χ² distribution given by Wilks' theorem converts the region's log-likelihood differences into the "confidence" that the population's "true" parameter set lies inside. The art of choosing the fixed log-likelihood difference is to make the confidence acceptably high while keeping the region acceptably small.
As more data are observed, instead of being used to make independent estimates, they can be combined with the previous samples to make a single combined sample, and that large sample may be used for a new maximum likelihood estimate. As the size of the combined sample increases, the size of the likelihood region with the same confidence shrinks. Eventually, either the size of the confidence region is very nearly a single point, or the entire population has been sampled; in both cases, the estimated parameter set is essentially the same as the population parameter set.

AIC-based interpretation

Under the AIC paradigm, likelihood is interpreted within the context of information theory.