The score is the gradient (the vector of partial derivatives) of \( \log \mathcal{L}(\theta; x) \), the natural logarithm of the likelihood function, with respect to an m-dimensional parameter vector \( \theta \):

\[ s(\theta; x) \equiv \frac{\partial \log \mathcal{L}(\theta; x)}{\partial \theta}. \]

This differentiation yields a row vector and indicates the sensitivity of the likelihood (its derivative normalized by its value). In older literature, "linear score" may refer to the score with respect to an infinitesimal translation of a given density. This convention arises from a time when the primary parameter of interest was the mean or median of a distribution. In this case, the likelihood of an observation is given by a density of the form \( \mathcal{L}(\theta; X) = f(X + \theta) \). The "linear score" is then defined as

\[ s_{\mathrm{linear}} = \frac{\partial}{\partial X} \log f(X). \]
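As an illustration of the definition, the sketch below evaluates the score of a normal log-likelihood with respect to the mean, comparing the closed-form expression \( s(\mu; x) = \sum_i (x_i - \mu)/\sigma^2 \) against a finite-difference approximation. The normal model, function names, and parameter values are assumptions made only for this example.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(mu, x, sigma=1.0):
    """Log-likelihood of i.i.d. normal observations x with known sigma."""
    return np.sum(norm.logpdf(x, loc=mu, scale=sigma))

def score(mu, x, sigma=1.0):
    """Closed-form score: derivative of the log-likelihood with respect to mu."""
    return np.sum((x - mu) / sigma**2)

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)

mu = 1.5
h = 1e-6
numeric = (log_likelihood(mu + h, x) - log_likelihood(mu - h, x)) / (2 * h)
print(score(mu, x), numeric)  # the two values should agree closely
```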
Properties
Mean
While the score is a function of \( \theta \), it also depends on the observations \( x \) at which the likelihood function is evaluated, and in view of the random character of sampling one may take its expected value over the sample space. Under certain regularity conditions on the density functions of the random variables, the expected value of the score, evaluated at the true parameter value \( \theta \), is zero. To see this, rewrite the likelihood function \( \mathcal{L} \) as a probability density function \( \mathcal{L}(\theta; x) = f(x; \theta) \), and denote the sample space \( \mathcal{R} \). Then:

\[ \mathbb{E}(s \mid \theta) = \int_{\mathcal{R}} f(x; \theta) \, \frac{\partial}{\partial \theta} \log \mathcal{L}(\theta; x) \, dx = \int_{\mathcal{R}} f(x; \theta) \, \frac{1}{f(x; \theta)} \, \frac{\partial f(x; \theta)}{\partial \theta} \, dx = \int_{\mathcal{R}} \frac{\partial f(x; \theta)}{\partial \theta} \, dx. \]

The assumed regularity conditions allow the interchange of derivative and integral, hence the above expression may be rewritten as

\[ \frac{\partial}{\partial \theta} \int_{\mathcal{R}} f(x; \theta) \, dx = \frac{\partial}{\partial \theta} 1 = 0. \]

It is worth restating the above result in words: the expected value of the score, at the true parameter value, is zero. Thus, if one were to repeatedly sample from some distribution and repeatedly calculate the score, then the mean value of the scores would tend to zero as the number of repeated samples grows.
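A quick Monte Carlo sketch of this result, under an assumed Poisson model with true rate \( \lambda \) (all names and values are illustrative): scores evaluated at the true \( \lambda \) average out to roughly zero across repeated samples.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 3.0          # true Poisson rate (assumed for the example)
n, reps = 20, 100_000

def poisson_score(lam, x):
    """Score of the Poisson log-likelihood: d/d(lam) of sum(x*log(lam) - lam)."""
    return x.sum() / lam - len(x)

scores = [poisson_score(lam_true, rng.poisson(lam_true, size=n)) for _ in range(reps)]
print(np.mean(scores))  # close to 0: the expected score at the true parameter is zero
```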
Variance
The variance of the score, \( \operatorname{Var}(s(\theta)) = \mathbb{E}\big(s(\theta)\, s(\theta)^{\mathsf{T}}\big) \), can be derived from the above expression for the expected value: differentiating the identity \( \mathbb{E}(s \mid \theta) = 0 \) once more with respect to \( \theta \) (again interchanging derivative and integral) shows that the variance of the score is equal to the negative expected value of the Hessian matrix of the log-likelihood,

\[ \operatorname{Var}(s(\theta)) = -\mathbb{E}\!\left( \frac{\partial^2 \log \mathcal{L}(\theta; x)}{\partial \theta \, \partial \theta^{\mathsf{T}}} \right). \]

The latter is known as the Fisher information and is written \( \mathcal{I}(\theta) \). Note that the Fisher information is not a function of any particular observation, as the random variable \( X \) has been averaged out. This concept of information is useful when comparing two methods of observation of some random process.
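To illustrate the identity between the variance of the score and the negative expected Hessian, the sketch below again uses an assumed Poisson model, where both quantities equal the Fisher information \( n/\lambda \).

```python
import numpy as np

rng = np.random.default_rng(2)
lam_true, n, reps = 3.0, 20, 100_000

def score(lam, x):
    return x.sum() / lam - len(x)      # first derivative of the Poisson log-likelihood

def hessian(lam, x):
    return -x.sum() / lam**2           # second derivative of the Poisson log-likelihood

samples = [rng.poisson(lam_true, size=n) for _ in range(reps)]
var_score = np.var([score(lam_true, x) for x in samples])
neg_exp_hess = -np.mean([hessian(lam_true, x) for x in samples])
print(var_score, neg_exp_hess, n / lam_true)  # all three approximate the Fisher information
```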
Consider observing the first n trials of a Bernoulli process, and seeing that A of them are successes and the remaining B are failures, where the probability of success is θ. Then the likelihood is

\[ \mathcal{L}(\theta; A, B) = \binom{A+B}{A} \theta^A (1-\theta)^B, \]

so the score s is

\[ s = \frac{1}{\mathcal{L}} \frac{\partial \mathcal{L}}{\partial \theta} = \frac{A}{\theta} - \frac{B}{1-\theta}. \]

We can now verify that the expectation of the score is zero. Noting that the expectation of A is nθ and the expectation of B is n(1 − θ), we can see that the expectation of s is

\[ \mathbb{E}(s) = \frac{n\theta}{\theta} - \frac{n(1-\theta)}{1-\theta} = n - n = 0. \]

We can also check the variance of s. We know that A + B = n (so B = n − A) and the variance of A is nθ(1 − θ), so the variance of s is

\[ \operatorname{Var}(s) = \operatorname{Var}\!\left( \frac{A}{\theta} - \frac{n-A}{1-\theta} \right) = \left( \frac{1}{\theta} + \frac{1}{1-\theta} \right)^{2} \operatorname{Var}(A) = \frac{n\theta(1-\theta)}{\theta^{2}(1-\theta)^{2}} = \frac{n}{\theta(1-\theta)}, \]

which equals the Fisher information of n Bernoulli trials, in agreement with the general result above.
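A small simulation of this Bernoulli example (parameter values chosen arbitrarily for illustration) confirms that the sample mean of the score is near zero and its sample variance is near \( n/(\theta(1-\theta)) \).

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 0.3, 50, 100_000

# A ~ Binomial(n, theta) counts the successes; the score is A/theta - (n - A)/(1 - theta)
A = rng.binomial(n, theta, size=reps)
s = A / theta - (n - A) / (1 - theta)

print(s.mean())                            # close to 0
print(s.var(), n / (theta * (1 - theta)))  # both close to the Fisher information
```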
Binary outcome model
For models with binary outcomes (Y = 1 or 0), the model can be scored with the logarithm of predictions

\[ S = Y \log(p) + (1 - Y)\log(1 - p), \]

where p is the probability in the model to be estimated and S is the score.
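As a brief sketch of this scoring rule (the function and variable names are illustrative, not part of the original text), computing S for a vector of binary outcomes and predicted probabilities:

```python
import numpy as np

def binary_log_score(y, p):
    """Log score S = y*log(p) + (1 - y)*log(1 - p), summed over observations."""
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

y = [1, 0, 1, 1, 0]
p = [0.9, 0.2, 0.7, 0.6, 0.1]
print(binary_log_score(y, p))  # higher (less negative) values indicate better predictions
```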
Note that \( s \) is a function of \( \theta \) and the observation \( x \), so that, in general, it is not a statistic. However, in certain applications, such as the score test, the score is evaluated at a specific value of \( \theta \) (such as a null-hypothesis value), in which case the result is a statistic. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. In 1948, C. R. Rao first proved that the square of the score divided by the information matrix follows an asymptotic χ²-distribution under the null hypothesis. Further note that the likelihood-ratio test statistic can be written as

\[ 2 \left[ \log \mathcal{L}(\hat{\theta}) - \log \mathcal{L}(\theta_0) \right] = 2 \int_{\theta_0}^{\hat{\theta}} s(\theta) \, d\theta, \]

which means that the likelihood-ratio test can be understood as twice the area under the score function between \( \theta_0 \) and \( \hat{\theta} \).
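The sketch below illustrates this relationship for the Bernoulli likelihood above: the likelihood-ratio statistic computed directly matches twice the numerically integrated score between the null value \( \theta_0 \) and the maximum-likelihood estimate \( \hat{\theta} = A/n \). The counts and the null value are chosen only for illustration.

```python
import numpy as np
from scipy.integrate import quad

A, n = 18, 50          # observed successes and trial count (illustrative values)
B = n - A
theta0 = 0.5           # null-hypothesis value
theta_hat = A / n      # maximum-likelihood estimate

def log_lik(theta):
    # Bernoulli log-likelihood (the binomial coefficient cancels in the ratio)
    return A * np.log(theta) + B * np.log(1 - theta)

def score(theta):
    return A / theta - B / (1 - theta)

lrt = 2 * (log_lik(theta_hat) - log_lik(theta0))
area, _ = quad(score, theta0, theta_hat)
print(lrt, 2 * area)   # the two quantities agree: the LRT is twice the area under the score
```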