Law of the unconscious statistician


In probability theory and statistics, the law of the unconscious statistician is a theorem used to calculate the expected value of a function g(X) of a random variable X when one knows the probability distribution of X but one does not know the distribution of g(X). The form of the law can depend on the form in which one states the probability distribution of the random variable X. If it is a discrete distribution and one knows its probability mass function f_X, then the expected value of g(X) is

    \operatorname{E}[g(X)] = \sum_x g(x) f_X(x),
where the sum is over all possible values x of X. If it is a continuous distribution and one knows its probability density function f_X, then the expected value of g(X) is

    \operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x) \, dx.
If one knows the cumulative distribution function F_X, then the expected value of g(X) is given by a Riemann–Stieltjes integral

    \operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x) \, dF_X(x).
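
As a quick numerical check of the first two forms, the following sketch evaluates the sum and the integral directly (a hypothetical example: a fair six-sided die for the discrete case, and a standard normal X with g(x) = x^2 for the continuous case, where the expected value equals Var(X) = 1):

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    # Discrete case: X is a fair six-sided die, g(x) = x**2.
    # E[g(X)] = sum over x of g(x) * f_X(x).
    values = np.arange(1, 7)
    pmf = np.full(6, 1 / 6)
    print(np.sum(values**2 * pmf))  # 91/6 ≈ 15.1667

    # Continuous case: X ~ N(0, 1), g(x) = x**2.
    # E[g(X)] = integral of g(x) * f_X(x) dx.
    expectation, _ = quad(lambda x: x**2 * norm.pdf(x), -np.inf, np.inf)
    print(expectation)  # ≈ 1.0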

Etymology

This proposition is known as the law of the unconscious statistician because students have been accused of using the identity without realizing that it must be treated as the result of a rigorously proved theorem, not merely a definition.

Joint distributions

A similar property holds for joint distributions. For discrete random variables X and Y, a function of two variables g, and joint probability mass function f:

    \operatorname{E}[g(X, Y)] = \sum_y \sum_x g(x, y) f(x, y).
In the continuous case, with f being the joint probability density function,

    \operatorname{E}[g(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) f(x, y) \, dx \, dy.
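
A minimal sketch of the discrete joint form (a hypothetical example: two independent fair dice X and Y with g(x, y) = xy, so f(x, y) = 1/36 for every pair):

    import numpy as np

    # Joint pmf of two independent fair six-sided dice: f(x, y) = 1/36.
    values = np.arange(1, 7)
    x, y = np.meshgrid(values, values)
    joint_pmf = np.full((6, 6), 1 / 36)

    # E[g(X, Y)] = sum over y and x of g(x, y) * f(x, y), with g(x, y) = x * y.
    print(np.sum(x * y * joint_pmf))  # 12.25 = E[X] * E[Y] = 3.5**2 by independence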

Proof

This law is not a trivial result of definitions as it might at first appear, but rather must be proved.

Continuous case

For a continuous random variable X, let Y = g(X), and suppose that g is differentiable and that its inverse g^{-1} is monotonic. By the formula for inverse functions and differentiation,

    \frac{d}{dy}\left(g^{-1}(y)\right) = \frac{1}{g'(g^{-1}(y))}.
Because x = g^{-1}(y),

    dx = \frac{1}{g'(g^{-1}(y))} \, dy.
So that by a change of variables,

    \int_{-\infty}^{\infty} g(x) f_X(x) \, dx = \int_{-\infty}^{\infty} y \, f_X(g^{-1}(y)) \frac{1}{g'(g^{-1}(y))} \, dy.
Now, notice that because F_Y(y) = P(Y \le y) is the cumulative distribution function of Y, substituting in Y = g(X), applying g^{-1} to both sides of the inequality, and rearranging yields F_Y(y) = F_X(g^{-1}(y)). Then, by the chain rule,

    f_Y(y) = f_X(g^{-1}(y)) \frac{1}{g'(g^{-1}(y))}.
Combining these expressions, we find

    \int_{-\infty}^{\infty} g(x) f_X(x) \, dx = \int_{-\infty}^{\infty} y \, f_Y(y) \, dy.
By the definition of expected value,

    \operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x) \, dx = \int_{-\infty}^{\infty} y \, f_Y(y) \, dy = \operatorname{E}[Y].
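
The chain of equalities above can be checked numerically. A sketch under stated assumptions (a hypothetical example: X ~ Exp(1) with the strictly increasing g(x) = √x, so g^{-1}(y) = y^2 and the chain-rule formula gives f_Y(y) = f_X(y^2) · 2y):

    import numpy as np
    from scipy.integrate import quad

    f_X = lambda x: np.exp(-x)  # density of X ~ Exp(1) on [0, inf)
    g = np.sqrt                 # strictly increasing on [0, inf)

    # Left-hand side: integral of g(x) * f_X(x) dx.
    lhs, _ = quad(lambda x: g(x) * f_X(x), 0, np.inf)

    # Right-hand side: integral of y * f_Y(y) dy, where f_Y comes from
    # f_Y(y) = f_X(g^{-1}(y)) / g'(g^{-1}(y)) with g^{-1}(y) = y**2
    # and 1 / g'(y**2) = 2 * y.
    f_Y = lambda y: f_X(y**2) * 2 * y
    rhs, _ = quad(lambda y: y * f_Y(y), 0, np.inf)

    print(lhs, rhs)  # both ≈ 0.8862 = sqrt(pi) / 2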

Discrete case

Let Y = g(X). Then begin with the definition of expected value:

    \operatorname{E}[Y] = \sum_y y \, f_Y(y)
                        = \sum_y y \sum_{x : g(x) = y} f_X(x)
                        = \sum_y \sum_{x : g(x) = y} g(x) f_X(x)
                        = \sum_x g(x) f_X(x),

where the last equality holds because the sets \{x : g(x) = y\} partition the support of X.
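
The key step is regrouping the sum over x according to the shared values y = g(x). A small sketch (a hypothetical example: X uniform on {-2, -1, 0, 1, 2} with the non-injective g(x) = x^2, which collapses x and -x onto one value of y):

    from collections import defaultdict

    support = [-2, -1, 0, 1, 2]
    f_X = {x: 1 / 5 for x in support}  # uniform pmf
    g = lambda x: x**2

    # Direct sum over x: E[g(X)] = sum of g(x) * f_X(x).
    direct = sum(g(x) * f_X[x] for x in support)

    # Regrouped sum over y: f_Y(y) = sum of f_X(x) over {x : g(x) = y}.
    f_Y = defaultdict(float)
    for x in support:
        f_Y[g(x)] += f_X[x]
    regrouped = sum(y * p for y, p in f_Y.items())

    print(direct, regrouped)  # both 2.0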

From measure theory

A technically complete derivation of the result is available using arguments in measure theory, in which the probability space of a transformed random variable g(X) is related to that of the original random variable X. The steps here involve defining a pushforward measure for the transformed space, and the result is then an example of a change of variables formula:

    \operatorname{E}[g(X)] = \int_\Omega g(X) \, dP = \int_{\mathbb{R}} g \, d(P \circ X^{-1}).
We say X has a density if the pushforward measure P \circ X^{-1} is absolutely continuous with respect to the Lebesgue measure \mu. In that case

    d(P \circ X^{-1}) = f \, d\mu,
where f = \frac{d(P \circ X^{-1})}{d\mu} is the density (the Radon–Nikodym derivative). So the above can be rewritten as the more familiar

    \operatorname{E}[g(X)] = \int_\Omega g(X) \, dP = \int_{-\infty}^{\infty} g(x) f(x) \, dx.
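
In sampling terms, the pushforward measure P ∘ X^{-1} is what the empirical distribution of draws of X approximates, so the two integrals can be compared by Monte Carlo. A sketch (a hypothetical example: X ~ N(0, 1) with g(x) = x^2 again):

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    g = lambda x: x**2

    # Integral over Omega: approximate the integral of g(X) dP by averaging
    # g over draws of X; the samples' empirical law approximates P ∘ X^{-1}.
    mc = g(rng.standard_normal(1_000_000)).mean()

    # Integral over R: integrate g against the density f.
    density_form, _ = quad(lambda x: g(x) * norm.pdf(x), -np.inf, np.inf)

    print(mc, density_form)  # both ≈ 1.0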