Eaton's inequality


In probability theory, Eaton's inequality is a bound on the largest values of a linear combination of bounded random variables. This inequality was described in 1974 by Morris L. Eaton.

Statement of the inequality

Let {X_1, …, X_n} be a set of real independent random variables, each with an expected value of zero and bounded in absolute value by 1, that is, |X_i| ≤ 1 for 1 ≤ i ≤ n. The variates do not have to be identically or symmetrically distributed. Let {a_1, …, a_n} be a set of n fixed real numbers with

\sum_{i=1}^n a_i^2 = 1 .
Eaton showed that

P\left( \left| \sum_{i=1}^n a_i X_i \right| \ge k \right) \le 2 \inf_{0 \le c < k} \int_c^\infty \left( \frac{z - c}{k - c} \right)^3 \varphi(z) \, dz = 2 B_E(k) ,

where φ is the probability density function of the standard normal distribution.
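As a numerical illustration, the bound 2B_E(k) can be evaluated by integrating against the normal density and searching over c. The sketch below assumes NumPy and SciPy are available; the helper name eaton_bound is illustrative and not from the source.

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def eaton_bound(k, grid=200):
    # Evaluate 2 * inf over 0 <= c < k of the tail integral of ((z-c)/(k-c))^3 * phi(z)
    # by a simple grid search over c; any c gives a valid bound, the infimum is the best one.
    def tail_integral(c):
        integrand = lambda z: ((z - c) / (k - c)) ** 3 * norm.pdf(z)
        value, _ = quad(integrand, c, np.inf)
        return value
    cs = np.linspace(0.0, k, grid, endpoint=False)  # exclude c = k to avoid division by zero
    return 2.0 * min(tail_integral(c) for c in cs)

print(eaton_bound(3.0))  # upper bound on P(|sum a_i X_i| >= 3) when sum a_i^2 = 1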
A related bound is Edelman's:

P\left( \left| \sum_{i=1}^n a_i X_i \right| \ge k \right) \le 2 \left( 1 - \Phi\left( k - \frac{1.5}{k} \right) \right) = 2 B_{Ed}(k) ,

where Φ is the cumulative distribution function of the standard normal distribution.
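Edelman's bound only requires the normal cumulative distribution function. A minimal sketch, again assuming SciPy; the name edelman_bound is illustrative.

from scipy.stats import norm

def edelman_bound(k):
    # 2 * (1 - Phi(k - 1.5/k)), using SciPy's standard normal CDF
    return 2.0 * (1.0 - norm.cdf(k - 1.5 / k))

print(edelman_bound(3.0))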
Pinelis has shown that Eaton's bound can be sharpened:

B_{EP}(k) = \min\left\{ 1,\; k^{-2},\; 2 B_E(k) \right\} .
A set of critical values for Eaton's bound has been determined.
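A rough sketch of the sharpened bound, reusing the hypothetical eaton_bound helper from the first sketch above (which already returns 2B_E(k)); the k^{-2} term is the Chebyshev bound for a unit-variance sum.

def pinelis_bound(k):
    # min{1, k^-2, 2*B_E(k)}; eaton_bound above already returns 2*B_E(k)
    return min(1.0, k ** -2, eaton_bound(k))

for k in (1.0, 2.0, 3.0):
    print(k, pinelis_bound(k))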

Related inequalities

Let {a_1, …, a_n} be a set of independent Rademacher random variables, that is, P(a_i = 1) = P(a_i = −1) = 1/2. Let Z be a normally distributed variate with mean 0 and variance 1. Let {b_1, …, b_n} be a set of n fixed real numbers such that

\sum_{i=1}^n b_i^2 = 1 .
This last condition is required by the Riesz–Fischer theorem, which states that

a_1 b_1 + \cdots + a_n b_n

will converge if and only if

b_1^2 + \cdots + b_n^2

is finite.
Then

E\, f( a_1 b_1 + \cdots + a_n b_n ) \le E\, f( Z )

for f(x) = |x|^p. The case p ≥ 3 was proved by Whittle and p ≥ 2 was proved by Haagerup.
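The moment comparison can be illustrated by a Monte Carlo check for one choice of p and of the coefficients b_i; the script below (NumPy assumed) is only a sanity check under those assumptions, not a proof.

import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 10, 4, 200_000
b = rng.normal(size=n)
b /= np.linalg.norm(b)                             # enforce sum b_i^2 = 1

signs = rng.choice([-1.0, 1.0], size=(trials, n))  # Rademacher a_i
s = signs @ b                                      # samples of a_1 b_1 + ... + a_n b_n
z = rng.normal(size=trials)                        # standard normal samples

print(np.mean(np.abs(s) ** p), "<=", np.mean(np.abs(z) ** p))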
If f(x) = e^{λx} with λ ≥ 0, then

P( a_1 b_1 + \cdots + a_n b_n \ge x ) \le \inf_{\lambda \ge 0} E\left( e^{\lambda Z} \right) e^{-\lambda x} ,

where inf is the infimum.
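Since E(e^{λZ}) = e^{λ²/2}, the quantity being minimised is e^{λ²/2 − λx}, which attains its infimum at λ = x with value e^{−x²/2}. A small numerical sketch of this minimisation, assuming SciPy:

import math
from scipy.optimize import minimize_scalar

x = 2.0
objective = lambda lam: math.exp(lam ** 2 / 2 - lam * x)   # E(e^{lam Z}) * e^{-lam x}
res = minimize_scalar(objective, bounds=(0.0, 10.0), method="bounded")
print(res.fun, math.exp(-x ** 2 / 2))                      # both approximately 0.1353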
Let

S_n = a_1 b_1 + \cdots + a_n b_n .

Then

P( S_n \ge x ) \le \frac{2 e^3}{9} \, P( Z \ge x ) .

The constant in the last inequality, 2e^3/9, is approximately 4.4634.
An alternative bound is also known:

P( S_n \ge x ) \le e^{-x^2/2} .

This last bound is related to Hoeffding's inequality.
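For a quick numerical look at these two tail bounds, the constant 2e^3/9 and the bounds (2e^3/9)·P(Z ≥ x) and e^{-x^2/2} can be evaluated directly; the snippet assumes SciPy and is purely illustrative.

import math
from scipy.stats import norm

c = 2 * math.exp(3) / 9
print(c)                                           # 4.46345..., the "approximately 4.4634" above

for x in (1.0, 2.0, 3.0):
    gaussian_comparison = c * (1 - norm.cdf(x))    # (2e^3/9) * P(Z >= x)
    hoeffding_type = math.exp(-x ** 2 / 2)         # exp(-x^2/2)
    print(x, gaussian_comparison, hoeffding_type)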
In the uniform case where all the b_i = n^{-1/2}, the maximum value of S_n is n^{1/2}. In this case van Zuijlen has shown that

P\left( \left| \frac{S_n - \mu}{\sigma} \right| \le 1 \right) \ge 0.5 ,

where μ is the mean and σ is the standard deviation of the sum.
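Van Zuijlen's statement in the uniform case can be checked empirically. The Monte Carlo sketch below (NumPy assumed, with an arbitrary choice of n) estimates the probability that the sum lies within one standard deviation of its mean; here S_n itself has mean 0 and standard deviation 1.

import numpy as np

rng = np.random.default_rng(1)
n, trials = 7, 200_000
signs = rng.choice([-1.0, 1.0], size=(trials, n))  # Rademacher a_i
s = signs.sum(axis=1) / np.sqrt(n)                 # S_n with b_i = n^{-1/2}

print(np.mean(np.abs(s) <= 1.0))                   # observed frequency; should be >= 0.5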