Bennett's inequality


In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount. Bennett's inequality was proved by George Bennett of the University of New South Wales in 1962.

Statement

Let $X_1, \dots, X_n$ be independent random variables with finite variance and assume they all have zero expected value. Further assume $X_i \le a$ almost surely for all $i$, and define $S_n = \sum_{i=1}^n X_i$ and $\sigma^2 = \sum_{i=1}^n \operatorname{E}[X_i^2]$. Then for any $t \ge 0$,

$$\Pr(S_n > t) \le \exp\!\left(-\frac{\sigma^2}{a^2}\, h\!\left(\frac{at}{\sigma^2}\right)\right),$$

where $h(u) = (1+u)\log(1+u) - u$ for $u \ge 0$.
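As a quick numerical illustration, the following minimal Python sketch evaluates the right-hand side of the bound for given values of $a$, $\sigma^2$, and $t$; the function name bennett_bound and the example numbers are hypothetical, chosen only to show the formula in use.

```python
import math

def bennett_bound(t, sigma2, a):
    """Evaluate Bennett's bound exp(-(sigma^2/a^2) * h(a*t/sigma^2)),
    where h(u) = (1+u)*log(1+u) - u."""
    u = a * t / sigma2
    h = (1.0 + u) * math.log1p(u) - u   # log1p(u) = log(1 + u)
    return math.exp(-(sigma2 / a**2) * h)

# Example (assumed values): 100 zero-mean summands, each bounded by a = 1
# with variance 0.01, so sigma^2 = 1; bound P(S_n > 5).
print(bennett_bound(t=5.0, sigma2=1.0, a=1.0))  # roughly 3e-3
```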

Generalizations and comparisons to other bounds

For generalizations, see Freedman for a martingale version of Bennett's inequality, and Fan, Grama and Liu for an improvement of it.
Hoeffding's inequality only requires the summands to be bounded almost surely, while Bennett's inequality offers an improvement when the variances of the summands are small compared with their almost-sure bounds. However, Hoeffding's inequality yields sub-Gaussian tail bounds, whereas Bennett's inequality in general gives Poisson-type tails.
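To make the comparison concrete, here is a small sketch that evaluates both tail bounds for the same deviation $t$, under the added assumption that the zero-mean summands lie in $[-a, a]$ (so Hoeffding's one-sided bound is $\exp(-t^2/(2na^2))$); the parameter values are illustrative only.

```python
import math

def hoeffding_bound(t, n, a):
    """Hoeffding's one-sided bound exp(-t^2 / (2 n a^2)) for n zero-mean
    summands taking values in [-a, a]."""
    return math.exp(-t**2 / (2 * n * a**2))

def bennett_bound(t, sigma2, a):
    """Bennett's bound exp(-(sigma^2/a^2) * h(a*t/sigma^2)),
    with h(u) = (1+u)*log(1+u) - u."""
    u = a * t / sigma2
    return math.exp(-(sigma2 / a**2) * ((1.0 + u) * math.log1p(u) - u))

# Assumed example: n = 1000 summands bounded by a = 1, each with small
# variance 0.01, so sigma^2 = 10; bound P(S_n > 30).
n, a, sigma2, t = 1000, 1.0, 10.0, 30.0
print("Hoeffding:", hoeffding_bound(t, n, a))    # about 0.64
print("Bennett:  ", bennett_bound(t, sigma2, a))  # about 1e-11, far smaller
```

In this small-variance regime the Hoeffding bound, which only uses the range of the summands, is nearly vacuous, while the Bennett bound exploits the small $\sigma^2$ and is many orders of magnitude tighter.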
In both inequalities, unlike some other inequalities or limit theorems, there is no requirement that the component variables have identical or similar distributions.