Kolmogorov's inequality


In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound. The inequality is named after the Russian mathematician Andrey Kolmogorov.

Statement of the inequality

Let $X_1, \dots, X_n : \Omega \to \mathbb{R}$ be independent random variables defined on a common probability space $(\Omega, \mathcal{F}, \operatorname{P})$, with expected value $\operatorname{E}[X_k] = 0$ and variance $\operatorname{Var}[X_k] < +\infty$ for $k = 1, \dots, n$. Then, for each $\lambda > 0$,

$$\operatorname{P}\left( \max_{1 \le k \le n} |S_k| \ge \lambda \right) \le \frac{1}{\lambda^2} \operatorname{Var}[S_n] = \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{Var}[X_k] = \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{E}[X_k^2],$$

where $S_k = X_1 + \cdots + X_k$.
The convenience of this result is that it bounds the worst-case deviation of a random walk at any point of the time interval using only its variance at the end of the interval.
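For a concrete sense of the bound, the following is a minimal Monte Carlo sketch in Python (assuming NumPy is available) using i.i.d. ±1 steps, so that $\operatorname{Var}[X_k] = 1$; the values of $n$, $\lambda$, and the number of trials are arbitrary illustrative choices. It estimates $\operatorname{P}(\max_k |S_k| \ge \lambda)$ empirically and compares it with $\operatorname{Var}[S_n]/\lambda^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100           # number of summands X_1, ..., X_n (illustrative choice)
lam = 25.0        # threshold lambda (illustrative choice)
trials = 100_000  # Monte Carlo sample size (illustrative choice)

# X_k are i.i.d. +/-1 with equal probability: E[X_k] = 0, Var[X_k] = 1.
X = rng.choice([-1.0, 1.0], size=(trials, n))
S = np.cumsum(X, axis=1)                     # partial sums S_1, ..., S_n per trial

# Empirical probability that the running maximum of |S_k| ever reaches lambda.
lhs = np.mean(np.max(np.abs(S), axis=1) >= lam)

# Kolmogorov's bound: Var[S_n] / lambda^2 = n * Var[X_1] / lambda^2.
rhs = n * 1.0 / lam**2

print(f"P(max_k |S_k| >= {lam}) ~ {lhs:.4f}  <=  bound {rhs:.4f}")
```

With these parameters the bound is $100/25^2 = 0.16$, while the empirical probability is considerably smaller, as expected for a one-sided Chebyshev-type estimate.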

Proof

The following argument is due to Kareem Amin and employs discrete martingales.
As argued in the discussion of Doob's martingale inequality, the sequence $S_1, S_2, \dots, S_n$ is a martingale.
Define $(Z_i)_{i=0}^n$ as follows. Let $Z_0 = 0$, and

$$Z_{i+1} = \begin{cases} S_{i+1} & \text{if } \max_{1 \le j \le i} |S_j| < \lambda, \\ Z_i & \text{otherwise,} \end{cases}$$

for all $i$. Then $(Z_i)_{i=0}^n$ is also a martingale, since it is simply the walk stopped at the first index where $|S|$ reaches $\lambda$.

For any martingale $(M_i)$ with $M_0 = 0$, we have that

$$\sum_{i=1}^n \operatorname{E}\left[(M_i - M_{i-1})^2\right] = \sum_{i=1}^n \left( \operatorname{E}\left[M_i^2 - M_{i-1}^2\right] - 2\operatorname{E}\left[M_{i-1}(M_i - M_{i-1})\right] \right) = \operatorname{E}[M_n^2] - \operatorname{E}[M_0^2] = \operatorname{E}[M_n^2],$$

because each cross term $\operatorname{E}[M_{i-1}(M_i - M_{i-1})]$ vanishes by the martingale property and the remaining sum telescopes.

Applying this result to the martingale $(Z_i)_{i=0}^n$ (and, for the final equality below, to $(S_i)_{i=0}^n$), we have

$$\operatorname{P}\left( \max_{1 \le i \le n} |S_i| \ge \lambda \right) = \operatorname{P}\left( |Z_n| \ge \lambda \right) \le \frac{1}{\lambda^2} \operatorname{E}[Z_n^2] = \frac{1}{\lambda^2} \sum_{i=1}^n \operatorname{E}\left[(Z_i - Z_{i-1})^2\right] \le \frac{1}{\lambda^2} \sum_{i=1}^n \operatorname{E}\left[(S_i - S_{i-1})^2\right] = \frac{1}{\lambda^2} \operatorname{E}[S_n^2] = \frac{1}{\lambda^2} \operatorname{Var}[S_n],$$

where the first equality holds because $Z_n$ equals $S_k$ at the first index $k$ with $|S_k| \ge \lambda$ whenever such an index exists (and equals $S_n$ otherwise), the first inequality follows by Chebyshev's inequality, and the second inequality follows because each increment $Z_i - Z_{i-1}$ equals either $S_i - S_{i-1}$ or $0$.
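The stopped sequence $(Z_i)$ used in the argument above can also be illustrated numerically. The following sketch (again Python with NumPy, with arbitrary illustrative parameters) builds $Z$ from simulated ±1 walks, checks path by path that $\{\max_k |S_k| \ge \lambda\}$ coincides with $\{|Z_n| \ge \lambda\}$, and compares $\operatorname{E}[Z_n^2]$ with $\operatorname{E}[S_n^2]$.

```python
import numpy as np

rng = np.random.default_rng(1)

n, lam, trials = 100, 25.0, 50_000   # illustrative parameters

X = rng.choice([-1.0, 1.0], size=(trials, n))
S = np.cumsum(X, axis=1)             # S_1, ..., S_n for each trial

# Build Z_i: follow S_i until max_{j <= i-1} |S_j| reaches lambda, then freeze.
Z = S.copy()
for i in range(1, n):
    frozen = np.max(np.abs(S[:, :i]), axis=1) >= lam   # max over S_1, ..., S_i
    Z[frozen, i] = Z[frozen, i - 1]                     # Z_{i+1} = Z_i once frozen

# The two events agree path by path, as used in the first equality of the proof.
assert np.array_equal(np.max(np.abs(S), axis=1) >= lam, np.abs(Z[:, -1]) >= lam)

# E[Z_n^2] <= E[S_n^2] = Var[S_n], since Z's increments are S's increments or 0.
print("E[Z_n^2] ~", np.mean(Z[:, -1] ** 2), "  E[S_n^2] ~", np.mean(S[:, -1] ** 2))
```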
This inequality was generalized by Hájek and Rényi in 1955.