Big O in probability notation


The order in probability notation is used in probability theory and statistical theory in direct parallel to the big-O notation that is standard in mathematics. Where the big-O notation deals with the convergence of sequences or sets of ordinary numbers, the order in probability notation deals with convergence of sets of random variables, where convergence is in the sense of convergence in probability.

Definitions

Small o: convergence in probability

For a set of random variables $X_n$ and a corresponding set of constants $a_n$, the notation

$$X_n = o_p(a_n)$$

means that the set of values $X_n/a_n$ converges to zero in probability as $n$ approaches an appropriate limit.
Equivalently, $X_n = o_p(a_n)$ can be written as $X_n/a_n = o_p(1)$, where $X_n = o_p(1)$ is defined as

$$\lim_{n \to \infty} P\left(|X_n| \geq \varepsilon\right) = 0$$

for every positive $\varepsilon$.
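
For example, if $\bar{X}_n$ denotes the sample mean of $n$ independent, identically distributed random variables with common mean $\mu$ and finite variance, the weak law of large numbers states that $\bar{X}_n - \mu$ converges to zero in probability, which in this notation reads

$$\bar{X}_n - \mu = o_p(1).$$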

Big O: stochastic boundedness

The notation

$$X_n = O_p(a_n)$$

means that the set of values $X_n/a_n$ is stochastically bounded. That is, for any $\varepsilon > 0$, there exists a finite $M > 0$ and a finite $N > 0$ such that

$$P\left(\left|\frac{X_n}{a_n}\right| > M\right) < \varepsilon \quad \text{for all } n > N.$$
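
Continuing the sample-mean example, the central limit theorem shows that $\sqrt{n}\,(\bar{X}_n - \mu)$ converges in distribution, and a sequence that converges in distribution is stochastically bounded; hence

$$\bar{X}_n - \mu = O_p\!\left(n^{-1/2}\right),$$

i.e. the deviation of the sample mean from $\mu$ shrinks at rate $n^{-1/2}$.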

Comparison of the two definitions

The difference between the two definitions is subtle. If one uses the definition of the limit, one gets:

$$X_n = O_p(a_n): \quad \forall \varepsilon > 0 \;\; \exists\, \delta_\varepsilon, N_\varepsilon \;\text{ such that }\; P\left(\left|\frac{X_n}{a_n}\right| \geq \delta_\varepsilon\right) \leq \varepsilon \quad \text{for all } n > N_\varepsilon,$$

$$X_n = o_p(a_n): \quad \forall \varepsilon > 0,\, \forall \delta > 0 \;\; \exists\, N_{\varepsilon,\delta} \;\text{ such that }\; P\left(\left|\frac{X_n}{a_n}\right| \geq \delta\right) \leq \varepsilon \quad \text{for all } n > N_{\varepsilon,\delta}.$$
The difference lies in the $\delta$: for stochastic boundedness, it suffices that there exists one $\delta$ satisfying the inequality, and $\delta$ is allowed to depend on $\varepsilon$. On the other hand, for convergence, the statement has to hold not only for one, but for every $\delta > 0$. In a sense, this means that the sequence must be bounded, with a bound that gets smaller as the sample size increases.
This means that if a sequence is $o_p(a_n)$, then it is $O_p(a_n)$, i.e. convergence in probability implies stochastic boundedness. But the reverse does not hold.
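
A standard example showing that the reverse fails: take $X_n = Z$ for every $n$, where $Z$ is a single standard normal random variable. The sequence is stochastically bounded, since for any $\varepsilon > 0$ one can choose a finite $M$ with $P(|Z| > M) < \varepsilon$, so $X_n = O_p(1)$; but for every $\delta > 0$

$$P(|X_n| \geq \delta) = P(|Z| \geq \delta) > 0 \quad \text{for all } n,$$

so this probability does not tend to zero and $X_n$ is not $o_p(1)$.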

Example

If $(X_n)$ is a stochastic sequence such that each element has finite variance, then

$$X_n = O_p\!\left(\operatorname{E}(X_n) + \sqrt{\operatorname{Var}(X_n)}\right).$$

If, moreover, $a_n^{-2}\operatorname{Var}(X_n) = \operatorname{Var}(X_n/a_n)$ is a null sequence for a sequence $(a_n)$ of real numbers, then $X_n/a_n$ converges to zero in probability by Chebyshev's inequality, so $X_n = o_p(a_n)$.
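
Spelled out for the centered case (assuming, for this illustration, that $\operatorname{E}(X_n) = 0$ for all $n$), Chebyshev's inequality gives, for every $\varepsilon > 0$,

$$P\left(\left|\frac{X_n}{a_n}\right| \geq \varepsilon\right) \leq \frac{\operatorname{Var}(X_n/a_n)}{\varepsilon^2} = \frac{a_n^{-2}\operatorname{Var}(X_n)}{\varepsilon^2} \longrightarrow 0,$$

which is exactly the statement $X_n = o_p(a_n)$.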