Statistical stability


The phenomenon of statistical stability, one of the most surprising physical phenomena, consists in the weak dependence of statistics (i.e., functions of sample data) on the sample size, when that size is large. The effect is typical, for example, of the relative frequencies of mass events and of averages. Because it is so widespread, statistical stability can be regarded as a fundamental natural phenomenon.
The physical nature of the statistical stability phenomenon is revealed by observing mass events.
Currently, two theories describing this phenomenon are known: classical probability theory, which has a long history of development, and the theory of hyper-random phenomena, created in recent decades.

History

The first to draw attention to the phenomenon of statistical stability was the cloth merchant John Graunt in 1662. Information about research on statistical stability is fragmentary for the period from the end of the seventeenth century to the end of the nineteenth century, covering, e.g., Jacob Bernoulli, Siméon Denis Poisson, Irénée-Jules Bienaymé, Antoine Augustin Cournot, Adolphe Quetelet, John Venn, and others.
Systematic study of statistical stability began at the end of the nineteenth century. In 1879, the German statistician Wilhelm Lexis made the first attempt to link the concept of statistical stability of the relative frequency with the dispersion. At the turn of the century and in the early twentieth century, statistical stability was studied by Karl Pearson, Alexander Alexandrovich Chuprov, Ladislaus Bortkiewicz, Andrey Markov, Richard von Mises, and others.
A new stage of experimental research began in the late twentieth century. Additional studies became necessary because of new applied tasks and the detection of a number of phenomena that cannot be satisfactorily explained and described within the framework of classical probability theory. The new tasks include, in particular, the ultra-precise measurement of physical quantities and ultra-precise forecasting of developments over large observation intervals. The relatively new phenomena include, for instance, the unpredictable progressive (drift) error of measurements, as well as flicker noise, which is detected everywhere and cannot be suppressed by averaging the data.

Statistical stability of the relative frequencies of events

Many well-known scientists have carried out experimental investigations of the statistical stability phenomenon. It is known, for example, that coin-tossing experiments were conducted by Pierre-Simon de Laplace, Georges-Louis Leclerc, Comte de Buffon, Karl Pearson, the Nobel Prize laureate Richard Feynman, Augustus de Morgan, William Stanley Jevons, Vsevolod Ivanovich Romanovsky, William Feller, and others. This seemingly trivial task did not appear trivial to them.
Table 1 presents some of the results of their experiments.
Table 2 shows the results of ten runs of the same experiment, in which each run consists of 1,000 tosses. The tables demonstrate that, for a large number of tosses, the relative frequency of heads or tails is close to 0.5.
Experimental studies of other real physical events show that, for a large number of experiments, the relative frequencies of events stabilize; this demonstrates the fundamental nature of the phenomenon of statistical stability.
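The stabilization of relative frequencies is easy to reproduce numerically. The following sketch (illustrative only; the function name and the fixed seed are my own choices) simulates fair-coin tosses and prints the relative frequency of heads for increasing numbers of tosses:

```python
import random

def relative_frequency(n_tosses, seed=0):
    """Relative frequency of heads in n_tosses simulated fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The relative frequency approaches 0.5 as the number of tosses grows
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

For small numbers of tosses the frequency can deviate noticeably from 0.5; for large numbers it settles close to it, mirroring the behaviour reported in Tables 1 and 2.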

Stability of statistics

The phenomenon of statistical stability manifests itself not only in the stability of the relative frequencies of mass events, but also in the stability of the average of a process, i.e., its sample mean. It is also manifested when averaging fluctuations of different types, in particular stochastic, determinate, and actual physical processes.
Example 1. Fig. 1a and Fig. 1c present a realization of noise with a uniform power spectral density and a determinate periodic process. Fig. 1b and Fig. 1d show the dependence of the corresponding averages on the averaging interval. As can be seen from Fig. 1b and Fig. 1d, when the averaging interval increases, the fluctuations of the sample mean decrease and the average value gradually stabilizes.
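The stabilization of the sample mean under averaging can be sketched in the same spirit (an illustrative sketch; the names and the seed are my own): the running mean of uniform noise fluctuates strongly at first and settles as the averaging interval grows.

```python
import random

def running_means(samples):
    """Cumulative sample means m_1, ..., m_N of a sequence."""
    means, total = [], 0.0
    for k, x in enumerate(samples, start=1):
        total += x
        means.append(total / k)
    return means

rng = random.Random(1)
noise = [rng.uniform(-1.0, 1.0) for _ in range(100_000)]
means = running_means(noise)

# Fluctuations of the running mean shrink as the averaging interval grows
print("after 100 samples:   ", means[99])
print("after 100000 samples:", means[-1])
```

The printed values show the same qualitative behaviour as Fig. 1b and Fig. 1d: the longer the averaging interval, the smaller the deviation of the mean from its limiting value.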
Example 2. Fig. 2a and Fig. 2b show that the mains voltage in a city fluctuates quickly, while its average changes slowly. As the averaging interval increases from zero to one hour, the average voltage stabilizes.
The phenomenon of statistical stability is also observed in the calculation of other statistics, in particular sample moments.

Properties of statistical stability

Emergence

The statistical stability of the relative frequency is a property of mass events. This property is not inherent in a single event, but is inherent in their collection. Similarly, the statistical stability of statistics is a property inherent to the set of samples. Therefore, the statistical stability of relative frequency or statistical stability of statistics can be regarded as an emergent property.

Hypothesis of perfect statistical stability

At first glance, it seems quite plausible that the sequence of relative frequencies of any real event should tend to a certain limit (the probability of the event), and that the sequence of sample averages of discrete samples of any real process should likewise converge to a limit. This is the hypothesis of perfect statistical stability. Probability theory is based on this hypothesis.
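In symbols (using notation common in the literature on this topic, with $p_N(A)$ the relative frequency of an event $A$ in $N$ trials and $y_n$ the sample values of a process), the hypothesis of perfect statistical stability asserts the existence of the limits

$$p(A) = \lim_{N \to \infty} p_N(A), \qquad \bar{y} = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} y_n .$$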

Criticism of the hypothesis of perfect statistical stability

For many years, the hypothesis of perfect statistical stability was not in doubt, although some scholars, among them such famous scientists as Andrey Markov, Anatoliy Skorokhod, Émile Borel, and V. N. Tutubalin, noticed that, in the real world, this hypothesis is valid only with certain reservations.

The hypothesis of imperfect statistical stability

The possibility of an adequate description of the relative frequencies of actual events and the sample averages of actual discrete samples by such limit expressions is only a hypothesis. It does not follow from any experiments or logical inferences. It is easy to demonstrate that not all processes, even of oscillatory type, have the property of perfect statistical stability.
Example 3. Fig. 3a and Fig. 3c present two determinate oscillations, and Fig. 3b and Fig. 3d show their corresponding averages. It is clear from Fig. 3b and Fig. 3d that, in both cases, the average does not have a limit, i.e., both processes are statistically unstable.
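A statistically unstable determinate oscillation is easy to construct (an illustrative example of my own, not necessarily the processes shown in Fig. 3): for the sequence x_k = sin(ln k), the period of oscillation grows without bound, so the sample mean keeps swinging and never settles to a limit.

```python
import math

def sample_mean(n):
    """Sample mean of the determinate oscillation x_k = sin(ln k), k = 1..n."""
    return sum(math.sin(math.log(k)) for k in range(1, n + 1)) / n

# The mean keeps oscillating as n grows instead of converging to a limit
means = {n: sample_mean(n) for n in (10**3, 10**4, 10**5, 10**6)}
for n, m in means.items():
    print(n, round(m, 3))
```

Even as n increases by three orders of magnitude, the sample mean continues to take values of noticeably different magnitude and sign, i.e., the process is statistically unstable with respect to the average.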
Experimental studies of various processes of different physical nature over broad observation intervals show that the hypothesis of perfect statistical stability is not confirmed. The real world is continuously changing, and changes occur at all levels, including the statistical level. Statistical assessments formed on the basis of relatively small observation intervals are relatively stable. Their stability is manifested through a decrease in the fluctuation of statistical estimators as the volume of statistical data grows. This creates an illusion of perfect statistical stability. However, beyond a certain critical volume, the level of fluctuation remains practically unchanged as the amount of data is increased. This indicates that statistical stability is not perfect.
Example 4. Non-perfect statistical stability is illustrated in Fig. 4, which presents mains voltage fluctuations over 2.5 days. Note that the fluctuation in Fig. 2a is the beginning of the fluctuation presented in Fig. 4a. As can be seen from Fig. 4b, the sample average does not stabilize, even for very long averaging intervals.

Description of the phenomenon of statistical stability

Hilbert’s sixth problem

Until the end of the nineteenth century, probability theory was regarded as a physical discipline.
At the Second International Congress of Mathematicians, David Hilbert gave a speech entitled 'Mathematical problems'. There he formulated what he considered to be the twenty-three most important problems whose study could significantly stimulate the further development of science. The sixth problem was the mathematical description of the axioms of physics.
In the part of his presentation relating to this problem, Hilbert noted that, in parallel with research on the foundations of geometry, one could approach the problem of an axiomatic construction, along the same lines, of the physical sciences in which mathematics played an exclusive role, and in particular, probability theory and mechanics.
Many scientists have responded to Hilbert's appeal. Among them were Richard von Mises, who considered the problem from the standpoint of natural science, and Andrey Kolmogorov, who proposed in 1929 a solution based on set theory and measure theory. The axiomatic approach proposed by Kolmogorov is now favoured in probability theory. This approach has even been elevated to the rank of a standard.

Description of the phenomenon of statistical stability in the framework of probability theory

Kolmogorov’s probability theory is a typical mathematical discipline. Its subject matter is an abstract probability space, and its scope of research is the mathematical relationships between the elements of that space. The physical phenomenon of statistical stability of the relative frequencies of events, which constitutes the foundation of the discipline, formally plays no role in it. This phenomenon is taken into account in an idealized form by accepting the axiom of countable additivity, which is equivalent to accepting the hypothesis of perfect statistical stability.

Description of the phenomenon of statistical stability in the framework of the theory of hyper-random phenomena

In contrast to classical mathematical probability theory, the theory of hyper-random phenomena is a physical-mathematical one. Its subject matter is the phenomenon of statistical stability, and its scope of research is the adequate description of this phenomenon by so-called hyper-random models, which take into account violations of statistical stability.
The theory of hyper-random phenomena does not erase the achievements of probability theory and classical mathematical statistics, but complements them, extending the statements of these disciplines to a sphere they had not yet considered: the one in which statistics do not converge.

Parameters of statistical stability

There are a number of parameters characterizing statistical stability, in particular the parameters of statistical instability with respect to the average, the parameters of statistical instability with respect to the standard deviation, and the intervals of statistical stability with respect to the average, the standard deviation, and other statistics. The mathematically correct determination of these parameters and the development of a methodology for their estimation for unlimited and limited sample sizes are studied within the framework of the theory of hyper-random phenomena.
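As a rough illustration of how instability with respect to the average might be quantified (a simplified proxy of my own, not the exact parameters defined in the theory of hyper-random phenomena), one can compare the observed spread of block means with the spread that i.i.d. sampling would predict:

```python
import math
import random

def instability_proxy(samples, n_blocks=10):
    """Spread of block means relative to the spread predicted for i.i.d. data.
    Values near 1 suggest statistical stability of the average; much larger
    values indicate its violation.  (An illustrative proxy, not the exact
    parameter defined in the theory of hyper-random phenomena.)"""
    n = len(samples) // n_blocks
    used = samples[:n * n_blocks]
    block_means = [sum(used[i * n:(i + 1) * n]) / n for i in range(n_blocks)]
    grand = sum(block_means) / n_blocks
    spread = math.sqrt(sum((m - grand) ** 2 for m in block_means) / n_blocks)
    var = sum((x - grand) ** 2 for x in used) / len(used)
    predicted = math.sqrt(var / n)  # i.i.d. prediction for block-mean spread
    return spread / predicted

rng = random.Random(2)
stable = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
drifting = [rng.gauss(0.0, 1.0) + k / 5_000 for k in range(10_000)]
print("stationary noise:", instability_proxy(stable))
print("drifting noise:  ", instability_proxy(drifting))
```

For the stationary noise the ratio stays near 1, while for the noise with a slow drift it is much larger, signalling a violation of statistical stability with respect to the average.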

The areas of effective use of various approaches for description of the statistical stability phenomenon

The main parameters defining the bounds of the effective use of classical probability theory and the theory of hyper-random phenomena are the intervals of statistical stability with respect to various statistics. Within these intervals, violations of statistical stability are negligible, so the use of probability theory is possible and reasonable. Outside these intervals, violations of statistical stability are essential, so it is necessary to use methods that take these violations into account, in particular the methods of the theory of hyper-random phenomena.
The limitations of statistical stability become apparent for large sample sizes and in the passage to the limit. Sample sizes are often small, so many practical tasks can be solved with acceptable accuracy using random models. Such models are usually simpler than hyper-random models, and are therefore preferred for not very large sample sizes.
However, hyper-random models have obvious advantages over stochastic and other simpler models in cases where the limited character of statistical stability becomes apparent, usually for long observation intervals and large sample sizes.
Therefore, the primary applications of hyper-random models are the statistical analysis of various physical processes of long duration, the high-precision measurement of various physical quantities, and the forecasting of physical processes by statistical processing of large data sets.
Twenty-first century research shows that hyper-random models can also be useful for solving other tasks, for instance in the design of radio-electronic equipment.