Randomness tests


Randomness tests, in data evaluation, are used to analyze the distribution of a set of data to see whether it can be described as random (patternless). In stochastic modeling, as in some computer simulations, the hoped-for randomness of potential input data can be verified by a formal test for randomness, to show that the data are valid for use in simulation runs. In some cases, data reveal an obvious non-random pattern, as with so-called "runs in the data". If a selected set of data fails the tests, then parameters can be changed or other randomized data can be used which does pass the tests for randomness.
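The following is a minimal sketch, with made-up input values, of one classical check for "runs in the data", the Wald-Wolfowitz runs test; it is intended only as an example of the kind of formal test meant here, not a prescribed procedure.

```python
# A minimal sketch of the Wald-Wolfowitz runs test on simulation input data.
# The data values below are made up purely for illustration.
import math
from statistics import median

def runs_test(values):
    """Dichotomize about the median and return a z-score and two-sided p-value."""
    m = median(values)
    signs = [v > m for v in values if v != m]   # drop values equal to the median
    n1 = sum(signs)                             # observations above the median
    n2 = len(signs) - n1                        # observations below the median
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))        # two-sided normal approximation
    return z, p

data = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.05, 0.95, 0.15, 0.85]
z, p = runs_test(data)
print(f"z = {z:+.2f}, p = {p:.3f}")  # a small p-value flags suspiciously regular runs
```

On this deliberately alternating data the test reports a very small p-value, which is exactly the kind of result that would disqualify a data set from use in simulation runs.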

Background

The issue of randomness is an important philosophical and theoretical question. Tests for randomness can be used to determine whether a data set has a recognisable pattern, which would indicate that the process that generated it is significantly non-random. For the most part, statistical analysis has, in practice, been much more concerned with finding regularities in data than with testing for randomness. Over the past century, however, a variety of tests of randomness have been proposed, especially in the context of games of chance. Most often the tests are applied not directly to sequences of 0s and 1s, but to numbers obtained from blocks of 8 such elements.
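As a small illustration of that blocking step, the following sketch groups a bit stream into bytes and computes a chi-squared statistic over the 256 possible byte values; the bit source and block count are arbitrary stand-ins chosen for the example.

```python
# A sketch of the "blocks of 8" idea: bits are grouped into bytes (values 0-255)
# and a chi-squared statistic over the byte frequencies is computed.
# The bit source and block count here are arbitrary stand-ins.
import random
from collections import Counter

bits = [random.getrandbits(1) for _ in range(8 * 10000)]   # 10,000 blocks of 8 bits
blocks = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]

counts = Counter(blocks)
expected = len(blocks) / 256
chi2 = sum((counts.get(v, 0) - expected) ** 2 / expected for v in range(256))
print(f"chi-squared = {chi2:.1f} on 255 degrees of freedom")  # roughly 255 for random data
```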
Many "random number generators" in use today are defined algorithms, and so are actually pseudo-random number generators. The sequences they produce are called pseudo-random sequences. These generators do not always generate sequences which are sufficiently random, but instead can produce sequences which contain patterns. For example, the infamous RANDU routine fails many randomness tests dramatically, including the spectral test.
Stephen Wolfram used randomness tests on the output of Rule 30 to examine its potential for generating random numbers, though it was shown to have an effective key size far smaller than its actual size and to perform poorly on a chi-squared test. The use of an ill-conceived random number generator can put the validity of an experiment in doubt by violating statistical assumptions. Although there are commonly used statistical testing techniques such as the NIST standards, Yongge Wang showed that the NIST standards are not sufficient, and designed statistical-distance-based and law-of-the-iterated-logarithm-based testing techniques. Using these techniques, Yongge Wang and Tony Nicol detected weaknesses in commonly used pseudorandom generators, such as the well-known Debian version of the OpenSSL pseudorandom generator, which was fixed in 2008.
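A simplified sketch of the law-of-the-iterated-logarithm idea (not Wang's actual test) follows: for a truly random bit sequence, the normalized excess of ones defined below should stay roughly within [-1, 1] and approach that boundary as the sequence grows, so values drifting far outside it are a warning sign.

```python
# A simplified sketch of a law-of-the-iterated-logarithm style check (not Wang's
# actual test): for a truly random bit sequence, the statistic below should stay
# roughly within [-1, 1] and approach that boundary as n grows.
import math
import random

def lil_statistic(bits):
    """Return (2*S_n - n) / sqrt(2 * n * ln ln n), where S_n is the number of ones."""
    n = len(bits)
    s = sum(bits)
    return (2 * s - n) / math.sqrt(2 * n * math.log(math.log(n)))

bits = [random.getrandbits(1) for _ in range(2**20)]   # stand-in bit source
print(f"LIL statistic: {lil_statistic(bits):+.3f}")    # values far outside [-1, 1] are suspect
```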

Specific tests for randomness

A fairly small number of different types of random number generators have been used in practice. They can be found in the list of random number generators.
These generators have varying degrees of success in passing the accepted test suites. Several widely used generators fail the tests more or less badly, while other, 'better' generators, as well as some earlier ones, have been largely ignored.
There are many practical measures of randomness for a binary sequence. These include measures based on statistical tests, transforms, and complexity, or a mixture of these. A well-known and widely used collection of tests was the Diehard Battery of Tests, introduced by Marsaglia; this was extended to the TestU01 suite by L'Ecuyer and Simard. The use of the Hadamard transform to measure randomness was proposed by S. Kak and developed further by Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.
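The sketch below is a toy battery in the spirit of such collections (vastly simplified, and not an implementation of Diehard or TestU01): two elementary statistics are computed for one bit sequence, each reported as a z-score.

```python
# A toy battery in the spirit of test collections such as Diehard or TestU01
# (vastly simplified; not an implementation of either).
import math
import random

def monobit_z(bits):
    """Frequency test: the number of ones should be close to n/2."""
    n = len(bits)
    return (2 * sum(bits) - n) / math.sqrt(n)

def runs_z(bits):
    """Runs test for fair bits: the run count is 1 + Binomial(n-1, 1/2)."""
    n = len(bits)
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    return (runs - (n + 1) / 2) / math.sqrt((n - 1) / 4)

bits = [random.getrandbits(1) for _ in range(100000)]   # stand-in generator
print(f"monobit z = {monobit_z(bits):+.2f}, runs z = {runs_z(bits):+.2f}")
```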
Several of these tests, which are of linear complexity, provide spectral measures of randomness. T. Beth and Z-D. Dai purported to show that Kolmogorov complexity and linear complexity are practically the same, although Y. Wang later showed their claims are incorrect. Nevertheless, Wang also demonstrated that for Martin-Löf random sequences, the Kolmogorov complexity is essentially the same as linear complexity.
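The linear complexity of a binary sequence can be computed with the Berlekamp-Massey algorithm; the sketch below does so over GF(2). A truly random n-bit sequence is expected to have linear complexity close to n/2, whereas a highly regular sequence has a very small value.

```python
# A sketch of the linear complexity measure, computed with the Berlekamp-Massey
# algorithm over GF(2): the result is the length of the shortest LFSR that
# reproduces the given 0/1 sequence.
import random

def berlekamp_massey(bits):
    N = len(bits)
    c, b = [0] * N, [0] * N          # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for n in range(N):
        # discrepancy between the next bit and the current LFSR's prediction
        d = bits[n]
        for i in range(1, L + 1):
            d ^= c[i] & bits[n - i]
        if d:
            t = c[:]
            for i in range(N - (n - m)):
                c[n - m + i] ^= b[i]
            if 2 * L <= n:
                L, m, b = n + 1 - L, n, t
    return L

print(berlekamp_massey([0, 1] * 32))                                  # regular: complexity 2
print(berlekamp_massey([random.getrandbits(1) for _ in range(64)]))   # random: about 32
```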
These practical tests make it possible to compare the randomness of strings. On probabilistic grounds, all strings of a given length are equally likely to occur, yet different strings have different Kolmogorov complexities. For example, compare a 64-character string consisting of 32 repetitions of '01' (String 1) with a 64-character string that has no such regular structure (String 2).
String 1 admits a short linguistic description: "32 repetitions of '01'". This description has 22 characters, and the string can be constructed efficiently from a few basis sequences. String 2 has no obvious simple description other than writing down the string itself, which takes 64 characters, and it has no comparably efficient basis-function representation. Under a linear Hadamard spectral test, the first sequence is found to be much less random than the second, which agrees with intuition.
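The following is a self-contained sketch of such a comparison using a fast Walsh-Hadamard transform; the second string is an arbitrary random-looking 64-bit string used purely for illustration.

```python
# A sketch of the Hadamard spectral comparison: string1 is the "01"-repetition
# string; string2 is an arbitrary random-looking 64-bit string chosen here only
# for illustration (it is not taken from any particular source).
def fwht(a):
    """In-place fast Walsh-Hadamard transform; len(a) must be a power of two."""
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def spectral_peak(bitstring):
    """Largest non-DC Walsh-Hadamard coefficient of the +/-1 version of the string."""
    coeffs = fwht([1 if ch == "1" else -1 for ch in bitstring])
    return max(abs(c) for c in coeffs[1:])

string1 = "01" * 32
string2 = "1100100001100001110111101110110011111010010000100101011110010110"
# string1 concentrates all of its energy in a single coefficient (a peak of 64),
# whereas a random-looking string is expected to have a much smaller peak,
# on the order of sqrt(64) = 8.
print(spectral_peak(string1), spectral_peak(string2))
```

The single dominant coefficient for String 1 is the spectral signature of its regularity, which is why the test rates it as much less random than a string whose transform energy is spread across many coefficients.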

Notable software implementations