Foundations of statistics


The foundations of statistics concern the epistemological debate in statistics over how one should conduct inductive inference from data. Among the issues considered in statistical inference are the question of Bayesian inference versus frequentist inference, the distinction between Fisher's "significance testing" and Neyman–Pearson "hypothesis testing", and whether the likelihood principle should be followed. Some of these issues have been debated for up to 200 years without resolution.
Bandyopadhyay & Forster describe four statistical paradigms: "classical statistics or error statistics, Bayesian statistics, likelihood-based statistics, and the Akaikean-Information Criterion-based statistics".
Savage's text Foundations of Statistics has been cited over 15,000 times on Google Scholar.

Fisher's "significance testing" vs. Neyman–Pearson "hypothesis testing"

In the development of classical statistics in the second quarter of the 20th century two competing models of inductive statistical testing were developed. Their relative merits were hotly debated until Fisher's death. While a hybrid of the two methods is widely taught and used, the philosophical questions raised in the debate have not been resolved.

Significance testing

Fisher popularized significance testing, primarily in two popular and highly influential books. Fisher's writing style in these books was strong on examples and relatively weak on explanations. The books lacked proofs or derivations of significance test statistics.
Fisher's more explanatory and philosophical writing was written much later. There appear to be some differences between his earlier practices and his later opinions.
Fisher was motivated to obtain scientific experimental results without the explicit influence of prior opinion. The significance test is a probabilistic version of modus tollens, a classic form of deductive inference. The significance test might be simplistically stated, "If the evidence is sufficiently discordant with the hypothesis, reject the hypothesis". In application, a statistic is calculated from the experimental data, the probability of obtaining a result at least as extreme under the hypothesis is determined, and that probability is compared to a threshold. The threshold is arbitrary. A common application of the method is deciding whether a treatment has a reportable effect based on a comparative experiment. Statistical significance is a measure of probability, not practical importance. It can be regarded as a requirement placed on statistical signal/noise. The method is based on the assumed existence of an imaginary infinite population corresponding to the null hypothesis.
The significance test requires only one hypothesis. The result of the test is to reject the hypothesis (or fail to reject it), a simple dichotomy. Failure to reject does not distinguish between the truth of the hypothesis and the insufficiency of the evidence to disprove it; in that respect the test resembles a criminal trial, in which the defendant's guilt is assessed but innocence is never established.
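A minimal sketch of that procedure follows; the data, the choice of Student's t as the statistic, and the 0.05 threshold are all illustrative assumptions, not anything prescribed by Fisher's texts.

    # Significance testing sketch: compute a statistic from the data, find the
    # probability of a result at least as extreme under the null hypothesis,
    # and compare that probability to an arbitrary threshold.
    from scipy import stats

    treatment = [23.1, 25.4, 24.8, 26.0, 25.1]   # hypothetical measurements
    control = [22.0, 23.5, 22.8, 23.9, 22.5]     # hypothetical measurements

    t_stat, p_value = stats.ttest_ind(treatment, control)
    alpha = 0.05                                  # arbitrary threshold
    if p_value < alpha:
        print(f"p = {p_value:.3f}: reject the null hypothesis")
    else:
        print(f"p = {p_value:.3f}: evidence insufficient to reject")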

Hypothesis testing

Neyman and Pearson collaborated on a different, but related, problem – selecting among competing hypotheses based on the experimental evidence alone. Of their joint papers, the most cited was from 1933. The famous result of that paper is the Neyman–Pearson lemma. The lemma says that a ratio of probabilities is an excellent criterion for selecting a hypothesis. The paper proved an optimality of Student's t-test. Neyman expressed the opinion that hypothesis testing was a generalization of and an improvement on significance testing. The rationale for their methods is found in their joint papers.
Hypothesis testing requires multiple hypotheses. A hypothesis is always selected, a multiple choice. A lack of evidence is not an immediate consideration. The method is based on the assumption of a repeated sampling of the same population.
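In a standard modern formulation (not the wording of the 1933 paper, and assuming continuous densities so the randomized-boundary case can be ignored), the lemma concerns two simple hypotheses, a null with density f(x | θ0) and an alternative with density f(x | θ1); the most powerful test of a given size α rejects the null exactly when the likelihood ratio exceeds a cutoff:

\[
\Lambda(x) = \frac{f(x \mid \theta_1)}{f(x \mid \theta_0)} > k,
\qquad \text{with } k \text{ chosen so that } \Pr\big(\Lambda(X) > k \mid H_0\big) = \alpha .
\]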

Grounds of disagreement

The length of the dispute allowed the debate of a wide range of issues regarded as foundational to statistics.
Three of the exchanges are summarized below, each giving Fisher's attack, Neyman's rebuttal and a discussion.

Fisher's attack: Repeated sampling of the same population. Such sampling is the basis of frequentist probability; Fisher preferred fiducial inference.
Neyman's rebuttal: Fisher's theory of fiducial inference is flawed; paradoxes are common.
Discussion: Fisher's attack on the basis of frequentist probability failed, but was not without result. He identified a specific case where the two schools of testing reach different results. This case is one of several that are still troubling. Commentators believe that the "right" answer is context dependent. Fiducial probability has not fared well, being virtually without advocates, while frequentist probability remains a mainstream interpretation.

Fisher's attack: Type II errors (which result from an alternative hypothesis).
Neyman's rebuttal: A purely probabilistic theory of tests requires an alternative hypothesis.
Discussion: Fisher's attack on type II errors has faded with time. In the intervening years statistics has separated the exploratory from the confirmatory. In the current environment, the concept of type II errors is used in power calculations for determining the sample size of confirmatory hypothesis tests.

Fisher's attack: Inductive behavior.
Discussion: Fisher's attack on inductive behavior has been largely successful because of his selection of the field of battle. While operational decisions are routinely made on a variety of criteria, scientific conclusions from experimentation are typically made on the basis of probability alone.

In this exchange, Fisher also discussed the requirements for inductive inference, with specific criticism of cost functions penalizing faulty judgements. Neyman countered that Gauss and Laplace used them. This exchange of arguments occurred 15 years after textbooks began teaching a hybrid theory of statistical testing.
Fisher and Neyman disagreed about the foundations of statistics. They were separated by attitudes and perhaps language. Fisher was a scientist and an intuitive mathematician, for whom inductive reasoning was natural. Neyman was a rigorous mathematician, convinced by deductive reasoning rather than by a probability calculation based on an experiment. Thus there was an underlying clash between the applied and the theoretical, between science and mathematics.

Related history

Neyman, who had occupied the same building in England as Fisher, accepted a position on the west coast of the United States of America in 1938. His move effectively ended his collaboration with Pearson and their development of hypothesis testing. Further development was continued by others.
Textbooks provided a hybrid version of significance and hypothesis testing by 1940. None of the principals had any known personal involvement in the further development of the hybrid taught in introductory statistics today.
Statistics later developed in different directions including decision theory, Bayesian statistics, exploratory data analysis, robust statistics and nonparametric statistics. Neyman–Pearson hypothesis testing contributed strongly to decision theory, which is very heavily used. Hypothesis testing readily generalized to accept prior probabilities, which gave it a Bayesian flavor.
Neyman–Pearson hypothesis testing has become an abstract mathematical subject taught in postgraduate statistics, while most of what is taught to undergraduates and used under the banner of hypothesis testing is from Fisher.

Contemporary opinion

No major battles between the two classical schools of testing have erupted for decades, but sniping continues. After generations of dispute, there is virtually no chance that either statistical testing theory will replace the other in the foreseeable future.
The hybrid of the two competing schools of testing can be viewed very differently – as the imperfect union of two mathematically complementary ideas or as the fundamentally flawed union of philosophically incompatible ideas. Fisher enjoyed some philosophical advantage, while Neyman & Pearson employed the more rigorous mathematics. Hypothesis testing is controversial among some users, but the most popular alternative is based on the same mathematics.
The history of the development left testing without a single citable authoritative source for the hybrid theory that reflects common statistical practice. The merged terminology is also somewhat inconsistent. There is strong empirical evidence that the graduates of an introductory statistics class have a weak understanding of the meaning of hypothesis testing.

Summary

Two different interpretations of probability have long existed. Gauss and Laplace could have debated alternatives more than 200 years ago. Two competing schools of statistics have developed as a consequence.
Classical inferential statistics was largely developed in the second quarter of the 20th century, much of it in reaction to the probability of the time, which utilized the controversial principle of indifference to establish prior probabilities. The rehabilitation of Bayesian inference was a reaction to the limitations of frequentist probability. More reactions followed. While the philosophical interpretations are old, the statistical terminology is not. The current statistical terms "Bayesian" and "frequentist" stabilized in the second half of the 20th century.
The terminology is confusing: the "classical" interpretation of probability is Bayesian while "classical" statistics is frequentist. "Frequentist" also has varying interpretations – different in philosophy than in physics.
The nuances of philosophical probability interpretations are discussed elsewhere. In statistics, the alternative interpretations enable the analysis of different data using different methods based on different models to achieve slightly different goals. Any statistical comparison of the competing schools considers pragmatic criteria beyond the philosophical.

Major contributors

Two major contributors to frequentist methods were Fisher and Neyman. Fisher's interpretation of probability was idiosyncratic. Neyman's views were rigorously frequentist. Three major contributors to 20th century Bayesian statistical philosophy, mathematics and methods were de Finetti, Jeffreys and Savage. Savage popularized de Finetti's ideas in the English-speaking world and made Bayesian mathematics rigorous. In 1965, Dennis Lindley's two-volume work "Introduction to Probability and Statistics from a Bayesian Viewpoint" brought Bayesian methods to a wide audience. Statistics has advanced over the past three generations; the "authoritative" views of the early contributors are not all current.

Contrasting approaches

Frequentist inference

Frequentist inference is partially and tersely described above (in the discussion of Fisher's "significance testing" vs. Neyman–Pearson "hypothesis testing"). Frequentist inference combines several different views. The result is capable of supporting scientific conclusions, making operational decisions and estimating parameters with or without confidence intervals. Frequentist inference is based solely on the evidence.

Bayesian inference

A classical frequency distribution describes the probability of the data. The use of Bayes' theorem allows a more abstract concept – the probability of a hypothesis given the data. The concept was once known as "inverse probability". Bayesian inference updates the probability estimate for a hypothesis as additional evidence is acquired. Bayesian inference is explicitly based on the evidence and prior opinion, which allows it to be based on multiple sets of evidence.
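The "inverse probability" referred to above comes from Bayes' theorem; in the usual modern notation, with H a hypothesis and D the observed data:

\[
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)},
\qquad \text{i.e.} \qquad \text{posterior} \propto \text{likelihood} \times \text{prior}.
\]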

Comparisons of characteristics

Frequentists and Bayesians use different models of probability. Frequentists often consider parameters to be fixed but unknown while Bayesians assign probability distributions to similar parameters. Consequently, Bayesians speak of probabilities that don't exist for frequentists; a Bayesian speaks of the probability of a theory while a true frequentist can speak only of the consistency of the evidence with the theory. Example: a frequentist does not say that there is a 95% probability that the true value of a parameter lies within a confidence interval, saying instead that 95% of confidence intervals contain the true value.
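A small simulation of that example, with all parameter values assumed purely for illustration, shows the sense in which the frequentist statement is about the procedure rather than about any single interval.

    # Long-run coverage of a 95% z-interval with known sigma (values assumed for illustration)
    import numpy as np

    rng = np.random.default_rng(0)
    true_mu, sigma, n, trials = 10.0, 2.0, 25, 10_000
    half_width = 1.96 * sigma / np.sqrt(n)

    covered = 0
    for _ in range(trials):
        sample_mean = rng.normal(true_mu, sigma, n).mean()
        if sample_mean - half_width <= true_mu <= sample_mean + half_width:
            covered += 1

    print(covered / trials)   # close to 0.95: a property of the procedure, not of one interval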

Mathematical results

Neither school is immune from mathematical criticism and neither accepts it without a struggle. Stein's paradox illustrated that finding a "flat" or "uninformative" prior probability distribution in high dimensions is subtle. Bayesians regard that as peripheral to the core of their philosophy while finding frequentism to be riddled with inconsistencies, paradoxes and bad mathematical behavior. Frequentists can explain most of them. Some of the "bad" examples are extreme situations, such as estimating the weight of a herd of elephants from measuring the weight of one, which allows no statistical estimate of the variability of weights. The likelihood principle has been a battleground.
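The standard modern statement of Stein's paradox (given here for concreteness, not quoted from the sources above) is that for X drawn from a p-dimensional normal distribution with mean θ and identity covariance, with p at least 3, the James–Stein estimator has uniformly smaller total squared-error risk than the obvious estimator X, even though X is the generalized Bayes estimator under a flat prior:

\[
\hat{\theta}_{JS} = \left(1 - \frac{p-2}{\lVert X \rVert^{2}}\right) X,
\qquad
\mathbb{E}\big\lVert \hat{\theta}_{JS} - \theta \big\rVert^{2} < \mathbb{E}\big\lVert X - \theta \big\rVert^{2}
\quad \text{for every } \theta .
\]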

Statistical results

Both schools have achieved impressive results in solving real-world problems. Classical statistics effectively has the longer record because numerous results were obtained with mechanical calculators and printed tables of special statistical functions. Bayesian methods have been highly successful in the analysis of information that is naturally sequentially sampled. Many Bayesian methods and some recent frequentist methods require the computational power widely available only in the last several decades.
There is active discussion about combining Bayesian and frequentist methods, but reservations are expressed about the meaning of the results and reducing the diversity of approaches.

Philosophical results

Bayesians are united in opposition to the limitations of frequentism, but are philosophically divided into numerous camps, each with a different emphasis. One philosopher of statistics has noted a retreat from the statistical field to philosophical probability interpretations over the last two generations. There is a perception that successes in Bayesian applications do not justify the supporting philosophy. Bayesian methods often create useful models that are not used for traditional inference and which owe little to philosophy. None of the philosophical interpretations of probability appears robust. The frequentist view is too rigid and limiting while the Bayesian view can be simultaneously objective and subjective, etc.

Illustrative quotations

Likelihood is a synonym for probability in common usage. In statistics that is not true. A probability refers to variable data for a fixed hypothesis while a likelihood refers to variable hypotheses for a fixed set of data. Repeated measurements of a fixed length with a ruler generate a set of observations. Each fixed set of observational conditions is associated with a probability distribution and each set of observations can be interpreted as a sample from that distribution – the frequentist view of probability. Alternatively a set of observations may result from sampling any of a number of distributions. The probabilistic relationship between a fixed sample and a variable distribution is termed likelihood – a Bayesian view of probability. A set of length measurements may imply readings taken by careful, sober, rested, motivated observers in good lighting.
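In symbols (an illustrative gloss on the distinction, not a quotation): if f(x | θ) denotes the probability of data x under hypothesis θ, then holding θ fixed and letting x vary gives a probability distribution, while holding the observed x fixed and letting θ vary gives the likelihood function:

\[
L(\theta \mid x) = f(x \mid \theta),
\qquad
\text{probability: } x \text{ varies with } \theta \text{ fixed;}
\qquad
\text{likelihood: } \theta \text{ varies with } x \text{ fixed.}
\]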
A likelihood is a probability by another name which exists because of the limited frequentist definition of probability. Likelihood is a concept introduced and advanced by Fisher for more than 40 years. The concept was accepted and substantially changed by Jeffreys. In 1962 Birnbaum "proved" the likelihood principle from premises acceptable to most statisticians. The "proof" has been disputed by statisticians and philosophers. The principle says that all of the information in a sample is contained in the likelihood function, which is accepted as a valid probability distribution by Bayesians.
Some significance tests are not consistent with the likelihood principle. Bayesians accept the principle, which is consistent with their philosophy: "[T]he likelihood approach is compatible with Bayesian statistical inference in the sense that the posterior Bayes distribution for a parameter is, by Bayes's Theorem, found by multiplying the prior distribution by the likelihood function." Frequentists interpret the principle adversely to Bayesians as implying no concern about the reliability of evidence: "The likelihood principle of Bayesian statistics implies that information about the experimental design from which evidence is collected does not enter into the statistical analysis of the data." Many Bayesians recognize that implication as a vulnerability.
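A standard textbook illustration of that tension (the numbers are the classic ones, not drawn from the quoted sources): nine successes and three failures give proportional likelihood functions whether the experimenter fixed twelve trials in advance or sampled until the third failure, so the likelihood principle demands identical inferences, yet the two designs yield different one-sided p-values for the hypothesis that success and failure are equally likely.

    # Likelihood-principle sketch: identical data (9 successes, 3 failures), two stopping rules
    from scipy import stats

    # Design 1: the number of trials was fixed at 12; count the successes.
    p_binomial = stats.binom.sf(8, 12, 0.5)    # P(successes >= 9), about 0.073

    # Design 2: sampling continued until the 3rd failure; count the successes seen.
    p_negbinom = stats.nbinom.sf(8, 3, 0.5)    # P(successes >= 9), about 0.033

    print(p_binomial, p_negbinom)   # different p-values despite proportional likelihoods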
The likelihood principle has become an embarrassment to both major philosophical schools of statistics; it has weakened both rather than favoring either. Its strongest supporters claim that it offers a better foundation for statistics than either of the two schools: "[L]ikelihood looks very good indeed when it is compared with these alternatives." These supporters include statisticians and philosophers of science. While Bayesians acknowledge the importance of likelihood for calculation, they believe that the posterior probability distribution is the proper basis for inference.

Modeling

Inferential statistics is based on statistical models. Much of classical hypothesis testing, for example, was based on the assumed normality of the data. Robust and nonparametric statistics were developed to reduce the dependence on that assumption. Bayesian statistics interprets new observations from the perspective of prior knowledge – assuming a modeled continuity between past and present. The design of experiments assumes some knowledge of those factors to be controlled, varied, randomized and observed. Statisticians are well aware of the difficulties in proving causation, saying "correlation does not imply causation".
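A brief sketch of the point about the normality assumption, using hypothetical heavy-tailed data; the particular pairing of Student's t-test with the rank-based Mann–Whitney test is only one illustrative choice among the robust and nonparametric options.

    # A normality-based test and a rank-based nonparametric test on heavy-tailed data
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a = rng.standard_cauchy(30)          # hypothetical heavy-tailed sample
    b = rng.standard_cauchy(30) + 1.0    # same shape of distribution, shifted

    print(stats.ttest_ind(a, b))         # classical test, built on an assumption of normality
    print(stats.mannwhitneyu(a, b))      # nonparametric alternative, no normality assumption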
More complex statistics utilizes more complex models, often with the intent of finding a latent structure underlying a set of variables. As models and data sets have grown in complexity, foundational questions have been raised about the justification of the models and the validity of inferences drawn from them. The range of conflicting opinion expressed about modeling is large.
In the absence of a strong philosophical consensus on statistical modeling, many statisticians accept the cautionary words of statistician George Box: "All models are wrong, but some are useful."

Other reading

For a short introduction to the foundations of statistics, see
In his book Statistics as Principled Argument, Robert P. Abelson articulates the position that statistics serves as a standardized means of settling disputes between scientists who could otherwise each argue the merits of their own positions ad infinitum. From this point of view, statistics is a form of rhetoric; as with any means of settling disputes, statistical methods can succeed only as long as all parties agree on the approach used.
