Equivalence test


Equivalence tests are a variation of hypothesis tests used to draw statistical inferences from observed data. In equivalence tests, the null hypothesis is defined as an effect large enough to be deemed interesting, specified by an equivalence bound. The alternative hypothesis is any effect that is less extreme than said equivalence bound. The observed data are statistically compared against the equivalence bounds. If the statistical test indicates the observed data would be surprising if the true effect were at least as extreme as the equivalence bounds, a Neyman-Pearson approach to statistical inferences can be used to reject effect sizes larger than the equivalence bounds with a pre-specified Type I error rate.
Equivalence testing originates from the field of pharmacokinetics. One application is to show that a new drug that is cheaper than available alternatives works just as well as an existing drug. In essence, equivalence tests consist of calculating a confidence interval around an observed effect size and rejecting effects more extreme than the equivalence bound whenever the confidence interval does not overlap with the equivalence bound. In two-sided tests, both an upper and a lower equivalence bound are specified. In non-inferiority trials, where the goal is to test the hypothesis that a new treatment is not worse than existing treatments, only a lower equivalence bound is pre-specified. Equivalence tests can be performed in addition to null-hypothesis significance tests, which might prevent the common misinterpretation of p-values larger than the alpha level as support for the absence of a true effect. Furthermore, equivalence tests can identify effects that are statistically significant but practically insignificant: effects that are statistically different from zero, yet also statistically smaller than any effect size deemed worthwhile.
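For illustration, the confidence-interval formulation can be sketched in a few lines of Python. The measurements, the equivalence bound of ±0.5, and the pooled-variance standard error below are hypothetical choices, not values from any actual trial:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for an existing and a new drug:
old_drug = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7])
new_drug = np.array([5.0, 4.9, 5.1, 5.2, 4.8, 5.0, 5.3, 4.9])
delta = 0.5   # equivalence bound, chosen on substantive grounds

n1, n2 = len(new_drug), len(old_drug)
df = n1 + n2 - 2
sp2 = ((n1 - 1) * new_drug.var(ddof=1) + (n2 - 1) * old_drug.var(ddof=1)) / df
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
diff = new_drug.mean() - old_drug.mean()

# 90% CI around the observed difference:
lo, hi = diff + np.array([-1, 1]) * stats.t.ppf(0.95, df) * se
print(f"90% CI for the difference: ({lo:.3f}, {hi:.3f})")
print("equivalence established" if -delta < lo and hi < delta
      else "equivalence not established")
```

A 90% confidence interval is used for a test at α = 0.05 because each equivalence bound is checked with a one-sided test at level α, which corresponds to a 1 − 2α confidence interval.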

TOST procedure

"A very simple equivalence testing approach is the ‘two-one-sided t-tests’ procedure. In the TOST procedure an upper and lower equivalence bound is specified based on the smallest effect size of interest. Two composite null hypotheses are tested: H01: Δ ≤ –ΔL and H02: Δ ≥ ΔU. When both these one-sided tests can be statistically rejected, we can conclude that –ΔL < Δ < ΔU, or that the observed effect falls within the equivalence bounds and is statistically smaller than any effect deemed worthwhile, and considered practically equivalent." Alternatives to the TOST procedure have been developed as well. A recent modification to TOST makes the approach feasible in cases of repeated measures and assessing multiple variables.

Comparison between t-test and equivalence test

The equivalence test can, for comparison purposes, be induced from the t-test. Considering a t-test at significance level α_t-test achieving a power of 1 − β_t-test for a relevant effect size d_r, both tests lead to the same inference whenever the parameters Δ = d_r, α_equiv.-test = β_t-test and β_equiv.-test = α_t-test coincide, i.e. the error types (type I and type II) are interchanged between the t-test and the equivalence test. To achieve this for the t-test, either the sample size calculation needs to be carried out correctly, or the t-test significance level α_t-test needs to be adjusted, yielding the so-called revised t-test. Both approaches have difficulties in practice, since sample size planning relies on unverifiable assumptions about the standard deviation, and the revised t-test leads to numerical problems. Preserving the test behaviour, those limitations can be removed by using an equivalence test.
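The interchange of error types can be made concrete with the standard normal-approximation sample-size formulas: the two-sided t-test halves α while the TOST procedure halves β, so setting Δ = d_r and swapping the error rates yields the same required sample size for both tests. A minimal sketch with illustrative numbers:

```python
from scipy import stats

z = stats.norm.ppf
d_r = 0.5                      # relevant effect size (illustrative)
alpha_t, beta_t = 0.05, 0.20   # t-test: level 0.05, power 0.80

# Two-sample t-test: per-group n needed to detect d_r (two-sided alpha):
n_t = 2 * ((z(1 - alpha_t / 2) + z(1 - beta_t)) / d_r) ** 2

# Matched TOST equivalence test: bounds +/- d_r, error rates interchanged
# (alpha_equiv = beta_t, beta_equiv = alpha_t), power evaluated at a true
# difference of zero; TOST halves beta where the t-test halves alpha:
alpha_e, beta_e = beta_t, alpha_t
n_e = 2 * ((z(1 - alpha_e) + z(1 - beta_e / 2)) / d_r) ** 2

print(f"t-test n per group:           {n_t:.1f}")
print(f"equivalence test n per group: {n_e:.1f}")  # identical by symmetry
```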
The second figure allows a visual comparison of the equivalence test and the t-test when the sample size calculation is affected by differences between the a priori standard deviation σ and the sample's standard deviation σ̂, which is a common problem. Using an equivalence test instead of a t-test additionally ensures that α_equiv.-test is bounded, which the t-test does not do in the case σ̂/σ > 1, where its type II error grows arbitrarily large. On the other hand, σ̂/σ < 1 results in the t-test being stricter than the d_r specified in the planning, which may randomly penalize the sample source. This makes the equivalence test safer to use.
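This safety claim can be probed with a small simulation. Under the illustrative assumptions below, the sample size is planned with an a priori standard deviation of 1, the true difference sits at the boundary d_r, and the data are generated with increasingly larger true standard deviations; the t-test's type II error then inflates well beyond the planned β, while the rate at which TOST falsely declares equivalence stays bounded by its α:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
d_r, alpha, beta = 0.5, 0.05, 0.05
z = stats.norm.ppf
# Per-group n planned for the t-test, assuming sigma = 1 a priori:
n = int(np.ceil(2 * ((z(1 - alpha / 2) + z(1 - beta)) / d_r) ** 2))

n_sim = 10_000
for sigma in (1.0, 1.5, 2.0):  # sigma > 1: variability was underestimated
    miss_t = equiv = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, sigma, n)
        y = rng.normal(d_r, sigma, n)      # true difference at the boundary
        miss_t += stats.ttest_ind(y, x)[1] > alpha   # t-test misses d_r
        diff = y.mean() - x.mean()
        se = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / n)
        df = 2 * n - 2
        p1 = stats.t.sf((diff + d_r) / se, df)    # H01: diff <= -d_r
        p2 = stats.t.cdf((diff - d_r) / se, df)   # H02: diff >= +d_r
        equiv += max(p1, p2) < alpha              # TOST declares equivalence
    print(f"sigma={sigma}: t-test type II error {miss_t / n_sim:.3f}, "
          f"TOST false-equivalence rate {equiv / n_sim:.3f}")
```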