Wilcoxon signed-rank test


The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test used to compare two related samples, matched samples, or repeated measurements on a single sample in order to assess whether their population mean ranks differ. It can be used as an alternative to the paired Student's t-test when the distribution of the differences between the paired observations cannot be assumed to be normally distributed. Equivalently, it is a nonparametric test of whether two dependent samples were selected from populations having the same distribution.
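As a rough illustration of how such a paired comparison is typically run in software, the sketch below uses SciPy's `scipy.stats.wilcoxon`; the `before`/`after` values are hypothetical and not taken from this article.

```python
# Minimal sketch: paired-sample Wilcoxon signed-rank test with SciPy.
# The before/after values are hypothetical and purely illustrative.
from scipy.stats import wilcoxon

before = [7.2, 6.8, 8.1, 5.9, 7.7, 6.3, 8.4, 7.0]
after  = [7.9, 6.6, 9.0, 6.4, 8.3, 6.0, 9.4, 7.4]

statistic, p_value = wilcoxon(before, after)
print(statistic, p_value)
```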

History

The test is named for Frank Wilcoxon who, in a single paper, proposed both it and the rank-sum test for two independent samples. The test was popularized by Sidney Siegel in his influential textbook on non-parametric statistics. Siegel used the symbol T for a value related to, but not the same as, the signed-rank sum W. In consequence, the test is sometimes referred to as the Wilcoxon T test, and the test statistic is reported as a value of T.

Assumptions

  1. Data are paired and come from the same population.
  2. Each pair is chosen randomly and independently.
  3. The data are measured on at least an interval scale when, as is usual, within-pair differences are calculated to perform the test.

Test procedure

Let N be the sample size, i.e., the number of pairs. Thus, there are a total of 2N data points. For pairs $i = 1, \dots, N$, let $x_{1,i}$ and $x_{2,i}$ denote the measurements.
  1. For $i = 1, \dots, N$, calculate $|x_{2,i} - x_{1,i}|$ and $\operatorname{sgn}(x_{2,i} - x_{1,i})$, where $\operatorname{sgn}$ is the sign function.
  2. Exclude pairs with $|x_{2,i} - x_{1,i}| = 0$. Let $N_r$ be the reduced sample size.
  3. Order the remaining $N_r$ pairs from smallest absolute difference to largest absolute difference, $|x_{2,i} - x_{1,i}|$.
  4. Rank the pairs, starting with the pair with the smallest non-zero absolute difference as 1. Ties receive a rank equal to the average of the ranks they span. Let $R_i$ denote the rank.
  5. Calculate the test statistic
     $W = \sum_{i=1}^{N_r} \operatorname{sgn}(x_{2,i} - x_{1,i}) \cdot R_i$, the sum of the signed ranks.
  6. Under the null hypothesis $H_0$, W follows a specific distribution with no simple expression. This distribution has an expected value of 0 and a variance of $\tfrac{N_r (N_r + 1)(2N_r + 1)}{6}$.
     W can be compared to a critical value from a reference table.
     The two-sided test consists in rejecting $H_0$ if $|W| \ge W_{\mathrm{critical}, N_r}$.
  7. As $N_r$ increases, the sampling distribution of W converges to a normal distribution. Thus,
     For $N_r \ge 10$, a z-score can be calculated as $z = \frac{W}{\sigma_W}$, where $\sigma_W = \sqrt{\tfrac{N_r (N_r + 1)(2N_r + 1)}{6}}$.
     To perform a two-sided test, reject $H_0$ if $|z| > z_{\mathrm{critical}}$ (for example, $z_{\mathrm{critical}} = 1.96$ at the 5% level).
     Alternatively, one-sided tests can be performed with either the exact or the approximate distribution. p-values can also be calculated.
  8. For $N_r < 10$, the exact distribution needs to be used.
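The sketch below translates these steps directly into NumPy and SciPy's `rankdata` (which assigns averaged ranks to ties). It is an illustrative rendering of the procedure, not a substitute for `scipy.stats.wilcoxon`; the helper name and example data are ours.

```python
# Sketch of the procedure above: compute W and its large-sample z-score.
# Helper name and example data are hypothetical; for real analyses prefer
# scipy.stats.wilcoxon, which also provides exact p-values and corrections.
import numpy as np
from scipy.stats import rankdata, norm

def signed_rank_W(x1, x2):
    d = np.asarray(x2, dtype=float) - np.asarray(x1, dtype=float)
    d = d[d != 0]                      # step 2: drop zero differences
    n_r = d.size                       # reduced sample size N_r
    ranks = rankdata(np.abs(d))        # steps 3-4: rank |d|, ties get average rank
    w = np.sum(np.sign(d) * ranks)     # step 5: sum of the signed ranks
    sigma_w = np.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 6.0)
    z = w / sigma_w                    # normal approximation for large N_r
    p_two_sided = 2 * norm.sf(abs(z))
    return w, n_r, z, p_two_sided

x1 = [1.8, 2.1, 3.0, 2.4, 2.9, 1.7, 2.6, 2.2]
x2 = [2.0, 1.9, 3.4, 2.4, 3.3, 2.5, 2.8, 2.0]
print(signed_rank_W(x1, x2))
```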

Example

In the example, $\operatorname{sgn}$ denotes the sign function, $|x_{2,i} - x_{1,i}|$ the absolute value of the difference, and $R_i$ the rank. Notice that pairs 3 and 9 are tied in absolute value. They would be ranked 1 and 2, so each gets the average of those ranks, 1.5.
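Since the example's table of measurements is not reproduced above, the short sketch below uses hypothetical differences, chosen so that two pairs tie in absolute value, to show how `scipy.stats.rankdata` assigns each tied pair the average of the ranks it spans.

```python
# Hypothetical differences: two entries tied at |d| = 5 share ranks 1 and 2,
# so each receives the average rank 1.5.
from scipy.stats import rankdata

diffs = [15, -7, 5, 20, -9, 17, -12, 5, -10]
ranks = rankdata([abs(d) for d in diffs])
print(ranks)   # the two |d| = 5 entries both get rank 1.5
```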

Historical T statistic

In historical sources a different statistic, denoted by Siegel as the T statistic, was used. The T statistic is the smaller of the two sums of ranks of a given sign; in the example, therefore, T would equal 3 + 4 + 5 + 6 = 18. Low values of T are required for significance. T is easier to calculate by hand than W, and the test is equivalent to the two-sided test described above; however, the distribution of the statistic under $H_0$ has to be adjusted.
Note: Critical values of T for given values of $N_r$ can be found in appendices of statistics textbooks, for example in Table B-3 of Nonparametric Statistics: A Step-by-Step Approach, 2nd Edition, by Dale I. Foreman and Gregory W. Corder.
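As a small sketch (function name ours, not from the article), T can be obtained from the signed ranks as the smaller of the positive-rank sum and the negative-rank sum; since W is the difference of those two sums, the two statistics carry the same information once $N_r$ is known.

```python
# Sketch: compute Siegel's T (smaller of the two rank sums) from paired data.
# Function name is an illustrative assumption.
import numpy as np
from scipy.stats import rankdata

def siegel_T(x1, x2):
    d = np.asarray(x2, dtype=float) - np.asarray(x1, dtype=float)
    d = d[d != 0]                   # drop zero differences, as in the procedure
    ranks = rankdata(np.abs(d))     # average ranks for ties
    pos_sum = ranks[d > 0].sum()
    neg_sum = ranks[d < 0].sum()
    return min(pos_sum, neg_sum)    # note: W = pos_sum - neg_sum
```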

Limitation

As demonstrated in the example, when the difference between the members of a pair is zero, the observation is discarded. This is of particular concern if the samples are taken from a discrete distribution. In these scenarios the modification to the Wilcoxon test proposed by Pratt (1959) provides an alternative that incorporates the zero differences. This modification is more robust for data on an ordinal scale.
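SciPy exposes this choice through the `zero_method` argument of `scipy.stats.wilcoxon`; the sketch below contrasts the default treatment with Pratt's on hypothetical data containing zero differences.

```python
# Hypothetical paired data containing zero differences; "pratt" keeps the zeros
# in the ranking step (Pratt, 1959), while the default "wilcox" discards them.
from scipy.stats import wilcoxon

x = [3, 5, 5, 4, 6, 7, 2, 5, 8, 6, 4, 9]
y = [4, 5, 7, 3, 8, 6, 4, 5, 9, 8, 4, 7]

res_wilcox = wilcoxon(x, y, zero_method="wilcox")
res_pratt  = wilcoxon(x, y, zero_method="pratt")
print(res_wilcox)
print(res_pratt)
```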

Effect size

To compute an effect size for the signed-rank test, one can use the rank-biserial correlation.
If the test statistic W is reported, the rank correlation r is equal to the test statistic W divided by the total rank sum S, or r = W/S.
Using the above example, the test statistic is W = 9. The sample size of 9 has a total rank sum of S = 9(9 + 1)/2 = 45. Hence, the rank correlation is 9/45, so r = 0.20.
If the test statistic T is reported, an equivalent way to compute the rank correlation is with the difference in proportion between the two rank sums, which is the Kerby simple difference formula. To continue with the current example, the sample size is 9, so the total rank sum is 45. T is the smaller of the two rank sums, so T = 3 + 4 + 5 + 6 = 18. From this information alone, the remaining rank sum can be computed, because it is the total sum S minus T, or in this case 45 − 18 = 27. Next, the two rank-sum proportions are 27/45 = 60% and 18/45 = 40%. Finally, the rank correlation is the difference between the two proportions, hence r = 0.20.
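The short sketch below reproduces both routes to the rank-biserial correlation using the numbers from the worked example (variable names are ours).

```python
# Rank-biserial correlation from the worked example: W = 9, N_r = 9, T = 18.
n_r = 9
S = n_r * (n_r + 1) / 2          # total rank sum: 45
W = 9                            # signed-rank statistic from the example
r_from_W = W / S                 # 9/45 = 0.20

T = 18                           # smaller rank sum (Siegel's T) from the example
r_kerby = (S - T) / S - T / S    # Kerby simple difference: 0.60 - 0.40 = 0.20
print(r_from_W, r_kerby)
```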

Software implementations