Kuder–Richardson Formula 20


In psychometrics, the Kuder–Richardson Formula 20 (KR-20), first published in 1937, is a measure of internal consistency reliability for measures with dichotomous choices. It was developed by Kuder and Richardson, and its name stems from the fact that it is the twentieth formula discussed in their seminal paper on test reliability.
It is a special case of Cronbach's α, computed for dichotomous scores. It is often claimed that a high KR-20 coefficient indicates a homogeneous test. However, as with Cronbach's α, homogeneity is an assumption of the coefficient rather than a conclusion that can be drawn from it. It is possible, for example, to obtain a high KR-20 with a multidimensional scale, especially when the number of items is large.
Values can range from 0.00 to 1.00, with high values indicating that the examination is likely to correlate with alternate forms. The KR-20 may be affected by the difficulty of the test, the spread in scores, and the length of the examination.
When scores are not tau-equivalent, the KR-20 is an indication of the lower bound of internal consistency (reliability).
The formula for KR-20, for a test with K test items numbered i = 1 to K, is

\rho_{\text{KR-20}} = \frac{K}{K-1}\left[1 - \frac{\sum_{i=1}^{K} p_i q_i}{\sigma_X^2}\right],

where p_i is the proportion of correct responses to test item i, q_i = 1 - p_i is the proportion of incorrect responses to test item i, and the variance in the denominator is

\sigma_X^2 = \frac{\sum_{j=1}^{n}\left(X_j - \bar{X}\right)^2}{n},

where n is the total sample size and X_j is the total score of examinee j.
If it is important to use unbiased estimators, then the sum of squares should be divided by the degrees of freedom (n − 1), and the item variances p_i q_i should be multiplied by n/(n − 1).
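To make the computation concrete, the following is a minimal Python sketch of KR-20 applied to a 0/1 response matrix. The function name kr20, the use of NumPy, and the example data are illustrative assumptions, not part of the original formulation; it uses the population variance of total scores (dividing by n), matching the formula above.

```python
import numpy as np

def kr20(responses):
    """KR-20 reliability for a binary (0/1) item-response matrix.

    responses: shape (n_examinees, K_items), 1 = correct, 0 = incorrect.
    """
    X = np.asarray(responses, dtype=float)
    n, K = X.shape
    p = X.mean(axis=0)          # proportion correct per item (p_i)
    q = 1.0 - p                 # proportion incorrect per item (q_i)
    totals = X.sum(axis=1)      # each examinee's total score
    var_total = totals.var()    # population variance of total scores (divides by n)
    return (K / (K - 1)) * (1.0 - (p * q).sum() / var_total)

# Hypothetical example: 5 examinees, 4 items
data = [[1, 0, 1, 1],
        [1, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
print(round(kr20(data), 3))     # approximately 0.79 for this toy data
```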

Kuder–Richardson Formula 21

Often discussed in tandem with KR-20 is the Kuder–Richardson Formula 21 (KR-21). KR-21 is a simplified version of KR-20 that can be used when the difficulty of all items on the test is assumed to be equal. Like KR-20, KR-21 takes its name from being the twenty-first formula discussed in Kuder and Richardson's 1937 paper.
The formula for KR-21 is

\rho_{\text{KR-21}} = \frac{K}{K-1}\left[1 - \frac{K\,\bar{p}\,\bar{q}}{\sigma_X^2}\right],

where, as in KR-20, K is the number of items and \sigma_X^2 is the variance of total scores. The difficulty level is assumed to be the same for every item; in practice, KR-21 is applied by using the average item difficulty \bar{p} across the entire test, with \bar{q} = 1 - \bar{p}. KR-21 tends to be a more conservative estimate of reliability than KR-20, which in turn is a more conservative estimate than Cronbach's α.
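A matching Python sketch for KR-21 is given below; again the function name kr21, NumPy, and the data layout are assumptions made for illustration. Because it replaces each item's p_i q_i with the average difficulty \bar{p}\bar{q}, it typically returns a value no larger than KR-20 on the same data.

```python
import numpy as np

def kr21(responses):
    """KR-21 reliability, assuming all items share the same difficulty."""
    X = np.asarray(responses, dtype=float)
    n, K = X.shape
    p_bar = X.mean()                   # average item difficulty across the whole test
    q_bar = 1.0 - p_bar
    var_total = X.sum(axis=1).var()    # population variance of total scores
    return (K / (K - 1)) * (1.0 - (K * p_bar * q_bar) / var_total)
```

On the toy data from the earlier sketch, kr21 yields roughly 0.72, somewhat lower than the KR-20 value of about 0.79, illustrating its more conservative behaviour.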