Heteroscedasticity


In statistics, a vector of random variables is heteroscedastic if the variability of the random disturbance is different across elements of the vector. Here, variability could be quantified by the variance or any other measure of statistical dispersion. Thus heteroscedasticity is the absence of homoscedasticity.
A typical example is the set of observations of income in different cities.
The existence of heteroscedasticity is a major concern in regression analysis and the analysis of variance, as it invalidates statistical tests of significance that assume that the modelling errors all have the same variance. While the ordinary least squares estimator is still unbiased in the presence of heteroscedasticity, it is inefficient and generalized least squares should be used instead.
Because heteroscedasticity concerns expectations of the second moment of the errors, its presence is referred to as misspecification of the second order.
The econometrician Robert Engle won the 2003 Nobel Memorial Prize in Economic Sciences for his studies on regression analysis in the presence of heteroscedasticity, which led to his formulation of the autoregressive conditional heteroscedasticity (ARCH) modeling technique.

Definition

Consider the regression equation $y_i = x_i \beta + \varepsilon_i,\; i = 1, \ldots, N$, where the dependent random variable $y_i$ equals the deterministic variable $x_i$ times the coefficient $\beta$ plus a random disturbance term $\varepsilon_i$ that has mean zero. The disturbances are homoskedastic if the variance of $\varepsilon_i$ is a constant $\sigma^2$; otherwise, they are heteroskedastic. In particular, the disturbances are heteroskedastic if the variance of $\varepsilon_i$ depends on $i$ or on the value of $x_i$. One way they might be heteroskedastic is if $\operatorname{Var}(\varepsilon_i) = x_i \sigma^2$, so the variance is proportional to the value of $x_i$.
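A minimal simulation makes this definition concrete. The sketch below (in Python, assuming NumPy is available; the variable names, coefficient value, and sample size are purely illustrative) draws disturbances whose variance is proportional to $x_i$, so the spread of $y_i$ around the regression line grows with $x_i$.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500
beta = 2.0
sigma2 = 1.0

x = rng.uniform(1.0, 10.0, size=n)          # deterministic regressor
eps = rng.normal(0.0, np.sqrt(sigma2 * x))  # Var(eps_i) = sigma^2 * x_i
y = x * beta + eps                          # heteroskedastic regression model

# The spread of the disturbances grows with x: compare dispersion in the
# lowest and highest thirds of x.
low = x < np.quantile(x, 1 / 3)
high = x > np.quantile(x, 2 / 3)
print("disturbance variance, small x:", eps[low].var())
print("disturbance variance, large x:", eps[high].var())
```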
More generally, if the variance-covariance matrix of the disturbance across $i$ has a nonconstant diagonal, the disturbance is heteroskedastic. The matrices below are covariances when there are just three observations across time:

$$
A = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad
B = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \qquad
C = \sigma^2 \begin{bmatrix} x_1 & 0 & 0 \\ 0 & x_2 & 0 \\ 0 & 0 & x_3 \end{bmatrix} \qquad
D = \sigma^2 \begin{bmatrix} 1 & \rho & \rho^2 \\ \rho & 1 & \rho \\ \rho^2 & \rho & 1 \end{bmatrix}
$$

The disturbance in matrix A is homoskedastic; this is the simple case where OLS is the best linear unbiased estimator. The disturbances in matrices B and C are heteroskedastic: in matrix B the variance is time-varying, increasing steadily across time, while in matrix C the variance depends on the value of $x$. The disturbance in matrix D is homoskedastic because the diagonal variances are constant, even though the off-diagonal covariances are non-zero; ordinary least squares is inefficient here for a different reason, namely serial correlation.

Consequences

One of the assumptions of the classical linear regression model is that there is no heteroscedasticity. Violating this assumption means that the Gauss–Markov theorem does not apply: the OLS estimators are no longer the best linear unbiased estimators (BLUE), because their variance is not necessarily the lowest among all linear unbiased estimators.
Heteroscedasticity does not cause ordinary least squares coefficient estimates to be biased, although it can cause ordinary least squares estimates of the variance (and, thus, the standard errors) of the coefficients to be biased, possibly above or below the true population values. Thus, regression analysis using heteroscedastic data will still provide an unbiased estimate of the relationship between the predictor variable and the outcome, but the standard errors, and therefore the inferences obtained from the analysis, are suspect. Biased standard errors lead to biased inference, so the results of hypothesis tests are possibly wrong. For example, if OLS is performed on a heteroscedastic data set, yielding a biased standard error estimate, a researcher might fail to reject a null hypothesis at a given significance level even though that null hypothesis is in fact false in the population.
Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered. This result is used to justify using a normal distribution, or a chi-squared distribution (depending on how the test statistic is calculated), when conducting a hypothesis test, and it holds even under heteroscedasticity. More precisely, the OLS estimator in the presence of heteroscedasticity is asymptotically normal, when properly normalized and centered, with a variance-covariance matrix that differs from the case of homoscedasticity. In 1980, White proposed a consistent estimator for the variance-covariance matrix of the asymptotic distribution of the OLS estimator. This validates the use of hypothesis testing based on OLS estimators and White's variance-covariance estimator under heteroscedasticity.
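A short sketch of this idea, using NumPy only: White's estimator replaces the conventional formula $\hat{\sigma}^2 (X'X)^{-1}$ with a "sandwich" built from the squared OLS residuals, $(X'X)^{-1} X' \operatorname{diag}(e_i^2) X (X'X)^{-1}$. The simulated data and variable names below are illustrative assumptions, not part of the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heteroskedastic data: Var(eps_i) grows with x_i.
n = 500
x = rng.uniform(1.0, 10.0, n)
y = 2.0 * x + rng.normal(0.0, np.sqrt(x))

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Conventional OLS covariance estimate (assumes homoskedasticity).
sigma2_hat = resid @ resid / (n - X.shape[1])
cov_homosked = sigma2_hat * XtX_inv

# White's heteroscedasticity-consistent (HC0) covariance estimate:
# (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
meat = (X * resid[:, None] ** 2).T @ X
cov_white = XtX_inv @ meat @ XtX_inv

print("conventional standard errors:", np.sqrt(np.diag(cov_homosked)))
print("White (HC0) standard errors: ", np.sqrt(np.diag(cov_white)))
```

On data like this, the conventional standard errors understate the sampling variability of the slope estimate, while the White standard errors remain consistent.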
Heteroscedasticity is also a major practical issue encountered in ANOVA problems.
The F test can still be used in some circumstances.
However, it has been said that students in econometrics should not overreact to heteroscedasticity. One author wrote, "unequal error variance is worth correcting only when the problem is severe." Another cautioned that "heteroscedasticity has never been a reason to throw out an otherwise good model." With the advent of heteroscedasticity-consistent standard errors, which allow inference without specifying the conditional second moment of the error term, testing for conditional homoscedasticity is not as important as it was in the past.
For any non-linear model (for instance, Logit and Probit models), however, heteroscedasticity has more severe consequences: the maximum likelihood estimates of the parameters will be biased as well as inconsistent. Yet, in the context of binary choice models, heteroscedasticity will only result in a positive scaling effect on the asymptotic mean of the misspecified MLE. As a result, predictions based on the misspecified MLE will remain correct. In addition, the misspecified Probit and Logit MLE will be asymptotically normally distributed, which allows the usual significance tests to be performed (with the appropriate variance-covariance matrix). However, with regard to general hypothesis testing, as pointed out by Greene, “simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption. Consequently, the virtue of a robust covariance matrix in this setting is unclear.”

Detection

There are several methods to test for the presence of heteroscedasticity. Although tests for heteroscedasticity between groups can formally be considered as a special case of testing within regression models, some tests have structures specific to this case.
These tests fall into two broad groups: tests in regression and tests for grouped data. Each test consists of a test statistic, a null hypothesis that is to be tested, an alternative hypothesis, and a statement about the distribution of the statistic under the null hypothesis.
Many introductory statistics and econometrics books, for pedagogical reasons, present these tests under the assumption that the data set in hand comes from a normal distribution. It is a common misconception that this assumption is necessary. Most methods of detecting heteroscedasticity can be modified for use even when the data do not come from a normal distribution. In many cases, the assumption can be relaxed, yielding a test procedure based on the same or similar test statistics but with the distribution under the null hypothesis evaluated by alternative routes: for example, by using asymptotic distributions obtained from asymptotic theory, or by using resampling.
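As a concrete illustration of a regression-based test, the sketch below implements the studentized (Koenker) variant of the Breusch–Pagan test, which does not rely on normality of the errors: the squared OLS residuals are regressed on the explanatory variables, and $n R^2$ from that auxiliary regression is compared to a chi-squared distribution. This is a minimal sketch assuming NumPy and SciPy; the function name and the simulated data are illustrative, not drawn from the text above.

```python
import numpy as np
from scipy import stats

def breusch_pagan(y, X):
    """Studentized Breusch-Pagan test: regress squared OLS residuals on X
    and compare the LM statistic n * R^2 to a chi-squared distribution."""
    n, k = X.shape
    Xc = np.column_stack([np.ones(n), X])          # add an intercept
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta

    u2 = resid ** 2                                # auxiliary dependent variable
    gamma, *_ = np.linalg.lstsq(Xc, u2, rcond=None)
    fitted = Xc @ gamma
    r2 = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)

    lm = n * r2                                    # test statistic
    p_value = stats.chi2.sf(lm, df=k)              # df = regressors excl. intercept
    return lm, p_value

# Example on simulated data whose disturbance variance grows with x:
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, 500)
y = 2.0 * x + rng.normal(0.0, np.sqrt(x))
print(breusch_pagan(y, x.reshape(-1, 1)))          # small p-value: reject homoskedasticity
```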

Fixes

Several corrections for heteroscedasticity are in common use. They include transforming the data (for example, taking the logarithm of the dependent variable), weighted or generalized least squares estimation, and heteroscedasticity-consistent (robust) standard errors.
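As a sketch of the weighted least squares correction, under the purely illustrative assumption that the disturbance variance is proportional to $x_i$, dividing each observation by $\sqrt{x_i}$ makes the transformed disturbances homoskedastic, so OLS on the transformed data is efficient. The data and names below are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data with Var(eps_i) = sigma^2 * x_i (assumed variance structure).
n = 500
x = rng.uniform(1.0, 10.0, n)
y = 2.0 * x + rng.normal(0.0, np.sqrt(x))

X = np.column_stack([np.ones(n), x])

# Ordinary least squares: unbiased but inefficient under this variance structure.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Weighted least squares: divide every row by sqrt(x_i),
# i.e. weight each observation by the inverse of its disturbance variance.
w = 1.0 / np.sqrt(x)
beta_wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)

print("OLS estimates:", beta_ols)
print("WLS estimates:", beta_wls)
```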
Heteroscedasticity often occurs when there is a large difference among the sizes of the observations.
The study of heteroscedasticity has been generalized to the multivariate case, which deals with the covariances of vector observations instead of the variance of scalar observations. One version of this is to use covariance matrices as the multivariate measure of dispersion. Several authors have considered tests in this context, for both regression and grouped-data situations. Bartlett's test for heteroscedasticity between grouped data, used most commonly in the univariate case, has also been extended to the multivariate case, but a tractable solution exists only for two groups. Approximations exist for more than two groups, and both of these approximations are called Box's M test.
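For the univariate grouped-data case mentioned above, standard implementations of Bartlett's test (and of Levene's test, a common alternative that is less sensitive to non-normality) are readily available. A minimal sketch, assuming SciPy is installed and using made-up group data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Three illustrative groups with unequal variances.
group_a = rng.normal(0.0, 1.0, 100)
group_b = rng.normal(0.0, 2.0, 100)
group_c = rng.normal(0.0, 3.0, 100)

# Bartlett's test (sensitive to departures from normality).
stat_b, p_b = stats.bartlett(group_a, group_b, group_c)

# Levene's test (more robust to non-normality).
stat_l, p_l = stats.levene(group_a, group_b, group_c)

print(f"Bartlett: statistic={stat_b:.2f}, p-value={p_b:.4f}")
print(f"Levene:   statistic={stat_l:.2f}, p-value={p_l:.4f}")
```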