In statistics, the projection matrix, sometimes also called the influence matrix or hat matrix, maps the vector of response values to the vector of fitted values. It describes the influence each response value has on each fitted value. The diagonal elements of the projection matrix are the leverages, which describe the influence each response value has on the fitted value for that same observation.
Overview
If the vector of response values is denoted by $\mathbf{y}$ and the vector of fitted values by $\hat{\mathbf{y}}$, then

$$\hat{\mathbf{y}} = \mathbf{P}\mathbf{y}.$$

As $\hat{\mathbf{y}}$ is usually pronounced "y-hat", the projection matrix $\mathbf{P}$ is also named the hat matrix, as it "puts a hat on $\mathbf{y}$". The formula for the vector of residuals $\mathbf{r}$ can also be expressed compactly using the projection matrix:

$$\mathbf{r} = \mathbf{y} - \hat{\mathbf{y}} = \mathbf{y} - \mathbf{P}\mathbf{y} = (\mathbf{I} - \mathbf{P})\mathbf{y},$$

where $\mathbf{I}$ is the identity matrix. The matrix $\mathbf{M} \equiv \mathbf{I} - \mathbf{P}$ is sometimes referred to as the residual maker matrix. Moreover, the element in the $i$th row and $j$th column of $\mathbf{P}$ is equal to the covariance between the $j$th response value and the $i$th fitted value, divided by the variance of the former:

$$p_{ij} = \frac{\operatorname{Cov}[\hat{y}_i, y_j]}{\operatorname{Var}[y_j]}.$$

Therefore, the covariance matrix of the residuals, by error propagation, equals

$$\mathbf{\Sigma}_{\mathbf{r}} = (\mathbf{I} - \mathbf{P})^{\mathsf{T}} \boldsymbol{\Sigma} (\mathbf{I} - \mathbf{P}),$$

where $\boldsymbol{\Sigma}$ is the covariance matrix of the errors (and, by extension, of the response vector). For linear models with independent and identically distributed errors, in which $\boldsymbol{\Sigma} = \sigma^2 \mathbf{I}$, this reduces to

$$\mathbf{\Sigma}_{\mathbf{r}} = (\mathbf{I} - \mathbf{P})\,\sigma^2,$$

using the symmetry and idempotence of $\mathbf{I} - \mathbf{P}$.
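The covariance identity for the elements of $\mathbf{P}$ can be checked by simulation. The sketch below is illustrative only: the data are made up, and it uses the ordinary least-squares form of $\mathbf{P}$ derived in the next paragraph.

```python
import numpy as np

# Empirically check p_ij = Cov(y_hat_i, y_j) / Var(y_j) for an OLS projection matrix.
rng = np.random.default_rng(5)
n, n_sims = 8, 200_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # small made-up design matrix
P = X @ np.linalg.solve(X.T @ X, X.T)                    # P = X (X'X)^{-1} X'

Y = X @ np.array([1.0, 0.5]) + rng.normal(size=(n_sims, n))  # simulated response vectors (rows)
Y_hat = Y @ P.T                                              # fitted values for each simulated y
i, j = 3, 5
cov_ij = np.cov(Y_hat[:, i], Y[:, j])[0, 1]
print(cov_ij / Y[:, j].var(ddof=1), P[i, j])                 # the two numbers agree closely
```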
When the weights for each observation are identical and the errors are uncorrelated, the estimated parameters are

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y},$$

so the fitted values are

$$\hat{\mathbf{y}} = \mathbf{X}\hat{\boldsymbol{\beta}} = \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}.$$

Therefore, the projection matrix is given by

$$\mathbf{P} = \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}},$$

where $\mathbf{X}$ is the design matrix.
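As a concrete illustration, the sketch below (with arbitrary simulated data) forms $\mathbf{P}$ from a small design matrix and confirms that applying it to $\mathbf{y}$ reproduces the ordinary least-squares fitted values.

```python
import numpy as np

# Build P from a made-up design matrix and compare P @ y with an OLS fit.
rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix with intercept
y = rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)            # P = X (X'X)^{-1} X'
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS coefficients

assert np.allclose(P @ y, X @ beta_hat)               # P "puts a hat on y"
assert np.allclose(P @ P, P) and np.allclose(P, P.T)  # idempotent and symmetric
leverages = np.diag(P)                                 # diagonal elements are the leverages
```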
The above may be generalized to the cases where the weights are not identical and/or the errors are correlated. Suppose that the covariance matrix of the errors is $\boldsymbol{\Psi}$. Then, since the estimated parameters are

$$\hat{\boldsymbol{\beta}} = \left(\mathbf{X}^{\mathsf{T}}\boldsymbol{\Psi}^{-1}\mathbf{X}\right)^{-1}\mathbf{X}^{\mathsf{T}}\boldsymbol{\Psi}^{-1}\mathbf{y},$$

the hat matrix is thus

$$\mathbf{P} = \mathbf{X}\left(\mathbf{X}^{\mathsf{T}}\boldsymbol{\Psi}^{-1}\mathbf{X}\right)^{-1}\mathbf{X}^{\mathsf{T}}\boldsymbol{\Psi}^{-1},$$

and again it may be seen that $\mathbf{P}^2 = \mathbf{P}$, though now it is no longer symmetric.
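A quick numerical check of these two claims (idempotent but no longer symmetric) might look as follows; the error covariance $\boldsymbol{\Psi}$ below is an arbitrary positive-definite matrix chosen only for illustration.

```python
import numpy as np

# Generalized hat matrix with an arbitrary positive-definite error covariance Psi.
rng = np.random.default_rng(1)
n = 15
X = np.column_stack([np.ones(n), rng.normal(size=n)])
A = rng.normal(size=(n, n))
Psi = A @ A.T + n * np.eye(n)                   # positive definite by construction

Psi_inv = np.linalg.inv(Psi)
P = X @ np.linalg.solve(X.T @ Psi_inv @ X, X.T @ Psi_inv)  # X (X' Psi^-1 X)^-1 X' Psi^-1

print(np.allclose(P @ P, P))   # True: still idempotent
print(np.allclose(P, P.T))     # generally False: no longer symmetric
```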
The eigenvalues of $\mathbf{P}$ consist of $r$ ones and $n - r$ zeros, while the eigenvalues of $\mathbf{I} - \mathbf{P}$ consist of $n - r$ ones and $r$ zeros, where $r$ is the rank of $\mathbf{X}$ and $n$ is the number of observations (a numerical check of this and the following property is sketched after these properties).
$\mathbf{X}$ is invariant under $\mathbf{P}$: $\mathbf{P}\mathbf{X} = \mathbf{X}$, hence $(\mathbf{I} - \mathbf{P})\mathbf{X} = \mathbf{0}$.
$\mathbf{P}$ is unique for certain subspaces.
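A minimal numerical check of the eigenvalue and invariance properties above, using made-up data:

```python
import numpy as np

# Check: eigenvalues of P are r ones and n - r zeros, and P X = X.
rng = np.random.default_rng(2)
n, r = 10, 3
X = rng.normal(size=(n, r))                     # full column rank with probability 1
P = X @ np.linalg.solve(X.T @ X, X.T)

eigvals = np.sort(np.linalg.eigvalsh(P))        # P is symmetric here, so eigvalsh applies
assert np.allclose(eigvals[-r:], 1) and np.allclose(eigvals[:-r], 0)
assert np.allclose(P @ X, X)                    # X is invariant under P
assert np.allclose((np.eye(n) - P) @ X, 0)      # hence (I - P) X = 0
```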
The projection matrix corresponding to a linear model is symmetric and idempotent, that is, $\mathbf{P}^2 = \mathbf{P}$. However, this is not always the case; in locally weighted scatterplot smoothing (LOESS), for example, the hat matrix is in general neither symmetric nor idempotent. For linear models, the trace of the projection matrix is equal to the rank of $\mathbf{X}$, which is the number of independent parameters of the linear model. For other models such as LOESS that are still linear in the observations $\mathbf{y}$, the projection matrix can be used to define the effective degrees of freedom of the model. Practical applications of the projection matrix in regression analysis include leverage and Cook's distance, which are concerned with identifying influential observations, i.e. observations which have a large effect on the results of a regression.
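To make the connection to leverage and Cook's distance concrete, here is a minimal sketch with simulated data; it uses the common form of Cook's distance $D_i = \frac{e_i^2}{p\, s^2} \cdot \frac{h_{ii}}{(1 - h_{ii})^2}$, where $h_{ii}$ is the leverage, $e_i$ the residual, $p$ the number of parameters and $s^2$ the residual variance estimate.

```python
import numpy as np

# Leverages and Cook's distance from the hat matrix (illustrative data only).
rng = np.random.default_rng(3)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)
assert np.isclose(np.trace(P), p)        # trace of P equals the rank of X

h = np.diag(P)                           # leverages
resid = y - P @ y                        # residuals via (I - P) y
s2 = resid @ resid / (n - p)             # residual variance estimate
cooks_d = resid**2 / (p * s2) * h / (1 - h)**2
print(cooks_d.argmax())                  # index of the most influential observation
```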
Blockwise formula
Suppose the design matrix $\mathbf{X}$ can be decomposed by columns as $\mathbf{X} = [\mathbf{A} \; \mathbf{B}]$. Define the hat or projection operator as $\mathbf{P}\{\mathbf{X}\} = \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}$. Similarly, define the residual operator as $\mathbf{M}\{\mathbf{X}\} = \mathbf{I} - \mathbf{P}\{\mathbf{X}\}$. Then the projection matrix can be decomposed as follows:

$$\mathbf{P}\{\mathbf{X}\} = \mathbf{P}\{\mathbf{A}\} + \mathbf{P}\{\mathbf{M}\{\mathbf{A}\}\mathbf{B}\},$$

where, e.g., $\mathbf{P}\{\mathbf{A}\} = \mathbf{A}(\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{\mathsf{T}}$ and $\mathbf{M}\{\mathbf{A}\} = \mathbf{I} - \mathbf{P}\{\mathbf{A}\}$. There are a number of applications of such a decomposition. In the classical application, $\mathbf{A}$ is a column of all ones, which allows one to analyze the effects of adding an intercept term to a regression. Another use is in the fixed effects model, where $\mathbf{A}$ is a large sparse matrix of the dummy variables for the fixed effect terms. One can use this partition to compute the hat matrix of $\mathbf{X}$ without explicitly forming the matrix $\mathbf{X}$, which might be too large to fit into computer memory.
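The decomposition can be verified numerically. The sketch below uses the classical case in which $\mathbf{A}$ is a column of ones, so that $\mathbf{M}\{\mathbf{A}\}\mathbf{B}$ simply demeans the remaining columns; all data are made up for illustration.

```python
import numpy as np

def proj(Z):
    """Hat operator P{Z} = Z (Z'Z)^{-1} Z'."""
    return Z @ np.linalg.solve(Z.T @ Z, Z.T)

# Verify P{[A B]} = P{A} + P{M{A} B} on made-up data.
rng = np.random.default_rng(4)
n = 25
A = np.ones((n, 1))                      # classical case: intercept column
B = rng.normal(size=(n, 2))
X = np.hstack([A, B])

M_A = np.eye(n) - proj(A)                # residual maker for A (here: demeaning)
assert np.allclose(proj(X), proj(A) + proj(M_A @ B))
```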