Partial least squares (PLS) regression is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space. Because both the X and Y data are projected to new spaces, the PLS family of methods are known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when the Y is categorical.

PLS is used to find the fundamental relations between two matrices (X and Y), i.e. a latent variable approach to modeling the covariance structures in these two spaces. A PLS model will try to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among X values. By contrast, standard regression will fail in these cases.

Partial least squares was introduced by the Swedish statistician Herman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS is projection to latent structures, but the term partial least squares is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology.
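As a concrete illustration of the regime described above, the following short Python sketch fits a PLS model with more predictors than observations and strongly collinear columns. The use of scikit-learn's PLSRegression and the simulated data are assumptions for illustration, not part of the original text.

```python
# Minimal sketch: PLS regression with more predictors than observations and
# collinear columns, a setting where ordinary least squares has no unique solution.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, m = 20, 50                                   # fewer observations than predictors
Z = rng.normal(size=(n, 5))                     # a few underlying latent signals
X = Z @ rng.normal(size=(5, m)) + 0.01 * rng.normal(size=(n, m))   # highly collinear predictors
y = Z @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) + 0.1 * rng.normal(size=n)

pls = PLSRegression(n_components=3).fit(X, y)   # fits despite m >> n
print(pls.score(X, y))                          # in-sample R^2 of the fitted model
```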
Underlying model
The general underlying model of multivariate PLS with $\ell$ components is

$$X = T P^{\mathrm{T}} + E$$
$$Y = U Q^{\mathrm{T}} + F$$

where $X$ is an $n \times m$ matrix of predictors, $Y$ is an $n \times p$ matrix of responses; $T$ and $U$ are $n \times \ell$ matrices that are, respectively, projections of $X$ (the X scores) and projections of $Y$ (the Y scores); $P$ and $Q$ are, respectively, $m \times \ell$ and $p \times \ell$ orthogonal loading matrices; and matrices $E$ and $F$ are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of $X$ and $Y$ are made so as to maximise the covariance between $T$ and $U$.
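To illustrate these shapes, the short Python sketch below fits a model and checks that the X block decomposes into scores, loadings, and a residual. scikit-learn's PLSRegression is assumed here as one NIPALS-style implementation; it is not part of the original text.

```python
# Sketch of the bilinear structure X = T P^T + E using a fitted PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, m, p = 100, 8, 2
X = rng.normal(size=(n, m))
Y = X[:, :2] @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))

pls = PLSRegression(n_components=4, scale=False).fit(X, Y)
T, P = pls.x_scores_, pls.x_loadings_          # T: n x l score matrix, P: m x l loading matrix
Xc = X - X.mean(axis=0)                        # PLS operates on column-centred data
E = Xc - T @ P.T                               # residual term E of X = T P^T + E
print(T.shape, P.shape, np.linalg.norm(E))     # the residual shrinks as components are added
```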
Algorithms
A number of variants of PLS exist for estimating the factor and loading matrices $T$, $U$, $P$ and $Q$. Most of them construct estimates of the linear regression between $X$ and $Y$ as $Y = X \tilde{B} + \tilde{B}_0$. Some PLS algorithms are only appropriate for the case where $Y$ is a column vector, while others deal with the general case of a matrix $Y$. Algorithms also differ on whether they estimate the factor matrix $T$ as an orthogonal (that is, orthonormal) matrix or not. The final prediction will be the same for all these varieties of PLS, but the components will differ.
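The small check below illustrates two of these points with one implementation: the same routine handles both a single response vector and a matrix of responses, and its X scores come out mutually orthogonal. scikit-learn is again an assumption for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 12))
y = X @ rng.normal(size=12) + 0.1 * rng.normal(size=60)   # vector response (PLS1-type case)
Y = np.column_stack([y, 0.5 * y + rng.normal(size=60)])   # general matrix of responses

pls = PLSRegression(n_components=3).fit(X, Y)
T = pls.x_scores_
print(np.round(T.T @ T, 6))   # off-diagonal entries ~0: the score columns are orthogonal
```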
PLS1
PLS1 is a widely used algorithm appropriate for the case where $y$ is a vector. It estimates $T$ as an orthonormal matrix. In the pseudocode below, capital letters denote matrices, while lower-case letters denote vectors when superscripted in parentheses and scalars when subscripted:

    function PLS1(X, y, ℓ)
        X^(0) ← X
        w^(0) ← X^T y / ||X^T y||, an initial estimate of w
        for k = 0 to ℓ − 1
            t^(k) ← X^(k) w^(k)
            t_k ← (t^(k))^T t^(k)          (a scalar)
            t^(k) ← t^(k) / t_k
            p^(k) ← (X^(k))^T t^(k)
            q_k ← y^T t^(k)                (a scalar)
            if q_k = 0
                ℓ ← k, break the for loop
            if k < (ℓ − 1)
                X^(k+1) ← X^(k) − t_k t^(k) (p^(k))^T
                w^(k+1) ← (X^(k+1))^T y
        end for
        define W to be the matrix with columns w^(0), w^(1), ..., w^(ℓ−1);
        form the matrix P and the vector q in the same way from the p^(k) and the q_k
        B ← W (P^T W)^(−1) q
        B_0 ← q_0 − (p^(0))^T B
        return B, B_0
    end function

This form of the algorithm does not require centering of the input $X$ and $y$, as this is performed implicitly by the algorithm. This algorithm features 'deflation' of the matrix $X$ (the subtraction of $t_k t^{(k)} (p^{(k)})^{\mathrm{T}}$), but deflation of the vector $y$ is not performed, as it is not necessary. The user-supplied variable $\ell$ is the limit on the number of latent factors in the regression; if it equals the rank of the matrix $X$, the algorithm will yield the least squares regression estimates for $B$ and $B_0$.
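As a concrete companion to the pseudocode, here is a short NumPy sketch of PLS1. The function name pls1 and the simulated data at the end are illustrative choices, not taken from the article; the body mirrors the steps above, including the deflation of X and the final computation of B and B_0.

```python
import numpy as np

def pls1(X, y, n_components):
    """NumPy sketch of the PLS1 pseudocode above (illustrative, not a reference implementation)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xk = X.copy()                              # X^(0)
    w = X.T @ y
    w = w / np.linalg.norm(w)                  # w^(0), an initial estimate of w
    W, P, q = [], [], []
    for k in range(n_components):
        t = Xk @ w                             # t^(k) = X^(k) w^(k)
        tk = t @ t                             # t_k, a scalar
        t = t / tk
        p = Xk.T @ t                           # p^(k)
        qk = y @ t                             # q_k, a scalar
        if qk == 0:                            # no covariance with y left to explain
            break
        W.append(w); P.append(p); q.append(qk)
        if k < n_components - 1:
            Xk = Xk - tk * np.outer(t, p)      # deflate X to obtain X^(k+1)
            w = Xk.T @ y                       # w^(k+1)
    W, P, q = np.column_stack(W), np.column_stack(P), np.asarray(q)
    B = W @ np.linalg.solve(P.T @ W, q)        # B = W (P^T W)^(-1) q
    B0 = q[0] - P[:, 0] @ B                    # B_0 = q_0 - (p^(0))^T B
    return B, B0

# Tiny usage example on simulated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.05 * rng.normal(size=30)
B, B0 = pls1(X, y, n_components=3)
print(B, B0)
```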
Extensions
In 2002 a new method was published called orthogonal projections to latent structures (OPLS). In OPLS, continuous variable data is separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models. Similarly, OPLS-DA may be applied when working with discrete variables, as in classification and biomarker studies. L-PLS extends PLS regression to three connected data blocks.

In 2015 partial least squares was related to a procedure called the three-pass regression filter (3PRF). Supposing the number of observations and variables are large, the 3PRF is asymptotically normal for the "best" forecast implied by a linear latent factor model. In stock market data, PLS has been shown to provide accurate out-of-sample forecasts of returns and cash-flow growth.

A PLS version based on singular value decomposition (SVD) provides a memory-efficient implementation that can be used to address high-dimensional problems, such as relating millions of genetic markers to thousands of imaging features in imaging genetics, on consumer-grade hardware.

PLS correlation (PLSC) is another methodology related to PLS regression, which has been used in neuroimaging and more recently in sport science to quantify the strength of the relationship between data sets. Typically, PLSC divides the data into two blocks, each containing one or more variables, and then uses singular value decomposition to establish the strength of any relationship that might exist between the two component sub-groups. It does this by using SVD to determine the inertia of the covariance matrix of the sub-groups under consideration.
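To make the PLSC recipe described above concrete, here is a small NumPy sketch. The simulated blocks and the centring and inertia conventions are assumptions for illustration, not taken from the text: the two blocks are centred, their cross-covariance matrix is formed, and its SVD yields the latent pairs and singular values from which the inertia is summarised.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 6))                                         # block 1 (e.g. brain measures)
Y = X[:, :2] @ rng.normal(size=(2, 4)) + rng.normal(size=(n, 4))    # block 2 (e.g. behavioural measures)

Xc = X - X.mean(axis=0)                            # centre each block
Yc = Y - Y.mean(axis=0)
R = Xc.T @ Yc / (n - 1)                            # cross-covariance matrix between the blocks
U, s, Vt = np.linalg.svd(R, full_matrices=False)   # saliences (U, Vt) and singular values (s)

# One common summary: the share of the total inertia carried by each latent pair
# (here taken as the sum of the singular values; conventions differ between accounts).
inertia = s.sum()
print(s / inertia)
```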