Total least squares


In applied statistics, total least squares is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generalization of Deming regression and also of orthogonal regression, and can be applied to both linear and non-linear models.
The total least squares approximation of the data is generically equivalent to the best low-rank approximation of the data matrix in the Frobenius norm.

Linear model

Background

In the least squares method of data modeling, the objective function, S,

    S = r^T W r,

is minimized, where r is the vector of residuals and W is a weighting matrix. In linear least squares the model contains equations which are linear in the parameters appearing in the parameter vector β, so the residuals are given by

    r = y − Xβ.

There are m observations in y and n parameters in β with m > n. X is an m×n matrix whose elements are either constants or functions of the independent variables, x. The weight matrix W is, ideally, the inverse of the variance-covariance matrix M_y of the observations y. The independent variables are assumed to be error-free. The parameter estimates are found by setting the gradient equations to zero, which results in the normal equations

    X^T W X β = X^T W y.
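As an illustration, a minimal GNU Octave sketch of this ordinary weighted least squares step is given below; the data, the unit weights and the variable names are assumptions made purely for the example.

% Minimal sketch (illustrative data): weighted least squares via the normal equations.
% X is m-by-n, y is m-by-1, W is m-by-m (ideally the inverse of the covariance of y).
m = 20; n = 3;
X = randn(m, n);                          % design matrix
y = X * [1; -2; 0.5] + 0.1 * randn(m, 1); % noisy observations
W = eye(m);                               % unit weights for simplicity
beta_hat = (X' * W * X) \ (X' * W * y);   % solves X'*W*X*beta = X'*W*y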

Allowing observation errors in all variables

Now, suppose that both x and y are observed subject to error, with variance-covariance matrices M_x and M_y respectively. In this case the objective function can be written as

    S = r_x^T M_x^{-1} r_x + r_y^T M_y^{-1} r_y,

where r_x and r_y are the residuals in x and y respectively. Clearly these residuals cannot be independent of each other; they must be constrained by some kind of relationship. Writing the model function as f(x, y, β), the constraints are expressed by m condition equations.
Thus, the problem is to minimize the objective function subject to the m constraints. It is solved by the use of Lagrange multipliers. After some algebraic manipulations, the result is obtained:

    X^T M^{-1} X Δβ = X^T M^{-1} Δy,

where M is the variance-covariance matrix relative to both independent and dependent variables.

Example

When the data errors are uncorrelated, all matrices M and W are diagonal. Then, take the example of straight line fitting,

    f(x_i, β) = α + β x_i;

in this case

    M_ii = σ_{y,i}² + β² σ_{x,i}²,

showing how the variance at the ith point is determined by the variances of both independent and dependent variables and by the model being used to fit the data. The expression may be generalized by noting that the parameter β is the slope of the line,

    M_ii = σ_{y,i}² + (dy/dx)_i² σ_{x,i}².

An expression of this type is used in fitting pH titration data, where a small error on x translates to a large error on y when the slope is large.
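A minimal GNU Octave sketch of this weighting scheme for the straight-line case is shown below; the data, the per-point variances and the fixed iteration count are assumptions chosen only to illustrate the idea, not part of the standard treatment.

% Minimal sketch (illustrative): straight-line fit y ~ a + b*x with known
% per-point variances sx2 and sy2, iterating weighted least squares with
% weights 1 ./ M_ii = 1 ./ (sy2 + b^2*sx2) as described above.
x = (1:10)'; y = 2 + 0.5 * x + 0.05 * randn(10, 1);   % illustrative data
sx2 = 0.01 * ones(10, 1); sy2 = 0.0025 * ones(10, 1); % assumed variances
b = 0;                                                % initial slope guess
A = [ones(size(x)), x];
for iter = 1:20
  w = 1 ./ (sy2 + b^2 * sx2);              % weights from the effective variance
  p = (A' * (w .* A)) \ (A' * (w .* y));   % weighted normal equations
  b = p(2);                                % updated slope estimate
end
a = p(1);                                  % intercept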

Algebraic point of view

As was shown in 1980 by Golub and Van Loan, the TLS problem does not have a solution in general. The following considers the simple case where a unique solution exists without making any particular assumptions.
The computation of the TLS using singular value decomposition (SVD) is described in standard texts. We can solve the equation

    X B ≈ Y

for B, where X is m-by-n and Y is m-by-k.
We seek to find the B that minimizes the error matrices E and F for X and Y respectively; that is,

    argmin over B, E, F of ||[E F]||_F  subject to  (X + E) B = Y + F,

where [E F] is the augmented matrix with E and F side by side and ||·||_F is the Frobenius norm, the square root of the sum of the squares of all entries in a matrix, and so equivalently the square root of the sum of squares of the lengths of the rows or columns of the matrix.
This can be rewritten as

    [(X + E)  (Y + F)] [B ; −I_k] = 0,

where I_k is the k×k identity matrix.
The goal is then to find [E F] that reduces the rank of [X Y] by k. Define the singular value decomposition of the augmented matrix [X Y],

    [X Y] = [U_X U_Y] [Σ_X 0 ; 0 Σ_Y] [V_XX V_XY ; V_YX V_YY]*,

where V is partitioned into blocks corresponding to the shape of X and Y.
Using the Eckart–Young theorem, the approximation minimising the norm of the error is such that the matrices U and Σ are unchanged, while the k smallest singular values are replaced with zeros. That is, we want

    [(X + E)  (Y + F)] = [U_X U_Y] [Σ_X 0 ; 0 0_{k×k}] [V_XX V_XY ; V_YX V_YY]*,

so by linearity,

    [E F] = −[U_X U_Y] [0_{n×n} 0 ; 0 Σ_Y] [V_XX V_XY ; V_YX V_YY]*.

We can then remove blocks from the U and Σ matrices, simplifying to

    [E F] = −U_Y Σ_Y [V_XY ; V_YY]*.

This provides E and F so that

    [(X + E)  (Y + F)] [V_XY ; V_YY] = 0.

Now, if V_YY is nonsingular, which is not always the case, we can right multiply both sides by −V_YY^{-1} to bring the bottom block of the right matrix to the negative identity, giving

    [(X + E)  (Y + F)] [−V_XY V_YY^{-1} ; −I_k] = 0,

and so

    B = −V_XY V_YY^{-1}.
A naive GNU Octave implementation of this is:

function B = tls(X, Y)

  [m, n]    = size(X);             % n is the width of X (X is m-by-n)
  Z         = [X Y];               % Z is X augmented with Y.
  [U, S, V] = svd(Z, 0);           % find the SVD of Z.
  VXY = V(1:n, 1+n:end);           % take the block of V consisting of the first n rows and the n+1 to last columns
  VYY = V(1+n:end, 1+n:end);       % take the bottom-right block of V.
  B   = -VXY / VYY;                % the TLS solution B = -V_XY * inv(V_YY)

end
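As a quick usage check, with illustrative data assumed here rather than taken from the article, the function can be exercised as follows:

X = randn(100, 3);                        % illustrative design matrix
B_true = [1; -2; 0.5];
Y = X * B_true + 0.01 * randn(100, 1);    % nearly consistent right-hand side
B = tls(X, Y);                            % should be close to B_true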

The way described above of solving the problem, which requires that the matrix V_YY is nonsingular, can be slightly extended by the so-called classical TLS algorithm.

Computation

The standard implementation of the classical TLS algorithm is available through Netlib. All modern implementations, based for example on solving a sequence of ordinary least squares problems, approximate the matrix introduced by Van Huffel and Vandewalle; it is worth noting that this approximation is, however, not the TLS solution in many cases.

Non-linear model

For non-linear systems, similar reasoning shows that the normal equations for an iteration cycle can be written as

    J^T M^{-1} J Δβ = J^T M^{-1} Δy,

where J is the Jacobian matrix.
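A minimal GNU Octave sketch of such an iteration is given below; the exponential model, the data, the per-point variances and the fixed iteration count are all assumptions chosen only to illustrate the idea.

% Minimal sketch (illustrative): Gauss–Newton-style iteration of the normal
% equations above for the model y ~ b1*exp(b2*x) with uncorrelated errors
% in both x and y; M is diagonal as in the straight-line example.
x = (0:0.5:4)'; y = 2 * exp(0.5 * x) .* (1 + 0.02 * randn(size(x)));
sx2 = 0.01^2 * ones(size(x)); sy2 = 0.05^2 * ones(size(x)); % assumed variances
b = [1; 1];                                                 % initial guess
for iter = 1:20
  f    = b(1) * exp(b(2) * x);
  J    = [exp(b(2) * x), b(1) * x .* exp(b(2) * x)];  % Jacobian of f w.r.t. b
  dfdx = b(1) * b(2) * exp(b(2) * x);                 % slope of the model in x
  Minv = diag(1 ./ (sy2 + dfdx.^2 .* sx2));           % inverse of the diagonal M
  db   = (J' * Minv * J) \ (J' * Minv * (y - f));     % solve the normal equations
  b    = b + db;
end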

Geometrical interpretation

When the independent variable is error-free a residual represents the "vertical" distance between the observed data point and the fitted curve. In total least squares a residual represents the distance between a data point and the fitted curve measured along some direction. In fact, if both variables are measured in the same units and the errors on both variables are the same, then the residual represents the shortest distance between the data point and the fitted curve, that is, the residual vector is perpendicular to the tangent of the curve. For this reason, this type of regression is sometimes called two dimensional Euclidean regression or orthogonal regression.
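As an illustration of this orthogonal-distance view, the following GNU Octave sketch fits a line by minimising perpendicular distances; the data and variable names are assumed here for the example only.

% Minimal sketch (illustrative): orthogonal regression of a line through
% 2-D points, minimising the perpendicular distances to the line.
x = (1:10)'; y = 3 + 0.7 * x + 0.1 * randn(10, 1);  % illustrative data
xm = mean(x); ym = mean(y);
[U, S, V] = svd([x - xm, y - ym], 0);  % SVD of the centred data matrix
d = V(:, 2);                           % direction of least variance = line normal
slope = -d(1) / d(2);                  % line: d(1)*(x-xm) + d(2)*(y-ym) = 0
intercept = ym - slope * xm;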

Scale invariant methods

A serious difficulty arises if the variables are not measured in the same units. First consider measuring the distance between a data point and the line: what are the measurement units for this distance? If we measure distance based on Pythagoras' theorem, then we are clearly adding quantities measured in different units, which is meaningless. Secondly, if we rescale one of the variables, e.g., measure in grams rather than kilograms, then we shall end up with different results. To avoid these problems it is sometimes suggested that we convert to dimensionless variables; this may be called normalization or standardization. However, there are various ways of doing this, and these lead to fitted models which are not equivalent to each other. One approach is to normalize by known measurement precision, thereby minimizing the Mahalanobis distance from the points to the line and providing a maximum-likelihood solution; the unknown precisions could be found via analysis of variance.
In short, total least squares does not have the property of units-invariance—i.e. it is not scale invariant. For a meaningful model we require this property to hold. A way forward is to realise that residuals measured in different units can be combined if multiplication is used instead of addition. Consider fitting a line: for each data point the product of the vertical and horizontal residuals equals twice the area of the triangle formed by the residual lines and the fitted line. We choose the line which minimizes the sum of these areas. Nobel laureate Paul Samuelson proved in 1942 that, in two dimensions, it is the only line expressible solely in terms of the ratios of standard deviations and the correlation coefficient which fits the correct equation when the observations fall on a straight line, exhibits scale invariance, and exhibits invariance under interchange of variables. This solution has been rediscovered in different disciplines and is variously known as standardised major axis, the reduced major axis, the geometric mean functional relationship, least products regression, diagonal regression, line of organic correlation, and the least areas line. Tofallis has extended this approach to deal with multiple variables.
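A minimal GNU Octave sketch of this geometric mean (reduced major axis) line is shown below; the data are assumed purely for illustration.

% Minimal sketch (illustrative): reduced major axis / geometric mean regression.
% The slope is sign(r) * std(y)/std(x), so it depends only on the ratio of
% standard deviations and the sign of the correlation coefficient.
x = (1:10)'; y = 5 + 1.3 * x + 0.2 * randn(10, 1);  % illustrative data
R = corrcoef(x, y);                                  % 2x2 correlation matrix
slope = sign(R(1, 2)) * std(y) / std(x);             % scale-invariant slope
intercept = mean(y) - slope * mean(x);               % line through the centroid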

Others