Linear Least-Squares Fit

DESCRIPTION:
Fits a (weighted) least squares multivariate regression. A list containing the estimated coefficients and residuals, as well as the QR decomposition of the matrix of explanatory variables, is returned.

USAGE:
lsfit(x, y, wt=<<see below>>, intercept=T, tolerance=1.e-07,
      yname=NULL)

REQUIRED ARGUMENTS:
x:
vector or matrix of explanatory variables. If a matrix, each column represents a variable and each row represents an observation (or case). It should not contain a column of 1s unless the argument intercept is FALSE. The number of rows of x should equal the number of observations in y, and there should be fewer columns than rows. NAs and Infs are allowed but will be removed.
y:
response variable(s): a vector or a matrix with one column for each regression. NAs and Infs are allowed but will be removed.

OPTIONAL ARGUMENTS:
wt:
vector of weights with length equal to the number of observations. If the different observations have non-equal variances, wt should be inversely proportional to the variance. By default, an unweighted regression is carried out. NAs and Infs are allowed but will be removed.
intercept:
if TRUE, a constant (intercept) term is included in each regression.
tolerance:
numerical value used to test for singularity in the regression.
yname:
vector of names to be used for the y variates in the regression output. However, if y is a matrix with a dimnames attribute containing column names, those names will be used instead.

VALUE:
a list representing the result of the regression, with the following components:
coef:
vector or matrix of coefficients. This is a matrix only if y has more than one column; in that case coef contains one column for each regression, with the constant (intercept) terms, if any, in the first row. Its dimnames are taken from x, y and yname if applicable.
residuals:
object like y containing residuals.
wt:
if wt was given as an argument, it is also returned as part of the result.
intercept:
logical value: records whether an intercept was used in this regression.
qr:
object representing the numerical decomposition of the x matrix (plus a column of 1s, if an intercept was included). If wt was specified, the qr object will represent the decomposition of the weighted x matrix. See function qr for the details of this object. It is used primarily with functions like qr.qty, that compute auxiliary results for the regression from the decomposition.
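
For example, for an unweighted fit the stored decomposition can be used directly with the qr functions (a sketch only; see qr.coef and qr.qty for details):

    fit <- lsfit(x, y)
    qr.coef(fit$qr, y)      # reproduces the coefficients from the stored decomposition
    qr.qty(fit$qr, y)       # t(Q) %*% y, a building block for auxiliary computations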

DETAILS:
An observation is considered unusable if there is an NA or Inf in any response variable, any explanatory variable or in the weight (if present) for the observation. If your data have several missing values, there may be much better ways of analyzing your data than throwing out the observations like this; see, for instance, chapter 10 of Weisberg (1985).
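
A quick way to see how many observations such a rule would drop is to count the rows containing a non-finite value before fitting (a rough sketch; it assumes x and y can be bound into a single numeric matrix and ignores any weights):

    xy <- cbind(x, y)
    sum(apply(!is.finite(xy), 1, any))   # number of rows with an NA or Inf anywhere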

The lsfit function does least squares regression, that is, it finds a set of parameters such that the (weighted) sum of squared residuals is minimized. The (implicit) assumption of least squares is that the errors have a Gaussian distribution; if there are outliers, the results of the regression may be misleading.
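
In other words, lsfit minimizes the quantity sum(wt * (y - fit)^2) over the coefficients; the achieved value can be computed from the result (a sketch; w is a hypothetical weight vector, and for an unweighted fit the weights are all 1):

    fit <- lsfit(x, y, wt = w)
    sum(w * fit$residuals^2)   # the (weighted) residual sum of squares that was minimized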

The assumptions of regression are that the observations are statistically independent, that the response y is linear in the covariates represented by x, and that there is no error in x.

A time series model is one alternative if the observations are not independent. The linearity assumption is loosened in ace, avas and ppreg. A robust regression can help if there are gross errors in x (e.g., typographical errors), since these will likely make the corresponding responses appear to be gross outliers; such points are likely to have high leverage (see hat). If the x matrix is not known with certainty (an "errors-in-variables" model), the regression coefficients will typically be biased downward.

The classical use of a weighted regression is to handle the case when the variability of the response is not the same for all observations. Another approach to this same problem is to transform y and/or the variables in x so that there is constant variance and linearity holds. In practice it is often the case that a transformation which helps linearity also improves problems with the variance. If a choice must be made, linearity is the more important, since non-constant variance can still be handled by a weighted regression.
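
For example, if each response value is the average of a known number of raw measurements, its variance is inversely proportional to that count, so the counts can serve as weights (a sketch; ybar and n.obs are hypothetical names for the averaged responses and the counts):

    fit.w <- lsfit(x, ybar, wt = n.obs)   # var(ybar[i]) is proportional to 1/n.obs[i]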

It is good data analysis practice to view plots to check the suitability of a solution. Appropriate plots include the residuals versus the fit, the residuals versus the x variables, and a normal probability (qq) plot of the residuals.
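
A sketch of such checks for a single-response fit (the fitted values are recovered as y minus the residuals, and x is assumed to be a matrix):

    fit <- lsfit(x, y)
    res <- fit$residuals
    fits <- y - res               # fitted values
    plot(fits, res)               # residuals versus the fit
    plot(x[, 1], res)             # residuals versus one of the x variables
    qqnorm(res)                   # normal probability plot of the residuals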

Polynomial regression can be performed with lsfit by supplying an x matrix created with a command similar to cbind(x, x^2). It is better numerical practice to create orthogonal polynomials, especially as the order of the polynomial increases. When orthogonal polynomials are not used, the columns of the x matrix can be quite collinear (one column is close to being a linear combination of the other columns). Collinearity outside of the polynomial regression case can cloud interpretation of the results as well as being a numerical concern.
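
A sketch contrasting the two constructions for a quadratic fit in a single variable x (poly builds orthogonal polynomial columns):

    fit.raw  <- lsfit(cbind(x, x^2), y)   # raw powers: the columns may be nearly collinear
    fit.orth <- lsfit(poly(x, 2), y)      # orthogonal polynomial basis: better conditioned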


REFERENCES:
Belsley, D. A., Kuh, E. and Welsch, R. E. (1980). Regression Diagnostics. Wiley, New York.

Draper, N. R. and Smith, H. (1981). Applied Regression Analysis, second edition. Wiley, New York.

Myers, R. H. (1986). Classical and Modern Regression with Applications. Duxbury, Boston.

Rousseeuw, P. J. and Leroy, A. (1987). Robust Regression and Outlier Detection. Wiley, New York.

Seber, G. A. F. (1977). Linear Regression Analysis. Wiley, New York.

Weisberg, S. (1985). Applied Linear Regression, second edition. Wiley, New York.

There is a vast literature on regression; the references above are just a small sample of what is available. The book by Myers is an introductory text that includes a discussion of many of the recent advances in regression technology. The Seber book is at a higher mathematical level and covers much of the classical theory of least squares.


SEE ALSO:
lm , ls.print , ls.diag , leaps for selecting subsets of the explanatory variables, hat for leverage, qr , qr.coef , l1fit , rreg , lmsreg , glim , abline , ace , avas , ppreg , cancor .

EXAMPLES:
y.abc <- lsfit(cbind(a, b, c), y) # regress y on a, b, and c with intercept
ls.print(y.abc)