Nonlinear Least Squares Subject to Box Constraints

DESCRIPTION:
Local minimizer for sums of squares of nonlinear functions subject to bound-constrained parameters.

USAGE:
nlregb(nres, start, residuals, jacobian = NULL, scale = NULL,
       control = NULL, lower = -Inf, upper = Inf, ...)

REQUIRED ARGUMENTS:
nres:
The number of functions whose sum of squares is to be minimized.
start:
p-vector of initial values for the parameters (NAs not allowed).
residuals:
a vector-valued S-PLUS function that returns the vector of values of the functions whose sum of squares is to be minimized. This function must be of the form r(x, <additional arguments>), where x is the vector of parameters over which the minimization takes place. Users can accumulate information through attributes of the value of residuals. If the attributes include any additional arguments of residuals or jacobian, the next call to residuals or jacobian will use the new values of those arguments. The order of the residual functions must be preserved throughout the computation.
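For example, a residuals function for a linear model might look like the following. This is an illustrative sketch only (not one of the standard examples); the data y and A are assumed to be supplied through the ... argument, and the extra argument count illustrates the attribute mechanism described above.

# residuals of the linear model A %*% x - y; the extra argument count is
# updated through an attribute of the returned value, so each subsequent
# call receives the new count (its final value should appear in the aux
# component of the result)
count.res <- function(x, y, A, count = 0) {
        r <- A %*% x - y
        attr(r, "count") <- count + 1
        r
}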

OPTIONAL ARGUMENTS:
jacobian:
an S-PLUS function that returns the n by p Jacobian matrix of the residual functions; that is, the matrix whose rows are the gradients of the individual residual functions. This function must be of the form j(x, <additional arguments>), where x is the vector of parameters over which the optimization takes place. As with residuals, users can accumulate information through attributes of the value of jacobian. It cannot be assumed that the value of x on a given call to jacobian is the same as the value of x used in the previous call to residuals. If jacobian is not supplied, the Jacobian matrix is estimated by finite differences. (A sketch of a user-supplied jacobian appears at the end of this section.)
scale:
either a single positive value or a numeric vector of positive values, of length equal to the number of parameters, used to scale the parameter vector. Unless specified by the user, scale is initialized automatically within nlregb. Although scale can have a great effect on the performance of the algorithm, it is not known how to choose it optimally. Automatic updating of the scale vector is the default; other options can be selected through control.
control:
a list of parameters by which the user can control various aspects of the minimization. For details, see the help file for nlregb.control.
lower, upper:
either a single numeric value or a vector of length equal to the number of parameters, giving lower or upper bounds for the parameter values. The absence of a bound may be indicated by NA or NULL, or by -Inf and Inf. The default is unconstrained minimization: lower = -Inf, upper = Inf. (The sketch at the end of this section shows bounds that mix finite and infinite values.)
...:
additional arguments for residuals and/or jacobian.
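As an illustrative sketch of the jacobian, lower, and upper arguments (assuming the objects n, p, x, x0, A and the function lin.res defined in the EXAMPLES section below):

# the Jacobian of the linear residuals A %*% x - y is the constant matrix A,
# returned as an n by p matrix
lin.jac <- function(x, y, A) A

# bounds may mix finite and infinite values: here the first parameter is
# constrained to be nonnegative and the rest are left unconstrained
nlregb(nres = n, start = x0, residuals = lin.res, jacobian = lin.jac,
       lower = c(0, rep(-Inf, p - 1)), upper = Inf, y = A %*% x, A = A)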

VALUE:
returns a list with the following values:
parameters:
final values of the parameters over which the optimization takes place.
objective:
the final value of the objective (sum of squares).
message:
a statement of the reason for termination.
grad.norm:
the final norm of the objective gradient. If there are active bounds, then components corresponding to active bounds are excluded from the norm calculation. If the number of active bounds is equal to the number of parameters, NA will be returned.
iterations:
the total number of iterations before termination.
r.evals:
the total number of residual evaluations before termination.
j.evals:
the total number of jacobian evaluations before termination.
scale:
the final value of the scale vector.
aux:
the final value of the function attributes.
residuals:
final value of the residuals.
jacobian:
final value of the jacobian (if supplied).
call:
a copy of the call to nlregb.
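For example, if fit is the result of one of the calls in the EXAMPLES section below (an illustrative sketch), the components listed above can be inspected directly:

fit <- nlregb(nres = n, start = x0, residuals = lin.res, y = A %*% x, A = A)
fit$parameters     # final parameter estimates
fit$objective      # final sum of squares
fit$message        # reason for termination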

NOTE:
nlregb is intended for functions that have at least two continuous derivatives throughout the feasible region, including its boundary.

For best results, the Jacobian matrix of the residuals should be supplied whenever possible.

For greater efficiency, function and derivative values should be computed in C or Fortran and wrapped in a thin outer S-PLUS function.
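For example (an illustrative sketch only; the compiled routine name "linres" and its argument list are hypothetical), the residual computation could be pushed into compiled code and wrapped as follows:

# wrapper whose body is a single .C call to a hypothetical C routine that
# fills the residual vector r with A %*% x - y
lin.res.c <- function(x, y, A) {
        n <- length(y)
        p <- length(x)
        .C("linres",
           as.double(A), as.integer(n), as.integer(p),
           as.double(x), as.double(y),
           r = double(n))$r
}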


METHOD:
nlregb is based on the Fortran functions dn2fb and dn2gb (Dennis et al. 1981; Gay 1984; AT&T Bell Laboratories 1984) from NETLIB (Dongarra and Grosse 1987).

REFERENCES:

AT&T Bell Laboratories (1984). PORT Mathematical Subroutine Library Manual.

Dongarra, J. J. and Grosse, E. (1987). Distribution of mathematical software via electronic mail. Communications of the ACM, 30, pp. 403-407.

Dennis, J. E., Gay, D. M., and Welsch, R. E. (1981). An Adaptive Nonlinear Least-Squares Algorithm. ACM Transactions on Mathematical Software, 7, pp. 348-368.

Dennis, J. E., Gay, D. M., and Welsch, R. E. (1981). Algorithm 573. NL2SOL - An Adaptive Nonlinear Least-Squares Algorithm. ACM Transactions on Mathematical Software, 7, pp. 369-383.

Gay, D. M. (1984). A trust region approach to linearly constrained optimization. In Numerical Analysis: Proceedings, Dundee 1983, F. A. Lootsma (ed.), Springer, Berlin, pp. 171-189.


SEE ALSO:
function, matrix, ms, nlregb.control, nlminb, nls, rep, rnorm, uniroot.

EXAMPLES:
# this example uses nlregb to solve a linear least-squares problem

n <- 20; p <- 5
A <- matrix(rnorm(n*p), nrow = n, ncol = p)
x <- rep(1, p)
x0 <- rnorm(length(x))

lin.res <- function(x, y, A) {A %*% x - y}
nlregb(nres = n, start = x0, res = lin.res, y = A %*% x, A = A)

# now try it with bounds with a solution partially outside the bounds

x <- c(0, -2, 0, 2, 0)
nlregb(nres = n, st = x0, res = lin.res, lo = -1, up = 1, y = A %*% x, A = A)

# now use the Jacobian matrix

lin.jac <- function(x, y, A) {A}
nlregb(nres = n, st = x0, res = lin.res, jac = lin.jac, lo = -1, up = 1, y = A %*% x, A = A)
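
# an additional illustrative sketch (not part of the original examples):
# a genuinely nonlinear fit of the exponential decay model b1 * exp(-b2 * t),
# with an analytic Jacobian and nonnegativity bounds on both parameters

tt <- seq(0, 5, length = 30)
b.true <- c(2, 0.7)
yy <- b.true[1] * exp(-b.true[2] * tt) + 0.05 * rnorm(length(tt))

exp.res <- function(b, t, y) {b[1] * exp(-b[2] * t) - y}
exp.jac <- function(b, t, y) {cbind(exp(-b[2] * t), -b[1] * t * exp(-b[2] * t))}

nlregb(nres = length(tt), start = c(1, 1), res = exp.res, jac = exp.jac,
       lower = c(0, 0), t = tt, y = yy)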