Find Local Minimum of a Nonlinear Function

DESCRIPTION:
Finds a local minimum of a nonlinear function using a general quasi-Newton optimizer.

USAGE:
nlmin(f, x, d=rep(1,length(x)), print.level=0, max.fcal=30,
      max.iter=15, init.step=1, rfc.tol=<<see below>>, ckfc=0.1,
      xc.tol=<<see below>>, xf.tol=<<see below>>)

REQUIRED ARGUMENTS:
f:
an S-PLUS function that takes as its only argument a real vector of length p. The function must return a value of storage mode "double".
x:
a vector of length p used as the starting point for the optimizer.

OPTIONAL ARGUMENTS:
d:
a vector of length p used as a scaling vector for x. The elements of the vector d*x should be in comparable units. Normally d is only specified if the elements of x have differing orders of magnitude.
print.level=:
if print.level=0, then all output from the optimizer is suppressed. If print.level=1, then a short summary of each iteration is printed. If print.level=2, then a long summary of each iteration is printed.
max.fcal=:
the maximum number of function evaluations permitted.
max.iter=:
the maximum number of iterations permitted.
init.step=:
a bound on the L2-norm of the initial scaled step. This can have a dramatic effect on the performance of the optimizer.
rfc.tol=:
the relative function convergence tolerance. If the predicted function reduction is no more than rfc.tol times |f|, where |f| is the function value, then the optimizer has converged. The default is max(10^-10, e^(2/3)), where e is machine epsilon.
ckfc=:
if the predicted function decrease is smaller than ckfc times the actual function decrease, then a check is performed for false convergence.
xc.tol=:
the relative x-convergence tolerance, where x is the argument of the function being minimized. If a Newton step has a scaled step length smaller than xc.tol, then the optimizer converges. The default is sqrt(e) where e is machine epsilon.
xf.tol=:
the false convergence tolerance. If relative function convergence and relative x-convergence have not been achieved, and a Newton step has a scaled step length smaller than xf.tol, then the optimizer appears to be stopped at a non-critical point (false convergence). The default is 100 times machine epsilon.
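As a sketch of the scaling described under d above (the objective function and starting values here are hypothetical): when the components of x differ by orders of magnitude, d can be chosen so that the elements of d*x are of comparable size.

```
# hypothetical objective whose two arguments differ greatly in scale
f <- function(x) (x[1] - 0.001)^2 + (x[2] - 1000)^2
# choose d so that d*x has elements of comparable magnitude
nlmin(f, c(0.002, 500), d = c(1000, 0.001))
```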

VALUE:
a list with the following components:
x:
the value of the argument at which the optimizer converged.
converged:
if the optimizer has apparently successfully converged to a minimum, then converged is TRUE; otherwise converged is FALSE.
conv.type:
a character string describing the type of convergence.
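A sketch of inspecting the returned components (the component names are as documented above; the exact conv.type string shown in the comment is an assumption):

```
res <- nlmin(function(x) (x - 3)^2, 0)
res$x          # point of convergence, near 3
res$converged  # TRUE if convergence was apparently successful
res$conv.type  # descriptive string, e.g. "relative function convergence"
```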

DETAILS:
The optimizer is based on a quasi-Newton method using the double dogleg step with the BFGS secant update to the Hessian. See Dennis, Gay, and Welsch (1981) and Dennis and Mei (1979) for details.
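For reference, the BFGS secant update mentioned above has the standard form. With s = x(k+1) - x(k) the step and y = g(k+1) - g(k) the change in gradient, the Hessian approximation B is updated as:

    B(k+1) = B(k) - (B(k) s s^T B(k)) / (s^T B(k) s) + (y y^T) / (y^T s)

This update preserves symmetry and, when y^T s > 0, positive definiteness of the approximation.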

BUGS:
If the storage mode of the value returned by f is "single" or "integer", nlmin will not work properly; in this case converged will be FALSE.
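A sketch of a workaround for the storage-mode restriction above, using the standard as.double coercion to guarantee a "double" return value:

```
# coerce the return value so its storage mode is "double"
f <- function(x) as.double(sum((x - 1)^2))
nlmin(f, c(0, 0))
```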

REFERENCES:
Dennis, J. E., Gay, D. M., and Welsch, R. E. (1981). An adaptive nonlinear least-squares algorithm. ACM Transactions on Mathematical Software 7, 348-383.

Dennis, J. E. and Mei, H. H. W. (1979). Two new unconstrained optimization algorithms which use function and gradient values. Journal of Optimization Theory and Applications 28, 453-483.


SEE ALSO:
nlminb, arima.mle.

EXAMPLES:
# minimize a simple function
func <- function(x) {x^2 - 2*x + 4}
min.func <- nlmin(func,0)

# one way to pass parameters to the function is:
function() {
        co <- c(1, 2)
        assign("co", co, frame = 1)
        f1 <- function(x) {
                co[1] + co[2] * x^2
        }
        nlmin(f1, 10)
}