nlminb(start, objective, gradient = NULL, hessian = NULL, scale = 1, control = NULL, lower = -Inf, upper = Inf)
For best results, supply the gradient of the objective whenever possible. Supplying the Hessian as well will sometimes, but not always, substantially reduce the number of iterations needed to reach a minimum. If the Hessian is supplied, it should be computed within the gradient function for greater efficiency, with the hessian argument set to T (see the final example below).
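In outline, such a combined function returns a list with gradient and hessian components, the hessian being the lower triangle of the Hessian matrix packed row by row into a vector of length n*(n+1)/2. A minimal schematic, in which grad and hess are hypothetical placeholders for the user's actual derivative computations:

# schematic only: grad() and hess() stand for code computing the
# gradient vector and the packed lower triangle of the Hessian
myobj.gh <- function(x, ...)
    list(gradient = grad(x, ...), hessian = hess(x, ...))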
For greater efficiency, function and derivative values should be computed in C or Fortran, with the outer S-Plus functions acting as thin wrappers around the compiled code.
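As a sketch of that approach, assuming a hypothetical C routine sumsq_val has been compiled and dynamically loaded (e.g. with dyn.load), the S-Plus objective reduces to a .C call:

# sketch only: "sumsq_val" is a hypothetical compiled C routine,
#   void sumsq_val(double *x, double *y, long *n, double *value)
sumsq.c <- function(x, y)
    .C("sumsq_val",
       as.double(x),
       as.double(y),
       as.integer(length(y)),
       value = double(1))$value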
Dongarra, J. J. and Grosse, E. (1987). Distribution of mathematical software via electronic mail. Communications of the ACM, 30, pp. 403-407.
Gay, D. M. (1983). Algorithm 611. Subroutines for unconstrained minimization using a model/trust-region approach. ACM Transactions on Mathematical Software, 9, pp. 503-524.
Gay, D. M. (1984). A trust region approach to linearly constrained optimization. In Numerical Analysis. Proceedings, Dundee 1983, F. A. Lootsma (ed.), Springer, Berlin, pp. 171-189.
# this example minimizes a sum of squares with known solution y
sumsq <- function(x, y) sum((x - y)^2)
y <- rep(1, 5)
x0 <- rnorm(length(y))
nlminb(start = x0, objective = sumsq, y = y)
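# the unconstrained minimum is at x = y, so the estimated
# parameters should all be near 1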
# now use bounds with a y that has some components outside the bounds
y <- c(0, 2, 0, -2, 0)
nlminb(start = x0, objective = sumsq, lower = -1, upper = 1, y = y)
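# for this separable sum of squares the constrained solution is y
# clipped to the bounds, i.e. c(0, 1, 0, -1, 0)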
# try using the gradient
sumsq.g <- function(x, y) 2 * (x - y)
nlminb(start = x0, objective = sumsq, gradient = sumsq.g, lower = -1, upper = 1, y = y)
# now use the hessian, too
sumsq.gh <- function(x, y)
{
    n <- length(y)
    i <- 1:n
    ii <- (i * (i - 1))/2 + i   # diagonal positions in packed form
    l <- (n * (n + 1))/2        # length of packed lower triangle
    list(gradient = 2 * (x - y),
         hessian = replace(rep(0, l), ii, 2))   # Hessian is 2*I
}
nlminb(start = x0, objective = sumsq, gradient = sumsq.gh, hessian = T, lower = -1, upper = 1, y = y)
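# hessian = T tells nlminb that sumsq.gh supplies the Hessian
# (in packed form) along with the gradient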