a list of the same form as the output.
If it is an acceptable list, it is used as the output; otherwise, it is
ignored.
lower:
a positive number giving the lower bound for the square root of the uniquenesses.
It is probably a bad idea to make this much smaller than the default.
iter.max:
the maximum number of iterations to perform.
unique.tol:
a positive number giving the tolerance for the change in uniquenesses.
If no uniqueness changes by more than unique.tol from one iteration to
the next, then convergence is declared.
max.step:
the maximum number of times to cut the Newton-Raphson step in half in order
to search for a better solution.
switch.step:
the first iteration on which the algorithm will be forced to try using the
exact Hessian.
tol:
a positive number giving the tolerance for the LAPACK eigenvalue routines.
diag.add:
a positive number used in the rare circumstance that both the approximate
and exact Hessians are not positive definite.
This number is added to each element of the diagonal of the approximate
Hessian.
objective.tol:
a positive number giving the tolerance for the relative change in the objective.
If the relative change is less than this from one iteration to the next,
then convergence is declared.
VALUE:
a list with the following components:
lower:
the input or default value of lower.
real.control:
a vector of length 4 containing the real-valued control parameters.
These are, in order, tol, diag.add, objective.tol and unique.tol.
integer.control:
a vector of length 3 containing the integer-valued control parameters.
These are, in order, iter.max, max.step and switch.step (see the sketch below).
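The following is a minimal sketch, not the actual control function, of how
the arguments described above could be packed into the returned list; the
function name make.control and the default values shown are illustrative
assumptions, not the documented defaults.

    make.control <- function(lower = 0.005, iter.max = 100, unique.tol = 1e-4,
                             max.step = 10, switch.step = 5, tol = 1e-12,
                             diag.add = 1e-2, objective.tol = 1e-8)
    {
      list(lower           = lower,
           real.control    = c(tol, diag.add, objective.tol, unique.tol),
           integer.control = c(iter.max, max.step, switch.step))
    }

    ctl <- make.control(iter.max = 50)
    ctl$integer.control   # iter.max, max.step, switch.step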
DETAILS:
An approximate Hessian is used in the beginning because the exact Hessian
may not be positive definite.
The exact Hessian is more expensive to compute; however, it does much better
at finding a good search direction near the optimum.
The switch.step argument allows the user to adjust when the algorithm
will try using the exact Hessian.
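The iteration described above can be sketched roughly as follows; the function
newton.sketch and its arguments f, grad, approx.hess and exact.hess are
hypothetical names introduced only to illustrate how iter.max, max.step,
switch.step, diag.add and objective.tol interact (the unique.tol test on the
uniquenesses is omitted for brevity), not the actual fitting code.

    newton.sketch <- function(x, f, grad, approx.hess, exact.hess,
                              iter.max = 100, max.step = 10, switch.step = 5,
                              diag.add = 1e-2, objective.tol = 1e-8)
    {
      obj <- f(x)
      for(iter in 1:iter.max) {
        # use the approximate Hessian early on, the exact Hessian later
        H <- if(iter < switch.step) approx.hess(x) else exact.hess(x)
        # if the Hessian is not positive definite, add diag.add to its diagonal
        if(any(eigen(H, symmetric = TRUE, only.values = TRUE)$values <= 0))
          diag(H) <- diag(H) + diag.add
        step <- solve(H, -grad(x))
        # cut the Newton-Raphson step in half up to max.step times,
        # looking for a step that improves the objective
        for(k in 0:max.step) {
          new.x   <- x + step / 2^k
          new.obj <- f(new.x)
          if(new.obj < obj) break
        }
        # declare convergence on a small relative change in the objective
        if(abs(obj - new.obj) <= objective.tol * abs(obj)) {
          x <- new.x
          break
        }
        x   <- new.x
        obj <- new.obj
      }
      x
    }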