tree(formula, data=<<see below>>, weights=<<see below>>, subset=<<see below>>,
     na.action=na.fail, method="recursive.partition", control=<<see below>>,
     model=NULL, x=F, y=T, ...)
If the response variable is a factor, the fitted tree is called a classification tree. The classification model assumes that the response variable follows a multinomial distribution. weights are not used when computing the deviance of a classification tree.
If the response variable is numeric, the tree is called a regression tree. The model used for regression assumes that the numeric response variable has a normal (Gaussian) distribution. weights are used if they are specified. See Statistical Models in S for a more detailed discussion of the difference between regression and classification trees.
This function allows up to 128 levels for a factor response variable. Factor predictor variables are limited to 32 levels: a factor predictor with k levels generates 2^(k-1) - 1 candidate binary splits, all of which must be examined, and this imposes severe demands on the system.
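To make the limit concrete, the growth of the candidate-split count with the level count k can be checked with plain R arithmetic (the 32-level cutoff itself is imposed by the function, not by this snippet; n.splits is an illustrative helper, not part of the tree function):

```r
# number of candidate binary splits for a factor predictor with k levels
n.splits <- function(k) 2^(k - 1) - 1

n.splits(2)   # a two-level factor yields 1 split
n.splits(5)   # 15 candidate splits
n.splits(32)  # 2147483647 splits -- far beyond what can be examined
```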
The fitted model can be examined by print, summary, and plot. Quantities can be extracted from it using predict, residuals, deviance, and formula, and it can be modified using update. Other generic functions that have methods for tree objects are text, identify, browser, and [.
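A sketch of how these generics are typically used together, assuming the tree package (available on CRAN in R) is installed; the data frame d and the fitted objects here are illustrative, not from this help page:

```r
library(tree)  # assumes the tree package is installed

# small synthetic classification problem: factor response, two predictors
set.seed(1)
d <- data.frame(
  y  = factor(rep(c("a", "b"), each = 50)),
  x1 = c(rnorm(50, 0), rnorm(50, 3)),
  x2 = runif(100)
)
fit <- tree(y ~ x1 + x2, data = d)

summary(fit)        # tree size, residual deviance, misclassification rate
deviance(fit)       # total deviance of the fit
head(predict(fit))  # fitted class probabilities, one row per observation
formula(fit)        # y ~ x1 + x2

fit2 <- update(fit, y ~ x1)  # refit with a modified formula
```

Interactive generics such as identify and browser operate on a plotted tree and so are not shown here.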
# fit regression tree to all variables
z.solder <- tree(skips ~ ., data = solder.balance)

# fit classification tree to data in kyphosis data frame
z.kyphosis <- tree(kyphosis)