t.test(x, y=NULL, alternative="two.sided", mu=0, paired=F, var.equal=T, conf.level=.95)
The alternative hypothesis in each case indicates the direction of divergence of the population mean for x (or difference of means for x and y) from mu (i.e., "greater", "less", "two.sided").
The t-test and the associated confidence interval are quite robust with respect to level toward heavy-tailed non-Gaussian distributions (e.g., data with outliers). However, the t-test is quite non-robust with respect to power, and the confidence interval is quite non-robust with respect to average length, toward these same types of distributions.
The arguments y, paired and var.equal determine the type of test.

(a) One-Sample t-Test.

If y is NULL, a one-sample t-test is carried out with x. The statistic is given by

  t <- (mean(x) - mu) / ( sqrt(var(x)) / sqrt(length(x)) )

If x was drawn from a normal population, t has a t-distribution with length(x) - 1 degrees of freedom under the null hypothesis.
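As a cross-check of the one-sample formula above, here is a minimal sketch in Python rather than S; the data vector and the value of mu are invented for illustration. Note that Python's statistics.variance uses the n - 1 denominator, matching var in S.

```python
import math
import statistics

def one_sample_t(x, mu=0.0):
    # t <- (mean(x) - mu) / (sqrt(var(x)) / sqrt(length(x)))
    n = len(x)
    t = (statistics.mean(x) - mu) / (math.sqrt(statistics.variance(x)) / math.sqrt(n))
    return t, n - 1  # statistic and its degrees of freedom

# Hypothetical data: five measurements tested against mu = 5.0.
x = [4.8, 5.1, 5.3, 4.7, 5.6]
t, df = one_sample_t(x, mu=5.0)
print(t, df)  # t is about 0.609 with 4 degrees of freedom
```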
(b) Paired t-Test.
If y is not NULL and paired=TRUE, a paired t-test is performed; the statistic is defined by

  t <- (mean(d) - mu) / ( sqrt(var(d)) / sqrt(length(d)) )

where d is the vector of differences x - y. Under the null hypothesis, t follows a t-distribution with length(d) - 1 degrees of freedom, assuming normality of the differences d.
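The paired case reduces to the one-sample formula applied to the differences, which the following Python sketch makes explicit; the before/after values are made up for illustration.

```python
import math
import statistics

def paired_t(x, y, mu=0.0):
    # d is the vector of differences x - y; the one-sample
    # statistic is then computed on d.
    d = [xi - yi for xi, yi in zip(x, y)]
    n = len(d)
    t = (statistics.mean(d) - mu) / (math.sqrt(statistics.variance(d)) / math.sqrt(n))
    return t, n - 1

# Hypothetical before/after measurements on the same four subjects.
after = [10.0, 14.0, 10.0, 11.0]
before = [12.0, 15.0, 11.0, 14.0]
t, df = paired_t(after, before)
print(t, df)  # t is about -3.656 with 3 degrees of freedom
```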
(c) Standard Two-Sample t-Test.
If y is not NULL and paired=FALSE, either a standard or a Welch modified two-sample t-test is performed, depending on whether var.equal is TRUE or FALSE. For the standard t-test the statistic is

  t <- (mean(x) - mean(y) - mu) / s1

where

  s1 <- sp * sqrt(1/nx + 1/ny)
  sp <- sqrt( ( (nx-1)*var(x) + (ny-1)*var(y) ) / (nx + ny - 2) )
  nx <- length(x)
  ny <- length(y)

Assuming that x and y come from normal populations with equal variances, t has a t-distribution with nx + ny - 2 degrees of freedom under the null hypothesis.
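A Python sketch of the pooled-variance statistic above; the two samples are invented for illustration (their sample variances are 1 and 4).

```python
import math
import statistics

def pooled_two_sample_t(x, y, mu=0.0):
    nx, ny = len(x), len(y)
    # sp: pooled standard deviation; s1: standard error of the
    # difference in means under the equal-variance assumption.
    sp = math.sqrt(((nx - 1) * statistics.variance(x)
                    + (ny - 1) * statistics.variance(y)) / (nx + ny - 2))
    s1 = sp * math.sqrt(1 / nx + 1 / ny)
    t = (statistics.mean(x) - statistics.mean(y) - mu) / s1
    return t, nx + ny - 2

# Hypothetical samples.
t, df = pooled_two_sample_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(t, df)  # t is about -1.549 with 4 degrees of freedom
```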
(d) Welch Modified Two-Sample t-Test.
If y is not NULL, paired=FALSE and var.equal=FALSE, the Welch modified two-sample t-test is performed. In this case the statistic is

  t <- (mean(x) - mean(y) - mu) / s2

where

  s2 <- sqrt( var(x)/nx + var(y)/ny )
  nx <- length(x)
  ny <- length(y)

If x and y come from normal populations, the distribution of t under the null hypothesis can be approximated by a t-distribution with (non-integral) degrees of freedom

  1 / ( (c^2)/(nx-1) + ((1-c)^2)/(ny-1) )

where c <- var(x) / (nx * s2^2).
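The Welch statistic and its approximate degrees of freedom can be sketched in Python as follows; the samples are the same invented ones used to illustrate the standard two-sample case.

```python
import math
import statistics

def welch_t(x, y, mu=0.0):
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    s2 = math.sqrt(vx / nx + vy / ny)
    t = (statistics.mean(x) - statistics.mean(y) - mu) / s2
    # Satterthwaite approximation for the degrees of freedom.
    c = (vx / nx) / s2 ** 2
    df = 1 / (c ** 2 / (nx - 1) + (1 - c) ** 2 / (ny - 1))
    return t, df

# Hypothetical samples of equal size.
t, df = welch_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(t, df)  # t is about -1.549 with about 2.94 degrees of freedom
```

Note that when nx equals ny, s2 coincides with s1 from the standard test, so the two statistics agree; only the degrees of freedom differ.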
Hogg, R. V. and Craig, A. T. (1970). Introduction to Mathematical Statistics, 3rd ed. Toronto, Canada: Macmillan.
Mood, A. M., Graybill, F. A. and Boes, D. C. (1974). Introduction to the Theory of Statistics, 3rd ed. New York: McGraw-Hill.
Snedecor, G. W. and Cochran, W. G. (1980). Statistical Methods, 7th ed. Ames, Iowa: Iowa State University Press.
t.test(x)
        # Two-sided one-sample t-test. The null hypothesis is
        # that the population mean for 'x' is zero. The
        # alternative hypothesis states that it is either greater
        # or less than zero. A confidence interval for the
        # population mean will be computed.

t.test(data.after, data.before, alternative="less", paired=T)
        # One-sided paired t-test. The null hypothesis is that
        # the population mean "before" and the one "after" are
        # the same, or equivalently that the mean change ("after"
        # minus "before") is zero. The alternative hypothesis is
        # that the mean "after" is less than the one "before",
        # or equivalently that the mean change is negative. A
        # confidence interval for the mean change will be
        # computed.

t.test(x, y, mu=2)
        # Two-sided standard two-sample t-test. The null
        # hypothesis is that the population mean for 'x' minus
        # that for 'y' is 2. The alternative hypothesis is that
        # this difference is not 2. A confidence interval for
        # the true difference will be computed.

t.test(x, y, var.equal=F, conf.level=0.90)
        # Two-sided Welch modified two-sample t-test. The null
        # hypothesis is that the population means for 'x' and 'y'
        # are the same. The alternative hypothesis is that they
        # are not. The confidence interval for the difference in
        # true means ('x' minus 'y') will have a confidence level
        # of 0.90.