Answer:
x1 <- c(16.7,17.4,18.4,16.8,18.9,17.1,17.3,18.2,21.3,21.2,20.7,18.5)
x2 <- c(30,42,47,47,43,41,48,44,43,50,56,60)
y  <- c(210,110,103,103,91,76,73,70,68,53,45,31)
mod <- lm(y ~ x1 + x2)
summary(mod)
R output:

Call:
lm(formula = y ~ x1 + x2)
Residuals:
Min 1Q Median 3Q Max
-41.730 -12.174 0.791 12.374 40.093
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 415.113 82.517 5.031 0.000709 ***
x1 -6.593 4.859 -1.357 0.207913
x2 -4.504 1.071 -4.204 0.002292 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 24.45 on 9 degrees of freedom
Multiple R-squared: 0.768, Adjusted R-squared: 0.7164
F-statistic: 14.9 on 2 and 9 DF, p-value: 0.001395
a). The fitted regression equation is ŷ = 415.113 - 6.593 x1 - 4.504 x2
b). s = 24.45 (the residual standard error, on 9 degrees of freedom)
c). ŷ = 415.113 - 6.593(21.3) - 4.504(43) = 81.0101
residual = y - ŷ = 68 - 81.0101 = -13.0101
(With unrounded coefficients, R's predict() gives ŷ = 81.0336 and residual = -13.0336; the small difference is due to rounding.)
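The hand calculation above can be checked with predict(), which uses the unrounded coefficients (a quick sketch; the data and model are the same as defined at the top of this answer):

```r
x1 <- c(16.7,17.4,18.4,16.8,18.9,17.1,17.3,18.2,21.3,21.2,20.7,18.5)
x2 <- c(30,42,47,47,43,41,48,44,43,50,56,60)
y  <- c(210,110,103,103,91,76,73,70,68,53,45,31)
mod <- lm(y ~ x1 + x2)

# Fitted value for observation 9 (x1 = 21.3, x2 = 43)
fit9 <- unname(predict(mod, data.frame(x1 = 21.3, x2 = 43)))
# Residual = observed - fitted
res9 <- y[9] - fit9
# fit9 is approximately 81.034 and res9 approximately -13.034
```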
d). F = 14.9 on 2 and 9 degrees of freedom, p-value = 0.0014
Since p < 0.05, there is convincing evidence that at least one of the explanatory variables is a significant predictor of the response.
e). newdata <- data.frame(x1 = 21.3, x2 = 43)
# confidence interval for the mean response
> predict(mod, newdata, interval = "confidence", level = 0.95)
       fit      lwr      upr
1 81.03364 43.52379 118.5435
95% CI = (43.52, 118.54)
f). # prediction interval for an individual response
> predict(mod, newdata, interval = "prediction", level = 0.95)
       fit      lwr      upr
1 81.03364 14.19586 147.8714
95% PI = (14.20, 147.87)
g). No. The t-test for x1 has p-value 0.2079 > 0.05, so there is no evidence that x1 is a significant predictor once x2 is in the model; x1 should be dropped from the model.
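If x1 is dropped as suggested, the reduced model can be refit and compared to the full model with a partial F-test; a sketch using the same data (for a single dropped term, the partial F-test p-value equals the t-test p-value for x1, 0.2079):

```r
x1 <- c(16.7,17.4,18.4,16.8,18.9,17.1,17.3,18.2,21.3,21.2,20.7,18.5)
x2 <- c(30,42,47,47,43,41,48,44,43,50,56,60)
y  <- c(210,110,103,103,91,76,73,70,68,53,45,31)

full    <- lm(y ~ x1 + x2)
reduced <- lm(y ~ x2)      # model with x1 dropped

summary(reduced)           # x2 should remain a significant predictor
anova(reduced, full)       # partial F-test for dropping x1
```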