To assess the precision of a laboratory scale, we measure a block known to have a mass of 1 gram. We measure the block n times and record the mean x̄ of the measurements. Suppose the scale readings are normally distributed with unknown mean µ and standard deviation σ = 0.001 g. How large should n be so that a 95% confidence interval for µ has a margin of error of ±0.0001 g?
<span>n = 385
The formula for the confidence interval (CI) is
CI = m ± z*d/sqrt(n)
where
CI = confidence interval
m = mean
z = z value from the standard normal table for the desired confidence level
d = standard deviation
n = number of samples
Since we want a 95% confidence interval, the area between −z and z must be 95%, so the area from 0 to z (which is what a standard normal table lists) is half of that:
95/2 = 47.5% = 0.475
Looking up 0.475 in a standard normal table gives us a z value of 1.96
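Instead of a printed table, the z value can also be computed with Python's standard library (a sketch using `statistics.NormalDist`, available in Python 3.8+):

```python
from statistics import NormalDist

# For a 95% CI, 2.5% of the probability lies in each tail,
# so z is the 97.5th percentile of the standard normal distribution.
z = NormalDist().inv_cdf(0.975)
print(round(z, 2))  # 1.96
```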
Since we want the margin of error to be ±0.0001, we set the expression ±z*d/sqrt(n) equal to ±0.0001. To simplify, we can drop the ± and use the equation
0.0001 = z*d/sqrt(n)
Substitute the value z that we looked up, and get
0.0001 = 1.96*d/sqrt(n)
Substitute the standard deviation that we were given and
0.0001 = 1.96*0.001/sqrt(n)
0.0001 = 0.00196/sqrt(n)
Solve for n
0.0001*sqrt(n) = 0.00196
sqrt(n) = 19.6
n = 19.6^2 = 384.16
Since you can't have a fractional number of measurements, round up: n should be at least 385 for a 95% confidence interval that the measured mean is within 0.0001 grams of the correct mass.</span>
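The whole sample-size calculation can be checked with a short Python sketch (the variable names here are mine, not from the problem):

```python
import math

z = 1.96        # z value for 95% confidence
sigma = 0.001   # standard deviation of scale readings, in grams
E = 0.0001      # desired margin of error, in grams

# Margin of error: E = z * sigma / sqrt(n)  =>  n = (z * sigma / E)^2
n = (z * sigma / E) ** 2
print(n)             # ≈ 384.16
print(math.ceil(n))  # 385
```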
x^2 + x - 30 -- find two numbers that add to 1 (the coefficient of x) and multiply to -30; those are 6 and -5. So now you have x^2 + 6x - 5x - 30. Factor out x from the first pair and -5 from the second to get x(x+6) - 5(x+6), which gives (x+6)(x-5).
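As a quick sanity check, the factorization can be verified numerically at a few sample points (plain Python, no libraries):

```python
# Verify x^2 + x - 30 == (x + 6)(x - 5) for several values of x.
for x in [-6, -1, 0, 2, 5, 10]:
    assert x**2 + x - 30 == (x + 6) * (x - 5)
print("factorization checks out")
```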