Θ̂ = h(X₁, X₂, …, Xₙ) is called a point estimator of θ. Note that Θ̂ is a random variable because it is a function of random variables. After the sample has been selected, Θ̂ takes on a particular numerical value θ̂ called the point estimate of θ.
Point Estimator
A point estimate of some population parameter θ is a single numerical value θ̂ of a statistic Θ̂. The statistic Θ̂ is called the point estimator.
As an example, suppose that the random variable X is normally distributed with an unknown mean μ. The sample mean is a point estimator of the unknown population mean μ. That is, μ̂ = X̄. After the sample has been selected, the numerical value x̄ is the point estimate of μ. Thus, if x₁ = 25, x₂ = 30, x₃ = 29, and x₄ = 31, the point estimate of μ is

x̄ = (25 + 30 + 29 + 31)/4 = 28.75
Similarly, if the population variance σ² is also unknown, a point estimator for σ² is the sample variance S², and the numerical value s² = 6.9 calculated from the sample data is called the point estimate of σ².
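To make the arithmetic concrete, here is a minimal Python sketch (not from the text) that computes the two point estimates from the four observations above; the variable names are illustrative only.

```python
# Sketch: point estimates of the mean and variance from the sample above.
x = [25, 30, 29, 31]
n = len(x)

# Point estimate of mu: the sample mean x-bar.
x_bar = sum(x) / n                                        # 28.75

# Point estimate of sigma^2: the sample variance s^2 (divisor n - 1).
s_squared = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)  # about 6.92, i.e. the 6.9 quoted above

print(x_bar, s_squared)
```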
Estimation problems occur frequently in engineering. We often need to estimate
• The mean μ of a single population
• The variance σ² (or standard deviation σ) of a single population
• The proportion p of items in a population that belong to a class of interest
• The difference in means of two populations, μ₁ − μ₂
• The difference in two population proportions, p₁ − p₂
Reasonable point estimates of these parameters are as follows:
• For μ, the estimate is μ̂ = x̄, the sample mean.
• For σ², the estimate is σ̂² = s², the sample variance.
• For p, the estimate is p̂ = x/n, the sample proportion, where x is the number of items in a random sample of size n that belong to the class of interest.
• For μ₁ − μ₂, the estimate is μ̂₁ − μ̂₂ = x̄₁ − x̄₂, the difference between the sample means of two independent random samples.
• For p₁ − p₂, the estimate is p̂₁ − p̂₂, the difference between two sample proportions computed from two independent random samples.
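The following Python sketch computes each of the estimates listed above from two hypothetical independent samples; the data values, counts, and names are invented for illustration and are not from the text.

```python
# Sketch: the standard point estimates listed above, computed from
# two hypothetical independent samples (numbers are made up).
def mean(data):
    return sum(data) / len(data)

def sample_variance(data):
    x_bar = mean(data)
    return sum((x - x_bar) ** 2 for x in data) / (len(data) - 1)

sample1 = [12.1, 11.8, 12.4, 12.0, 11.9]
sample2 = [11.5, 11.9, 11.7, 11.6, 11.8]

mu_hat = mean(sample1)                      # estimate of mu
sigma2_hat = sample_variance(sample1)       # estimate of sigma^2
p_hat = 8 / 50                              # x = 8 items of interest in n = 50
diff_means = mean(sample1) - mean(sample2)  # estimate of mu1 - mu2
diff_props = 8 / 50 - 5 / 50                # estimate of p1 - p2

print(mu_hat, sigma2_hat, p_hat, diff_means, diff_props)
```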
We may have several different choices for the point estimator of a parameter. For example, if we
wish to estimate the mean of a population, we might consider the sample mean, the sample median,
or perhaps the average of the smallest and largest observations in the sample as point estimators. To
decide which point estimator of a particular parameter is the best one to use, we need to examine
their statistical properties and develop some criteria for comparing estimators.
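One informal way to compare such candidate estimators is by simulation. The sketch below is an illustration only (it is not from the text and assumes a normal population): it draws many samples and compares the variability of the sample mean, the sample median, and the average of the smallest and largest observations as estimators of μ.

```python
# Sketch: comparing three candidate estimators of mu by simulation.
import random
import statistics

random.seed(1)
mu, sigma, n, reps = 50.0, 2.0, 10, 5000

means, medians, midranges = [], [], []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))
    midranges.append((min(sample) + max(sample)) / 2)

# All three center near mu, but their spreads differ; for normal data
# the sample mean typically shows the smallest variance.
for name, est in [("mean", means), ("median", medians), ("midrange", midranges)]:
    print(name, round(statistics.mean(est), 3), round(statistics.variance(est), 4))
```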
7-2 Sampling Distributions and the Central Limit Theorem
Statistical inference is concerned with making decisions about a population based on the information contained in a random sample from that population. For instance, we may be interested in the mean fill volume of a container of soft drink. The mean fill volume in the population is required to be 300 milliliters. An engineer takes a random sample of 25 containers and computes the sample

