else is called unbiasedness of an estimator, and we follow this up with a notion of the best estimator among all unbiased estimators. Sections 7.4 and 7.5 include several fundamental results, for example, the Rao-Blackwell Theorem, the Cramér-Rao inequality, and the Lehmann-Scheffé Theorems. This machinery is useful in finding the best unbiased estimator of $\theta$ in different situations. Section 7.6 addresses a situation which arises when the Rao-Blackwellization technique is used but the minimal sufficient statistic is not complete. In Section 7.7, an attractive large sample criterion called consistency, proposed by Fisher (1922), is discussed.
7.2 Finding Estimators
Consider iid and observable real valued random variables $X_1, \ldots, X_n$ from a population with the common pmf or pdf $f(x; \boldsymbol{\theta})$ where the unknown parameter $\boldsymbol{\theta} \in \Theta \subseteq \Re^k$. It is not essential for the Xs to be real valued or iid. But, in many examples they will be so, and hence we assume that the Xs are real valued and iid unless specified otherwise. As before, we denote $\mathbf{X} = (X_1, \ldots, X_n)$.
Definition 7.2.1 An estimator or a point estimator of the unknown parameter $\boldsymbol{\theta}$ is merely a function $T = T(X_1, \ldots, X_n)$ which is allowed to depend only on the observable random variables $X_1, \ldots, X_n$. That is, once a particular data $\mathbf{X} = \mathbf{x}$ has been observed, the numerical value of $T(\mathbf{x})$ must be computable. We distinguish between $T(\mathbf{X})$ and $T(\mathbf{x})$ by referring to them as an estimator and an estimate of $\boldsymbol{\theta}$, respectively.
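To make the distinction concrete, here is a minimal Python sketch; the sample values and the choice of the sample mean as T are hypothetical, added only for illustration. The function T plays the role of the estimator $T(\mathbf{X})$, and the number it returns for observed data $\mathbf{x}$ is the estimate $T(\mathbf{x})$.

```python
import numpy as np

# An estimator is a rule: a computable function of the observable sample alone.
# Here T is the sample mean, one possible estimator of a population mean
# (the choice of T and the data below are purely illustrative).
def T(x):
    return np.mean(x)

# A particular observed realization X = x (hypothetical numbers).
x = np.array([4.2, 5.1, 3.8, 4.9, 5.0])

# T, the function itself, corresponds to the estimator T(X);
# T(x), the number it returns, is the estimate.
estimate = T(x)
print(estimate)  # 4.6
```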
An arbitrary estimator T of a real valued parameter $\theta$, for example, can be practically any function which depends on the observable random variables alone. In some problems, we may think of $X_1$, $S^2$, and so on as competing estimators. At this point, the only restriction we have to watch for is that T must be computable in order to qualify as an estimator. In the following sections, two different methods are provided for locating competing estimators of $\boldsymbol{\theta}$.
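To see how such otherwise arbitrary functions compete, here is a hedged simulation sketch; the normal population, the seed, and the simulation settings are assumptions of this example, not part of the text. Both $X_1$ and $\bar{X}$ are computable from the sample and thus qualify as estimators of the population mean, while $S^2$ estimates the variance; their differing sampling variabilities anticipate the search for a best estimator mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 10.0, 2.0, 25, 10_000

# Draw many iid N(mu, sigma^2) samples of size n.
samples = rng.normal(mu, sigma, size=(reps, n))

# Three competing estimators, each a computable function of the sample:
first_obs = samples[:, 0]                  # T1(X) = X_1
sample_mean = samples.mean(axis=1)         # T2(X) = X-bar
sample_var = samples.var(axis=1, ddof=1)   # S^2, an estimator of sigma^2

# Both X_1 and X-bar center near mu, but X-bar fluctuates far less:
# Var(X_1) = sigma^2, while Var(X-bar) = sigma^2 / n.
print(first_obs.mean(), first_obs.var())      # approx. 10 and 4
print(sample_mean.mean(), sample_mean.var())  # approx. 10 and 4/25 = 0.16
print(sample_var.mean())                      # approx. sigma^2 = 4
```

Nothing in the computability requirement distinguishes $\bar{X}$ from $X_1$; criteria for preferring one computable rule over another are exactly what the later sections of this chapter develop.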
7.2.1 The Method of Moments
During the late nineteenth and early twentieth centuries, Karl Pearson was the key figure in the major methodological developments in statistics. During his long career, Karl Pearson broke new ground on many fronts. He originated innovative ideas of curve fitting to observational data and did fundamental research on correlation and causation in a series of multivariate data