Khan (1968) considered this estimation problem, but he first focused on
the status of the MLE. Our goal is slightly different. Utilizing the two initial
unbiased estimators T, T′, we focus on the Rao-Blackwellized versions and
compare their variances. This was not emphasized in Khan (1968). We find
that W′ can be better than W even when n is small (n ≥ 3); for large n, the
estimator W′ has approximately fifty percent less variation than the
estimator W.
Khan (1968) proceeded to derive another unbiased estimator for θ which
performed better than both W and W′. The basic idea was simple. Let us look
at the following class of unbiased estimators of θ:

    T*(α) = αW + (1 − α)W′ with α ∈ [0, 1].

For each α ∈ [0, 1], T*(α) is unbiased for θ and hence the one having the
smallest variance should be more attractive than either W or W′. Since X̄ and
S² are independent, we have

    V_θ(T*(α)) = α²V_θ(W) + (1 − α)²V_θ(W′).
Now, V_θ(T*(α)) can be minimized directly with respect to the choice of α.
The optimal choice of α is given by

    α* = V_θ(W′)/{V_θ(W) + V_θ(W′)}.                              (7.6.5)
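The minimization itself is a one-line calculus check:

    (d/dα)V_θ(T*(α)) = 2αV_θ(W) − 2(1 − α)V_θ(W′) = 0
                     ⇔ α = V_θ(W′)/{V_θ(W) + V_θ(W′)},

and since (d²/dα²)V_θ(T*(α)) = 2{V_θ(W) + V_θ(W′)} > 0, this stationary
point α* is the unique minimizer.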
In view of (7.6.3), α* reduces to 1/3 for large n. With α* determined by
(7.6.5), the corresponding unbiased estimator T*(α*) would have its
variance smaller than that of both W and W′. One will find interesting
decision-theoretic considerations in Gleser and Healy (1976). Lehmann
(1983, page 89) mentions an unpublished thesis of Unni (1978) which showed
that a UMVUE of θ does not exist in the present situation.
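The variance comparison is easy to see in simulation. The following minimal
Monte Carlo sketch assumes the N(θ, θ²) model of this section and takes
W = X̄ and W′ = S/c_n, with c_n = E(S)/σ the usual unbiasedness constant, as
stand-in forms for the two Rao-Blackwellized estimators; these forms, the
sample size, and the variable names are illustrative assumptions rather
than Khan's (1968) construction.

    # Monte Carlo sketch: combine two independent unbiased estimators of
    # theta in the N(theta, theta^2) model and check that alpha* from
    # (7.6.5) minimizes the variance.  ASSUMED forms: W = sample mean,
    # W' = S/c_n -- illustrative stand-ins only.
    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(7)
    theta, n, reps = 2.0, 25, 200_000

    # c_n = E[S]/sigma for a normal sample, so W' = S/c_n is unbiased
    c_n = np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

    x = rng.normal(loc=theta, scale=theta, size=(reps, n))
    W = x.mean(axis=1)                  # unbiased: E[Xbar] = theta
    Wp = x.std(axis=1, ddof=1) / c_n    # unbiased: E[S]/c_n = sigma = theta

    vW, vWp = W.var(), Wp.var()
    a_star = vWp / (vW + vWp)           # optimal weight, equation (7.6.5)
    print("alpha* =", round(a_star, 3))

    for a in (0.0, 1.0 / 3.0, a_star, 1.0):
        T = a * W + (1.0 - a) * Wp      # T*(alpha) = alpha W + (1-alpha) W'
        print(f"alpha = {a:.3f}   empirical variance = {T.var():.6f}")

With n = 25 the printed α* comes out near 0.345, close to the limiting
value 1/3, and the empirical variance of T*(α) is smallest at α = α*.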
Example 7.6.1 Let us suppose that one has X₁, ..., Xₙ iid with the common
pdf f(x; θ) = θ⁻¹exp{−(x − θ)/θ}I(x > θ), where θ ∈ Θ = ℜ⁺ is the
unknown parameter. Here, we wish to estimate θ unbiasedly. Recall from
Exercise 6.6.5 that the statistic (X_{n:1}, X̄) is not complete but it is
sufficient for θ. One should check into the ramifications of the preceding
discussions in the context of the present problem. We leave this out as
Exercise 7.6.1.
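As a quick orientation for that exercise: in this model E_θ(X̄) = 2θ and
E_θ(X_{n:1}) = θ(n + 1)/n, so that ½X̄ and {n/(n + 1)}X_{n:1} can play the
roles of the two initial unbiased estimators T and T′. Their difference is
a non-zero function of (X_{n:1}, X̄) with zero expectation for every θ,
which is precisely why this sufficient statistic fails to be complete.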