A1: The expressions $\frac{\partial}{\partial\theta}\log f(x;\theta)$, $\frac{\partial^{2}}{\partial\theta^{2}}\log f(x;\theta)$ and $\frac{\partial^{3}}{\partial\theta^{3}}\log f(x;\theta)$ are assumed finite for all $x \in \mathcal{X}$ and for all $\theta$ in an interval around the true unknown value of $\theta$.
A2: Consider the three integrals $\int_{\mathcal{X}}\frac{\partial}{\partial\theta}f(x;\theta)\,dx$, $\int_{\mathcal{X}}\frac{\partial^{2}}{\partial\theta^{2}}f(x;\theta)\,dx$ and $\int_{\mathcal{X}}\left\{\frac{\partial}{\partial\theta}\log f(x;\theta)\right\}^{2}f(x;\theta)\,dx$. The first two integrals amount to zero whereas the third integral is positive for the true unknown value of $\theta$.
A3: For every $\theta$ in an interval around the true unknown value of $\theta$, $\left|\frac{\partial^{3}}{\partial\theta^{3}}\log f(x;\theta)\right| < a(x)$ such that $E_{\theta}[a(X_{1})] < b$, where $b$ is a constant which is independent of $\theta$.
The assumptions A1-A3 are routinely satisfied by many standard distributions, for example, the binomial, Poisson, normal and exponential. In order to find the MLE for $\theta$, one frequently takes the derivative of the likelihood function $L(\theta) = \prod_{i=1}^{n} f(x_{i};\theta)$ (or, equivalently, of its logarithm) and then solves the likelihood equation:
$$\frac{\partial}{\partial\theta}\log L(\theta) = 0.$$
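As a quick sketch (the Poisson model is one of those cited above; the details are filled in here only for illustration), suppose $X_{1},\ldots,X_{n}$ are iid Poisson($\lambda$) with $\lambda > 0$ unknown. Then
$$\log L(\lambda) = -n\lambda + \Big(\sum_{i=1}^{n} x_{i}\Big)\log\lambda - \sum_{i=1}^{n}\log(x_{i}!),$$
so that the likelihood equation becomes
$$\frac{\partial}{\partial\lambda}\log L(\lambda) = -n + \frac{1}{\lambda}\sum_{i=1}^{n} x_{i} = 0,$$
whose unique root is $\widehat{\lambda} = \bar{x}$, the sample mean; the second derivative $-\sum_{i=1}^{n} x_{i}/\lambda^{2}$ is negative (when $\sum_{i=1}^{n} x_{i} > 0$), so $\bar{x}$ indeed maximizes $L(\lambda)$. One can also check directly that this model satisfies A1-A3: the first three $\lambda$-derivatives of $\log f(x;\lambda)$ are $x/\lambda - 1$, $-x/\lambda^{2}$ and $2x/\lambda^{3}$, all finite for $x = 0, 1, 2, \ldots$ and $\lambda > 0$, and the third expression in A2 (a sum in this discrete case) equals $E_{\lambda}[(X_{1}/\lambda - 1)^{2}] = 1/\lambda > 0$. Whether such a well-behaved root exists in general is exactly the issue addressed next.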
One may be tempted to ask: Does this equation necessarily have a solution?
If so, is the solution unique? The assumptions A1-A3 will guarantee that we
can answer both questions in the affirmative. For the record, we state the
following results:
Consistency of the MLE: Under the assumptions A1-A3, the likelihood equation admits a solution $\widehat{\theta}_{n}$ such that $\widehat{\theta}_{n} \to \theta$ in probability as $n \to \infty$.
In other words, the MLE of $\theta$ will stay close to the unknown but true value of $\theta$ with high probability when the sample size $n$ is sufficiently large.
Asymptotic normality of the MLE: Under the assumptions A1-A3, $\sqrt{n\,I_{X_{1}}(\theta)}\,(\widehat{\theta}_{n} - \theta) \to N(0,1)$ in distribution as $n \to \infty$, where $I_{X_{1}}(\theta) = E_{\theta}\big[\{\frac{\partial}{\partial\theta}\log f(X_{1};\theta)\}^{2}\big]$ is the Fisher information in a single observation (the third integral in A2).
In other words, a properly normalized version of the MLE of $\theta$ will converge (in distribution) to a standard normal variable when the sample size $n$ is large.
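To see these two results in action, here is a minimal simulation sketch (not part of the text). It assumes an exponential model with mean $\theta$, for which the MLE of $\theta$ is the sample mean and $I_{X_{1}}(\theta) = 1/\theta^{2}$; the model choice, seed, sample sizes and replication count are arbitrary illustrative choices.

```python
# Minimal simulation sketch (illustrative only): X_1, ..., X_n iid exponential
# with mean theta, so the MLE of theta is the sample mean and the single-
# observation Fisher information is I_X1(theta) = 1/theta^2.
import numpy as np

rng = np.random.default_rng(1)      # arbitrary seed
theta = 2.0                         # "true unknown" value of theta
reps = 10_000                       # Monte Carlo replications

for n in (10, 100, 1000):
    x = rng.exponential(scale=theta, size=(reps, n))
    mle = x.mean(axis=1)                             # MLE of theta
    z = np.sqrt(n / theta**2) * (mle - theta)        # sqrt(n I_X1(theta)) (MLE - theta)
    print(f"n={n:4d}  "
          f"P(|MLE - theta| > 0.25) = {np.mean(np.abs(mle - theta) > 0.25):.4f}  "
          f"var(z) = {z.var():.3f}  "
          f"P(z <= 1.645) = {np.mean(z <= 1.645):.3f}")  # compare with 0.95
```

As $n$ grows, the empirical probability that the MLE misses $\theta$ by more than 0.25 shrinks toward zero (consistency), while the variance of the normalized statistic stays near one and its quantiles move toward those of the standard normal (asymptotic normality).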
In a variety of situations, the asymptotic variance of the MLE coincides with $1/I_{\mathbf{X}}(\theta)$, where $I_{\mathbf{X}}(\theta) = n\,I_{X_{1}}(\theta)$ denotes the Fisher information in the whole sample. One may recall that $1/I_{\mathbf{X}}(\theta)$ is the Cramér-Rao lower bound (CRLB) for the variance of unbiased estimators of $\theta$. What we are claiming then is this: in many routine problems, the variance of the MLE of $\theta$ has asymptotically the smallest possible value. This phenomenon was referred to as the asymptotic efficiency property of the MLE by Fisher (1922, 1925a).
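To make the efficiency claim concrete in one simple case (a sketch, continuing the exponential model with mean $\theta$ used above): there $I_{X_{1}}(\theta) = 1/\theta^{2}$, so the CRLB is $1/I_{\mathbf{X}}(\theta) = \theta^{2}/n$, while the MLE $\bar{X}$ is unbiased with $\mathrm{Var}_{\theta}(\bar{X}) = \theta^{2}/n$; the bound is attained exactly for every $n$, and hence certainly in the limit. In less tidy models the MLE is typically biased for finite $n$, and the CRLB is matched only in this asymptotic sense.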

