to find θ̂ ≡ θ̂(X₁, ..., Xₙ) such that L(θ̂) = sup{L(θ) : θ ∈ Θ}. See Section 1.6 for some review. Sometimes we take the natural logarithm of L(θ) first, and then maximize the logarithm instead to obtain θ̂. There are situations where one finds a unique solution, and situations where one finds more than one solution which globally maximizes L(θ). In some situations there may not be any solution which globally maximizes L(θ). But, from our discourse it will become clear that quite often a unique MLE of θ will exist and we will be able to find it explicitly. In the sequel, we temporarily write c throughout to denote a generic constant which does not depend upon the unknown parameter θ.
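Before turning to the examples, it may help to see this recipe in code. The following is a minimal numerical sketch of ours, not part of the text: it assumes a N(µ, 1) model with hypothetical simulated data, and checks that maximizing L(µ) directly and maximizing log L(µ) lead to the same maximizer (the choice of Python with numpy/scipy is an illustration only).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data from a N(4, 1) model, used only to illustrate the recipe.
rng = np.random.default_rng(1)
x = rng.normal(loc=4.0, scale=1.0, size=20)

def lik(mu):
    # L(mu), up to a constant free of mu
    return np.exp(-0.5 * np.sum((x - mu) ** 2))

def loglik(mu):
    # log L(mu), up to a constant free of mu
    return -0.5 * np.sum((x - mu) ** 2)

# Maximize both over a bounded interval; the maximizers coincide
# because the logarithm is strictly increasing.
m1 = minimize_scalar(lambda m: -lik(m), bounds=(0.0, 10.0), method="bounded")
m2 = minimize_scalar(lambda m: -loglik(m), bounds=(0.0, 10.0), method="bounded")
print(m1.x, m2.x, x.mean())  # all three essentially agree
```

In practice one also prefers working with log L(µ) for numerical stability: a product of many small density values can underflow, whereas the corresponding logarithms simply add.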
Example 7.2.6 Suppose that X₁, ..., Xₙ are iid N(µ, σ²) where µ is unknown but σ is known. Here we have −∞ < µ < ∞, 0 < σ < ∞ and χ = ℜ.
The likelihood function is given by

$$L(\mu) = \left(2\pi\sigma^{2}\right)^{-n/2} \exp\left\{ -\frac{1}{2\sigma^{2}} \sum_{i=1}^{n} (x_{i} - \mu)^{2} \right\},$$

which is to be maximized with respect to µ. This is equivalent to maximizing log L(µ) with respect to µ. Now, we have

$$\log L(\mu) = c - \frac{1}{2\sigma^{2}} \sum_{i=1}^{n} (x_{i} - \mu)^{2},$$

and hence,

$$\frac{d}{d\mu} \log L(\mu) = \frac{1}{\sigma^{2}} \sum_{i=1}^{n} (x_{i} - \mu) = \frac{n(\bar{x} - \mu)}{\sigma^{2}}.$$
Next, equate (d/dµ) log L(µ) to zero and solve for µ. But,

$$\frac{d}{d\mu} \log L(\mu) = 0 \;\text{ implies that }\; \sum_{i=1}^{n} (x_{i} - \mu) = 0,$$

and so we would say that µ̂ = x̄. At this step, our only concern should be to decide whether x̄ really maximizes log L(µ). Towards that end, observe that

$$\frac{d^{2}}{d\mu^{2}} \log L(\mu) = -\frac{n}{\sigma^{2}},$$

which is negative, and this shows that log L(µ), and hence L(µ), is globally maximized at µ = x̄. Thus the MLE for µ is µ̂ = X̄, the sample mean.
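The calculus above can also be double-checked symbolically. The sketch below is our illustration rather than part of the original example; it uses Python's sympy with a fixed n = 5 to confirm that the lone stationary point of log L(µ) is the sample mean and that the second derivative is −n/σ².

```python
import sympy as sp

n = 5  # any fixed sample size will do for this check
mu = sp.Symbol('mu', real=True)
sigma = sp.Symbol('sigma', positive=True)
x = sp.symbols('x1:6', real=True)  # x1, ..., x5

# log L(mu) up to the generic constant c that is free of mu
loglik = -sum((xi - mu) ** 2 for xi in x) / (2 * sigma ** 2)

d1 = sp.diff(loglik, mu)              # first derivative
root = sp.solve(sp.Eq(d1, 0), mu)[0]  # unique stationary point
d2 = sp.diff(loglik, mu, 2)           # second derivative

print(sp.simplify(root - sum(x) / n))  # 0: the stationary point is x-bar
print(sp.simplify(d2))                 # -5/sigma**2, negative for every mu
```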
Suppose that we had observed the following set of data from a normal population: x₁ = 11.4058, x₂ = 9.7311, x₃ = 10.2280, x₄ = 8.5678 and x₅ = 8.6006 with n = 5 and σ = 1.
Figure 7.2.1. Likelihood Function L(µ) When the Mean µ Varies from 2.5 to 19.7.
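A curve like the one in Figure 7.2.1 can be reproduced from these observations. The sketch below is ours, not the book's: it computes the sample mean and evaluates L(µ) on a grid over the plotted range, confirming that the peak of the likelihood sits at x̄.

```python
import numpy as np

# Data as listed above, with n = 5 and sigma = 1
x = np.array([11.4058, 9.7311, 10.2280, 8.5678, 8.6006])
n, sigma = len(x), 1.0

xbar = x.mean()
print("MLE (sample mean):", xbar)

# Evaluate L(mu) over the range shown in Figure 7.2.1 and locate the peak.
mu = np.linspace(2.5, 19.7, 2001)
L = (2 * np.pi * sigma ** 2) ** (-n / 2) * np.exp(
    -0.5 / sigma ** 2 * ((x[:, None] - mu) ** 2).sum(axis=0)
)
print("Grid maximizer of L(mu):", mu[L.argmax()])  # matches xbar to grid precision
```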