The matrix G would be n.d. if and only if its odd order principal minors are
negative and all even order principal minors are positive. Refer to (4.8.6) as
needed. In this case, the first diagonal entry is −nu⁻¹, which is negative, and det(G) = ½n²u⁻³, which is positive. In other words, G is a n.d. matrix. Thus, L(µ, σ²) is globally maximized at µ = x̄, σ² = u. That is, the MLEs of µ and σ² are respectively X̄ and u = n⁻¹∑ᵢ₌₁ⁿ(Xᵢ − X̄)². !
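A short numerical sketch can corroborate this sign pattern. The Python snippet below is only an illustration (the simulated data, seed, and parameter values are arbitrary assumptions); it evaluates the Hessian G of the normal log-likelihood with respect to (µ, υ), υ = σ², at (x̄, u) and checks that the odd-order principal minor is negative while det(G) is positive.

```python
import numpy as np

# Illustrative check of the principal-minor criterion for the Hessian G of
# log L(mu, v), v = sigma^2, evaluated at mu = xbar and v = u = mean((x - xbar)^2).
rng = np.random.default_rng(1)                     # arbitrary seed (assumption)
x = rng.normal(loc=5.0, scale=2.0, size=50)        # simulated sample (assumption)
n = x.size

xbar = x.mean()
u = np.mean((x - xbar) ** 2)                       # candidate MLE of sigma^2 (divides by n)

# Hessian entries at (xbar, u):
#   d^2 logL / d mu^2   = -n / v
#   d^2 logL / d mu d v = -(1/v^2) * sum(x_i - mu)      -> 0 at mu = xbar
#   d^2 logL / d v^2    = n/(2 v^2) - (1/v^3) * sum(x_i - mu)^2 -> -n/(2 u^2) at (xbar, u)
G = np.array([[-n / u, 0.0],
              [0.0, -n / (2.0 * u ** 2)]])

first_minor = G[0, 0]                              # odd-order principal minor: should be < 0
det_G = np.linalg.det(G)                           # even-order principal minor: should be > 0
print(f"first diagonal = {first_minor:.4f} (< 0), det(G) = {det_G:.4f} (> 0)")
print("negative definite:", first_minor < 0 and det_G > 0)
```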
Next we include a few examples to highlight the point that L(θ) may not be a differentiable function of θ at the point where L(θ) attains its global maximum. In such situations, the process of finding the MLE turns out to be a little different on a case-by-case basis.
Example 7.2.8 Suppose that we have a single observation X which is distributed as Bernoulli(p) where 0 ≤ p ≤ 1 is the unknown parameter. Here, we have L(p) = pˣ(1 − p)¹⁻ˣ for x = 0, 1.
Whether we observe x = 0 or 1, the resulting likelihood function L(p) is not
differentiable at the end points. But, by simply drawing a picture of L(p) one
can verify that (i) when x = 0 then L(p) is maximized if p is the smallest, that
is if p = 0, and (ii) when x = 1 then L(p) is maximized if p is the largest, that
is if p = 1. Hence the MLE of p is p̂ = X when Θ = [0, 1].
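One can mimic the picture numerically. The sketch below is only an illustration (the grid resolution is an arbitrary choice): it evaluates L(p) = pˣ(1 − p)¹⁻ˣ over a fine grid of p values in [0, 1] for each observed x and reports the maximizing p.

```python
import numpy as np

# Grid evaluation of the Bernoulli likelihood over Theta = [0, 1].
p_grid = np.linspace(0.0, 1.0, 1001)               # grid resolution is an arbitrary choice

for x in (0, 1):
    L = p_grid ** x * (1.0 - p_grid) ** (1 - x)    # L(p) = p^x (1 - p)^(1 - x)
    p_hat = p_grid[np.argmax(L)]
    print(f"x = {x}: L(p) maximized at p = {p_hat:.3f}")
# Expected: p_hat = 0 when x = 0 and p_hat = 1 when x = 1, that is, the MLE is X.
```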
But, if the parameter space happens to be Θ = [1/3, 2/3] instead, then
what will be the MLE of p? Again, L(p) is maximized at the end points
where L(p) is not differentiable. By examining the simple picture of L(p)
in Figure 7.2.2, it becomes clear in this situation that (i) when x = 0,
L(p) is maximized if p is the smallest, that is if p = 1/3, and (ii) when x =
1, L(p) is maximized if p is the largest, that is if p = 2/3. Hence the MLE of
p is p̂ = (1 + X)/3 if the parameter space happens to be Θ = [1/3, 2/3]. !
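The same grid idea, restricted to Θ = [1/3, 2/3], illustrates the second conclusion; again this is only a sketch with an arbitrarily chosen grid resolution.

```python
import numpy as np

# Repeat the grid search over the restricted parameter space Theta = [1/3, 2/3].
p_grid = np.linspace(1.0 / 3.0, 2.0 / 3.0, 1001)

for x in (0, 1):
    L = p_grid ** x * (1.0 - p_grid) ** (1 - x)
    p_hat = p_grid[np.argmax(L)]
    print(f"x = {x}: L(p) maximized at p = {p_hat:.3f}")
# Expected: p_hat = 1/3 when x = 0 and p_hat = 2/3 when x = 1, i.e. p_hat = (1 + X)/3.
```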
Example 7.2.9 Suppose that X₁, ..., Xₙ are iid Uniform(0, θ) where 0 < θ < ∞ is the unknown parameter. Here χ = (0, θ) and Θ = ℜ⁺. We