11.3 Convergence in Probability
it follows that
$$\log \varphi_n(t) = n\, o\!\left(\frac{t}{n}\right) \to 0 \quad \text{as } n \to \infty.$$
Hence, by Theorem 11.2, $X_n \xrightarrow{D} 0$ as $n \to \infty$; it now follows from Corollary 11.3 that $X_n \xrightarrow{p} 0$ as $n \to \infty$. This is one version of the weak law of large numbers.
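As a purely illustrative check of this conclusion (not part of the text), the following Python sketch estimates $P(|X_n| > \epsilon)$ for sample means of i.i.d. Uniform(−1, 1) variables; the distribution and sample sizes are arbitrary choices made only for the illustration. The estimated probabilities shrink toward 0 as $n$ grows, as the weak law predicts.

import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
for n in [10, 100, 1000, 10000]:
    # 2000 replications of the sample mean X_n of n Uniform(-1, 1) draws
    means = rng.uniform(-1.0, 1.0, size=(2000, n)).mean(axis=1)
    # estimated P(|X_n| > eps); decreases toward 0 as n increases
    print(n, np.mean(np.abs(means) > eps))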
Example 11.16 (Mean of Cauchy random variables). Let $Y_n$, $n = 1, 2, \ldots$ denote a sequence of independent, identically distributed random variables such that each $Y_j$ has a standard Cauchy distribution; recall that the mean of this distribution does not exist, so that the result in Example 11.15 does not apply.
Let
$$X_n = \frac{1}{n}(Y_1 + \cdots + Y_n), \quad n = 1, 2, \ldots.$$
The characteristic function of the standard Cauchy distribution is $\exp(-|t|)$, so that the characteristic function of $X_n$ is given by
$$\varphi_n(t) = \exp(-|t|/n)^n = \exp(-|t|).$$
Hence, $X_n$ does not converge in probability to 0; in fact, $X_n$ also has a standard Cauchy distribution.
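A short simulation (again not from the text) is consistent with this: the fraction of replications with $|X_n| > 1$ stays near $P(|Y_1| > 1) = 1/2$ regardless of $n$, rather than shrinking to 0 as in the preceding example.

import numpy as np

rng = np.random.default_rng(0)
for n in [10, 100, 1000, 10000]:
    # 2000 replications of the sample mean X_n of n standard Cauchy draws
    means = rng.standard_cauchy(size=(2000, n)).mean(axis=1)
    # fraction with |X_n| > 1 stays near 0.5, since X_n is again standard Cauchy
    print(n, np.mean(np.abs(means) > 1.0))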
Although convergence in probability of $X_n$ to 0 may be established by considering characteristic functions, it is often more convenient to use the connection between probabilities and expected values provided by Markov's inequality (Theorem 1.14). Such a result is given in the following theorem; the proof is left as an exercise.
Theorem 11.8. Let $X_1, X_2, \ldots$ denote a sequence of real-valued random variables. If, for some $r > 0$,
$$\lim_{n \to \infty} E(|X_n|^r) = 0,$$
then $X_n \xrightarrow{p} 0$.
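A hedged sketch of why this holds (the full argument is the exercise): for any $\epsilon > 0$, applying Markov's inequality to the random variable $|X_n|^r$ gives
$$P(|X_n| \geq \epsilon) = P(|X_n|^r \geq \epsilon^r) \leq \frac{E(|X_n|^r)}{\epsilon^r} \to 0 \quad \text{as } n \to \infty,$$
so that $X_n \xrightarrow{p} 0$ by the definition of convergence in probability.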
Example 11.17 (Weak law of large numbers). Let $Y_n$, $n = 1, 2, \ldots$ denote a sequence of real-valued random variables such that $E(Y_n) = 0$, $n = 1, 2, \ldots$, $E(Y_n^2) = \sigma_n^2 < \infty$, $n = 1, 2, \ldots$, and $\mathrm{Cov}(Y_i, Y_j) = 0$ for all $i \neq j$.
Let
$$X_n = \frac{1}{n}(Y_1 + \cdots + Y_n), \quad n = 1, 2, \ldots.$$
Then
$$E(X_n^2) = \mathrm{Var}(X_n) = \frac{1}{n^2} \sum_{j=1}^{n} \sigma_j^2.$$
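The displayed variance formula can be checked numerically; the sketch below (not from the text) takes the $Y_j$ to be independent normal variables with the illustrative choice $\sigma_j^2 = j$, and compares the empirical variance of $X_n$ with $(1/n^2)\sum_{j=1}^{n} \sigma_j^2$.

import numpy as np

rng = np.random.default_rng(0)
n = 50
sigma2 = np.arange(1, n + 1, dtype=float)     # illustrative choice sigma_j^2 = j
# 200000 replications of (Y_1, ..., Y_n) with independent Y_j ~ N(0, sigma_j^2)
Y = rng.normal(0.0, np.sqrt(sigma2), size=(200000, n))
X = Y.mean(axis=1)                            # X_n = (Y_1 + ... + Y_n)/n
print(X.var())                                # empirical Var(X_n), about 0.51
print(sigma2.sum() / n**2)                    # (1/n^2) * sum_j sigma_j^2 = 0.51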

