Page 53 - Elements of Distribution Theory
2 Conditional Distributions and Expectation
2.1 Introduction
Consider an experiment with sample space Ω and let P denote a probability function on Ω,
so that a given event A ⊂ Ω has a probability P(A). Now suppose we are told that a certain
event B has occurred. This information affects our probabilities for all other events since
now we should only consider those sample points ω that are in B; hence, the probability
P(A) must be updated to the conditional probability P(A|B). From elementary probability
theory, we know that
P(A|B) = P(A ∩ B) / P(B),
provided that P(B) > 0.
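This identity is easy to verify numerically. The following sketch uses a fair six-sided die with events of my own choosing (not from the text): A = "the outcome is even" and B = "the outcome is at least 4"; exact rational arithmetic avoids floating-point noise.

```python
from fractions import Fraction

# Illustrative sample space and events (not from the text):
# a fair die, A = "even outcome", B = "outcome >= 4".
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {4, 5, 6}

def P(event):
    # Uniform probability on the finite sample space omega.
    return Fraction(len(event), len(omega))

# P(A | B) = P(A ∩ B) / P(B), defined since P(B) > 0.
p_A_given_B = P(A & B) / P(B)
print(p_A_given_B)  # 2/3: of the outcomes {4, 5, 6}, two ({4, 6}) are even
```

Given that the outcome is at least 4, two of the three equally likely remaining points are even, so the conditional probability is 2/3, in agreement with the formula.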
In a similar manner, we can consider conditional probabilities based on random variables.
Let (X, Y) denote a random vector. Then the conditional probability that X ∈ A given Y ∈ B
is given by
Pr(X ∈ A|Y ∈ B) = Pr(X ∈ A ∩ Y ∈ B) / Pr(Y ∈ B),
provided that Pr(Y ∈ B) > 0.
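For a discrete random vector the same computation can be carried out directly from the joint probability function. The joint probabilities below are purely illustrative (they do not come from the text); the point is only the mechanics of the formula.

```python
from fractions import Fraction

# A hypothetical joint distribution for (X, Y) on {0, 1} x {0, 1};
# the probabilities are illustrative and sum to 1.
joint = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(3, 8),
    (1, 0): Fraction(2, 8), (1, 1): Fraction(2, 8),
}

def pr(pred):
    # Probability that (X, Y) satisfies the given predicate.
    return sum(p for (x, y), p in joint.items() if pred(x, y))

A = {1}  # event for X
B = {1}  # event for Y
p_joint = pr(lambda x, y: x in A and y in B)  # Pr(X in A, Y in B)
p_B = pr(lambda x, y: y in B)                 # Pr(Y in B)
print(p_joint / p_B)  # Pr(X in A | Y in B) = (2/8) / (5/8) = 2/5
```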
In this chapter, we extend these ideas in order to define the conditional distribution and
conditional expectation of one random variable given another. Conditioning of this type
represents the introduction of additional information into a probability model and, thus,
plays a central role in many areas of statistics, including estimation theory, prediction, and
the analysis of models for dependent data.
2.2 Marginal Distributions and Independence
Consider a random vector of the form (X, Y), where each of X and Y may be a vector
and suppose that the range of (X, Y) is of the form X × Y, so that X ∈ X and Y ∈ Y. The
probability distribution of X when considered alone, called the marginal distribution of X,
is given by
Pr(X ∈ A) = Pr(X ∈ A, Y ∈ Y), A ⊂ X.
Let F denote the distribution function of (X, Y). Then
Pr(X ∈ A) = ∫_{A×Y} dF(x, y).
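In the discrete case this integral reduces to a sum: the marginal probability of X ∈ A is obtained by summing the joint probabilities over every value of Y. A minimal sketch, with an illustrative joint probability function of my own construction:

```python
from fractions import Fraction

# Hypothetical joint pmf for (X, Y) with X in {0, 1} and Y in {0, 1, 2};
# the probabilities are illustrative and sum to 1.
joint = {
    (0, 0): Fraction(1, 12), (0, 1): Fraction(2, 12), (0, 2): Fraction(3, 12),
    (1, 0): Fraction(2, 12), (1, 1): Fraction(1, 12), (1, 2): Fraction(3, 12),
}

def marginal_X(A):
    # Pr(X in A) = Pr(X in A, Y in Y): sum the joint probabilities over
    # the entire range of Y, keeping only the points with x in A.
    return sum(p for (x, y), p in joint.items() if x in A)

print(marginal_X({0}))  # 1/12 + 2/12 + 3/12 = 1/2
```

Summing over the whole range of X recovers total probability 1, since no information about Y is retained in the marginal distribution of X.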