Note that if events E and F are independent, then the Multiplication Rule
in Equation 2.6 becomes
$$P(E \cap F) = P(F)P(E),$$
which means that we simply multiply the individual probabilities for each
event together. This can be extended to k events to give
$$P(E_1 \cap E_2 \cap \cdots \cap E_k) = \prod_{i=1}^{k} P(E_i), \qquad (2.8)$$

where events $E_i$ and $E_j$ (for all $i$ and $j$, $i \neq j$) are independent.
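As a quick illustration, here is a minimal MATLAB sketch of Equation 2.8 for three independent events; the probability values are hypothetical, chosen only for the example.

   % For independent events, the joint probability is the product
   % of the individual probabilities (Equation 2.8).
   p = [0.9 0.8 0.95];    % hypothetical values of P(E_i)
   pAll = prod(p)         % P(E_1 and E_2 and E_3) = 0.684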
Bayes' Theorem
Sometimes we start an analysis with an initial degree of belief that an event
will occur. Later on, we might obtain some additional information about the
event that would change our belief about the probability that the event will
occur. The initial probability is called a prior probability. Using the new
information, we can update the prior probability using Bayes’ Theorem to
obtain the posterior probability.
The experiment of recording piston ring failure in compressors is an example of where Bayes' Theorem might be used, and we derive Bayes' Theorem using this example. Suppose our piston rings are purchased from two manufacturers: 60% from manufacturer A and 40% from manufacturer B.

Let $M_A$ denote the event that a part comes from manufacturer A, and $M_B$ represent the event that a piston ring comes from manufacturer B. If we select a part at random from our supply of piston rings, we would assign probabilities to these events as follows:
$$P(M_A) = 0.6, \qquad P(M_B) = 0.4.$$
These are our prior probabilities that the piston rings are from the individual
manufacturers.
Say we are interested in knowing the probability that a piston ring that subsequently failed came from manufacturer A. This would be the posterior probability that it came from manufacturer A, given that the piston ring failed. The additional information we have about the piston ring is that it failed, and we use this to update our degree of belief that it came from manufacturer A.
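This updating can be sketched in MATLAB. The priors below are from the text; the conditional failure rates $P(F \mid M_A)$ and $P(F \mid M_B)$ are hypothetical placeholders, since this excerpt does not give the actual rates.

   % Posterior probability that a failed piston ring came from
   % manufacturer A, computed with Bayes' Theorem.
   pMA  = 0.6;     % P(M_A), prior from the text
   pMB  = 0.4;     % P(M_B), prior from the text
   pF_A = 0.05;    % P(F | M_A) -- hypothetical failure rate
   pF_B = 0.10;    % P(F | M_B) -- hypothetical failure rate
   pF   = pF_A*pMA + pF_B*pMB;   % P(F), total probability of failure
   pMA_F = pF_A*pMA/pF           % P(M_A | F), the posterior

With these placeholder rates the posterior is $P(M_A \mid F) = 0.03/0.07 \approx 0.429$, which is smaller than the prior of 0.6 because rings from manufacturer B are assumed to fail more often.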