where r(F_Θ, φ) is the Bayes risk function evaluated with the prior distribution of the parameter Θ and decision rule φ.

Bayes risk function   with respect to a prior distribution of a parameter Θ and a decision rule φ, the expected value of the loss function with respect to the prior distribution of the parameter and the observation X:

r(F_\Theta, \phi) = \int_\Theta \int_X L[\theta, \phi(x)] \, f_{X|\Theta}(x|\theta) \, f_\Theta(\theta) \, dx \, d\theta.

The loss function L[θ, φ(x)] is the penalty incurred for estimating the parameter Θ incorrectly. The decision rule φ(x) is the estimated value of the parameter based on the measured observation x.
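The double integral can be checked numerically by Monte Carlo: draw θ from the prior, draw x from f_{X|Θ}, apply the rule, and average the loss. A minimal sketch in Python, assuming (these choices are not part of the entry) a standard Gaussian prior, Gaussian observation noise, squared-error loss, and the posterior-mean decision rule:

    import numpy as np

    # Monte Carlo approximation of r(F_theta, phi) = E[ L(theta, phi(X)) ].
    # Assumed model: theta ~ N(0, 1), X | theta ~ N(theta, sigma^2),
    # loss L(theta, d) = (theta - d)^2, rule phi(x) = x / (1 + sigma^2).
    rng = np.random.default_rng(0)
    sigma = 0.5
    n = 1_000_000

    theta = rng.normal(0.0, 1.0, size=n)   # samples from the prior f_theta
    x = rng.normal(theta, sigma)           # samples from f_{X|theta}
    phi = x / (1.0 + sigma**2)             # decision rule applied to each x
    risk = np.mean((theta - phi) ** 2)     # sample mean approximates the double integral

    # Exact Bayes risk for this Gaussian model: sigma^2 / (1 + sigma^2)
    print(risk, sigma**2 / (1 + sigma**2))

For this choice φ is the posterior mean, so the printed estimate matches the closed-form minimum risk.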
Bayes' rule   Bayes' rule relates the conditional probability of an event A given B and the conditional probability of the event B given A:

P(A|B) = \frac{P(B|A) \, P(A)}{P(B)}.
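A short numeric illustration, with all probabilities assumed for the example: let A be "signal present" and B be "detector fired", with P(A) = 0.01, P(B|A) = 0.95, and P(B|not A) = 0.05.

    # Bayes' rule with assumed numbers: P(A|B) = P(B|A)P(A) / P(B).
    p_a = 0.01              # P(A), the prior
    p_b_given_a = 0.95      # P(B|A)
    p_b_given_not_a = 0.05  # P(B|not A)

    # Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

    p_a_given_b = p_b_given_a * p_a / p_b
    print(p_a_given_b)  # ~0.161

Even with a reliable detector, the small prior P(A) keeps the a posteriori probability P(A|B) modest.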
Bayesian classifier   a Bayesian classifier is a function of a realization of an observed random vector X and returns a classification w. The set of possible classes is finite. A Bayesian classifier requires the conditional distribution function of X given w and the prior probabilities of each class. A Bayesian classifier returns the w_i such that P(w_i|X) is maximized. By Bayes' rule,

P(w_i|X) = \frac{P(X|w_i) \, P(w_i)}{P(X)}.
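A minimal sketch of such a classifier in Python, assuming scalar observations and Gaussian class-conditional densities with made-up means, variances, and priors (the entry itself fixes none of these):

    import numpy as np

    def gaussian_pdf(x, mean, std):
        # Gaussian class-conditional density f(x | w_i)
        return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

    # (label, prior P(w_i), mean, std) -- illustrative values only
    classes = [
        ("w1", 0.7, 0.0, 1.0),
        ("w2", 0.3, 3.0, 1.0),
    ]

    def classify(x):
        # Return the w_i maximizing f(x | w_i) * P(w_i); the common
        # factor P(X) is dropped, as the entry notes.
        return max(classes, key=lambda c: gaussian_pdf(x, c[2], c[3]) * c[1])[0]

    print(classify(0.5))  # -> w1
    print(classify(2.8))  # -> w2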
Since P(X) is the same for all classes, it can be ignored and the w_i that maximizes P(X|w_i)P(w_i) is returned as the classification.

Bayesian detector   a detector that minimizes the average of the false-alarm and miss probabilities, weighted with respect to the prior probabilities of the signal-absent and signal-present conditions.
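For a known signal in Gaussian noise (an assumed model, used here only for illustration), this criterion reduces to a likelihood-ratio test whose threshold depends on the priors:

    import numpy as np

    # Assumed model: H0 (signal absent):  x ~ N(0, sigma^2)
    #                H1 (signal present): x ~ N(s, sigma^2)
    # Declaring H1 when p1 * f(x|H1) > p0 * f(x|H0) minimizes
    # p0 * P(false alarm) + p1 * P(miss).
    def detect(x, s=1.0, sigma=1.0, p1=0.5):
        p0 = 1.0 - p1
        # Log-likelihood ratio for the Gaussian shift model
        llr = (s * x - 0.5 * s**2) / sigma**2
        return llr > np.log(p0 / p1)  # True -> declare signal present

    print(detect(0.2))          # False with equal priors
    print(detect(0.2, p1=0.9))  # a strong signal-present prior flips it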
Bayesian estimation   an estimation scheme in which the parameter to be estimated is modeled as a random variable with known probability density function. See Bayesian estimator.

Bayesian estimator   an estimator of a given parameter Θ, where it is assumed that Θ has a known distribution function and that a related random variable X, called the observation, is available. X and Θ are related by a conditional distribution function of X given Θ. With P(X|Θ) and P(Θ) known, an estimate of Θ is made based on an observation of X. P(Θ) is known as the a priori distribution of Θ.

Bayesian mean square estimator   for a random variable X and an observation Y, the random variable

\hat{X} = E[X|Y],

where the joint density function f_{XY}(x, y) is known. See also mean-square estimation, linear least squares estimator.
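When X and Y are jointly Gaussian (an assumption made for this sketch), the conditional mean E[X|Y] is linear in Y and can be estimated from samples:

    import numpy as np

    # Sketch of X_hat = E[X | Y] for a jointly Gaussian pair, where
    # E[X | Y=y] = mu_x + (cov_xy / var_y) * (y - mu_y).
    rng = np.random.default_rng(1)
    n = 500_000
    x = rng.normal(0.0, 1.0, size=n)       # quantity to estimate
    y = x + rng.normal(0.0, 0.5, size=n)   # noisy observation of x

    mu_x, mu_y = x.mean(), y.mean()
    cov_xy = np.cov(x, y)[0, 1]
    var_y = y.var()

    def x_hat(y_obs):
        return mu_x + (cov_xy / var_y) * (y_obs - mu_y)

    # Exact value of E[X | Y = 1.0] in this model: 1 / (1 + 0.25) = 0.8
    print(x_hat(1.0))

In the jointly Gaussian case this coincides with the linear least squares estimator.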
Bayesian reconstruction   an algorithm in which an image u is to be reconstructed from a noise-corrupted and blurred version v,

v = f(Hu) + \eta.

A prior distribution p(u|v) of the original image is assumed to be known. The reconstruction satisfies

\hat{u} = \mu_u + R_u H^T D R_\eta^{-1} [v - f(H\hat{u})],

where R_u is the covariance of the image u, R_η is the covariance of the noise η, and D is the diagonal matrix of partial derivatives of f evaluated at û. An initial point is chosen and a gradient descent algorithm is used to find the closest û that minimizes the error. Simulated annealing is often used to avoid local minima.
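A minimal numeric sketch of the reconstruction equation, assuming the simplest case where f is the identity (so D = I) and using small made-up matrices; the fixed point can then be obtained by solving a linear system directly rather than by the gradient descent the entry describes for nonlinear f:

    import numpy as np

    # u_hat = mu_u + R_u H^T R_eta^{-1} (v - H u_hat)   (f = identity, D = I)
    # rearranged to (I + A H) u_hat = mu_u + A v, with A = R_u H^T R_eta^{-1}.
    rng = np.random.default_rng(2)
    n = 4
    H = 0.9 * np.eye(n) + 0.1 * np.eye(n, k=1)  # assumed blur matrix
    R_u = np.eye(n)                             # image covariance (assumed)
    R_eta = 0.1 * np.eye(n)                     # noise covariance (assumed)
    mu_u = np.zeros(n)

    u_true = rng.normal(size=n)
    v = H @ u_true + rng.normal(scale=np.sqrt(0.1), size=n)  # v = Hu + eta

    A = R_u @ H.T @ np.linalg.inv(R_eta)
    u_hat = np.linalg.solve(np.eye(n) + A @ H, mu_u + A @ v)
    print(np.round(u_true, 3))
    print(np.round(u_hat, 3))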
Bayesian theory   theory based on Bayes' rule, which allows one to relate the a priori and a posteriori probabilities. If P(c_i) is the a priori probability that a pattern belongs to class c_i,