Substituting $\bar{x}_{n,k}$ for $x_{n,k}$ and setting the derivative with respect to $\pi_k$ to zero yields:

$$\sum_{n=1}^{N_S} \bar{x}_{n,k} = \lambda \pi_k \qquad (7.25)$$
Summing equation (7.25) over all clusters, we get:
$$\sum_{k=1}^{K} \sum_{n=1}^{N_S} \bar{x}_{n,k} = \lambda \sum_{k=1}^{K} \pi_k = \lambda \qquad (7.26)$$
Further note that $\sum_{k=1}^{K} \sum_{n=1}^{N_S} \bar{x}_{n,k}$ equals the total number of objects $N_S$, since for each object $n$ the expectations $\bar{x}_{n,k}$ sum to one over the $K$ clusters. It follows that $\lambda = N_S$. By substituting this result back into (7.25), the update rule for $\pi_k$ becomes:
$$\hat{\pi}_k = \frac{1}{N_S} \sum_{n=1}^{N_S} \bar{x}_{n,k} \qquad (7.27)$$
Note that with (7.27), the determination of $\boldsymbol{\mu}_k$ and $\mathbf{C}_k$ in (7.21) and (7.23) can be simplified to:
$$\hat{\boldsymbol{\mu}}_k = \frac{1}{N_S \hat{\pi}_k} \sum_{n=1}^{N_S} \bar{x}_{n,k}\, \mathbf{z}_n$$

$$\hat{\mathbf{C}}_k = \frac{1}{N_S \hat{\pi}_k} \sum_{n=1}^{N_S} \bar{x}_{n,k} \left(\mathbf{z}_n - \hat{\boldsymbol{\mu}}_k\right)\left(\mathbf{z}_n - \hat{\boldsymbol{\mu}}_k\right)^{\mathrm{T}} \qquad (7.28)$$
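As an illustration of these update rules, a minimal MATLAB sketch of the resulting M step might look as follows. The variable names are assumptions made for this sketch, not taken from the book's listings: Z holds the data as an $N_S \times D$ matrix with one object $\mathbf{z}_n$ per row, and Xbar is the $N_S \times K$ matrix of expectations $\bar{x}_{n,k}$.

% M step: update pi_k, mu_k and C_k from the expectations xbar_{n,k}.
% Z    : NS x D data matrix, one object z_n per row (assumed layout)
% Xbar : NS x K matrix with Xbar(n,k) = xbar_{n,k}
[NS, D] = size(Z);
K = size(Xbar, 2);
pik = sum(Xbar, 1) / NS;                  % mixing weights, (7.27)
mu = zeros(K, D);
C = zeros(D, D, K);
for k = 1:K
    wk = Xbar(:, k);                      % weights xbar_{n,k} for cluster k
    mu(k, :) = (wk' * Z) / (NS * pik(k)); % weighted mean, (7.28)
    Zc = Z - repmat(mu(k, :), NS, 1);     % data centred on mu_k
    C(:, :, k) = (Zc' * (Zc .* repmat(wk, 1, D))) / (NS * pik(k)); % covariance, (7.28)
end

Since $N_S \hat{\pi}_k = \sum_n \bar{x}_{n,k}$, the divisions above simply normalize each weighted sum by the total weight assigned to cluster $k$.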
The complete EM algorithm for fitting a mixture of Gaussians model to a data set is as follows.
Algorithm 7.4: EM algorithm for estimating a mixture of Gaussians
Input: The number K of mixing components and the data z n .
1. Initialization: Select randomly (or heuristically) $\Psi^{(0)}$. Set $i = 0$.
2. Expectation step (E step): Using the observed data set $\mathbf{z}_n$ and the estimated parameters $\Psi^{(i)}$, calculate the expectations $\bar{x}_n$ of the missing values $x_n$ using (7.19).
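To make this step concrete, here is a minimal MATLAB sketch of such an E step. It assumes that (7.19) is the usual posterior expectation $\bar{x}_{n,k} \propto \pi_k N(\mathbf{z}_n; \boldsymbol{\mu}_k, \mathbf{C}_k)$, normalized over the clusters; the variable names follow the M-step sketch above.

% E step: expectations xbar_{n,k} of the missing indicator variables,
% assuming (7.19) is the posterior pik(k)*N(z_n; mu_k, C_k), normalized over k.
[NS, D] = size(Z);
K = numel(pik);
Xbar = zeros(NS, K);
for k = 1:K
    Zc = Z - repmat(mu(k, :), NS, 1);          % data centred on mu_k
    Ck = C(:, :, k);
    expo = -0.5 * sum((Zc / Ck) .* Zc, 2);     % Mahalanobis terms for all objects
    Xbar(:, k) = pik(k) * exp(expo) / sqrt((2 * pi)^D * det(Ck));
end
Xbar = Xbar ./ repmat(sum(Xbar, 2), 1, K);     % each row now sums to one

In practice the unnormalized densities can underflow for high-dimensional data; working with log-densities before normalizing is the common remedy.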