1.3. Communication Channel
The average mutual information can therefore be written as
I(X; Y) = \sum_{i=1}^{2T\Delta\nu} H(b_i) - H(n)
        \le \sum_{i=1}^{2T\Delta\nu} \frac{1}{2} \log_2\!\left(2\pi e\,\sigma_i^2\right) - T\Delta\nu \log_2\!\left(2\pi e\,N_0\Delta\nu\right),   (1.61)
where n denotes the product ensemble of the noise samples. Since a_i and n_i are
statistically independent, the variance of p(b_i) is given by

\sigma_i^2 = \sigma_{a_i}^2 + N_0\Delta\nu.
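The inequality in Eq. (1.61) rests on a standard maximum-entropy property, spelled out here for convenience: among all probability densities with a fixed variance, the zero-mean Gaussian maximizes the differential entropy, so that

H(b_i) \le \frac{1}{2} \log_2\!\left(2\pi e\,\sigma_i^2\right),

with equality if and only if p(b_i) is Gaussianly distributed.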
In view of Eq. (1.56), we see that
\frac{1}{2T\Delta\nu} \sum_{i=1}^{2T\Delta\nu} \sigma_i^2 = \frac{1}{2T\Delta\nu} \sum_{i=1}^{2T\Delta\nu} \sigma_{a_i}^2 + N_0\Delta\nu \le S + N,   (1.62)
where N = N_0\Delta\nu. The equality in Eq. (1.62) holds when the input probability
density distribution p(a) is also Gaussianly distributed with zero mean and a
variance equal to S. Furthermore, from Eq. (1.62), we can write

I(X; Y) \le T\Delta\nu \log_2\!\left(1 + \frac{S}{N}\right),

where the equality holds if and only if the \sigma_i^2 are all equal and p(a) is
Gaussianly distributed with zero mean and a variance equal to S.
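To make the step from Eqs. (1.61) and (1.62) to this bound explicit (a standard argument, included here for clarity): by the concavity of the logarithm, the sum \sum_i \frac{1}{2}\log_2(2\pi e\,\sigma_i^2) is maximized, subject to the average-variance constraint of Eq. (1.62), when all the \sigma_i^2 are equal to S + N, so that

I(X; Y) \le T\Delta\nu \log_2\!\left[2\pi e\,(S + N)\right] - T\Delta\nu \log_2\!\left(2\pi e\,N\right) = T\Delta\nu \log_2\!\left(1 + \frac{S}{N}\right).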
Therefore, the corresponding channel capacity can be written as
C = \max \frac{I(X; Y)}{T} = \Delta\nu \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits/sec},   (1.63)
where S/N is the signal-to-noise ratio. We note that the preceding result is one
of the most popular equations in communication theory, derived by Shannon [1.3]
and independently by Wiener [1.5] for a memoryless additive Gaussian channel.
Because of its conceptual and mathematical simplicity, this equation has been
widely used in practice and has also occasionally been misused. We note that this
channel capacity is derived under the assumption of additive white Gaussian
noise, with the average input signal power not exceeding a specified value S. We
further stress that the channel capacity equation is obtained under the
assumption that the input signal is also Gaussianly distributed with zero mean.
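As a quick numerical illustration of Eq. (1.63), the following minimal Python sketch computes the capacity and checks the per-degree-of-freedom mutual information (1/2) log_2(1 + S/N) by Monte Carlo simulation of the additive Gaussian channel b_i = a_i + n_i. The function name and all numerical values below are ours, chosen for illustration only; they do not come from the text.

import numpy as np

def shannon_capacity(bandwidth_hz, snr):
    # Eq. (1.63): C = (delta nu) log2(1 + S/N), in bits/sec.
    return bandwidth_hz * np.log2(1.0 + snr)

# Hypothetical example: 1-MHz bandwidth at 20-dB SNR (S/N = 100).
print(f"C = {shannon_capacity(1.0e6, 100.0):.3e} bits/sec")

# Monte Carlo check of the per-sample mutual information.
# Draw zero-mean Gaussian inputs a_i (variance S) and independent
# Gaussian noise n_i (variance N); b_i = a_i + n_i is then Gaussian,
# so I = H(b) - H(n) = (1/2) log2(var(b) / var(n)) per sample.
rng = np.random.default_rng(0)
S, N = 4.0, 1.0
a = rng.normal(0.0, np.sqrt(S), size=1_000_000)
n = rng.normal(0.0, np.sqrt(N), size=1_000_000)
b = a + n
i_mc = 0.5 * np.log2(b.var() / n.var())
print(f"estimated I per sample    = {i_mc:.4f} bits")
print(f"(1/2) log2(1 + S/N)       = {0.5 * np.log2(1.0 + S / N):.4f} bits")

With S = 4 and N = 1, the estimate should fall close to (1/2) log_2 5, about 1.16 bits per sample; multiplying by the 2T(delta nu) degrees of freedom transmitted in T seconds recovers the capacity of Eq. (1.63).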