The quantizer reduces the accuracy of the mapper's output, according to
some fidelity criterion, in an attempt to reduce psychovisual redundancy. This
is a many-to-one mapping and is, therefore, irreversible.
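To make the many-to-one nature of quantization concrete, the following minimal sketch (in Python, with an arbitrary step size of 8 chosen purely for illustration) shows several distinct inputs collapsing onto the same quantizer index, so the exact inputs cannot be recovered:

```python
# A minimal sketch of a uniform quantizer; the step size and the
# sample values are illustrative assumptions, not from the text.
def quantize(x, step=8):
    # Many-to-one: every value in [k*step, (k+1)*step) maps to index k.
    return x // step

def dequantize(index, step=8):
    # Reconstruct at the centre of the quantization interval.
    return index * step + step // 2

for x in (50, 53, 55):
    q = quantize(x)
    print(x, "->", q, "->", dequantize(q))
# 50, 53 and 55 all map to index 6 and all reconstruct to 52:
# the mapping is irreversible, which is the source of the loss.
```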
The symbol encoder (or codeword assigner) assigns a codeword, a string
of binary digits (bits), to each symbol at the output of the quantizer. The code must
be designed to reduce coding redundancy. This operation is reversible.
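As an illustration of codeword assignment, the sketch below builds a Huffman code, one standard way of reducing coding redundancy by giving shorter codewords to more probable symbols. The alphabet and probabilities are assumptions chosen for the example, not taken from the text:

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code for a {symbol: probability} dict."""
    # Each heap entry: (probability, tie-breaker, {symbol: codeword so far}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # least probable subtree
        p1, _, c1 = heapq.heappop(heap)   # next least probable subtree
        # Merge: prefix '0' to one subtree's codewords, '1' to the other's.
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

# Illustrative probabilities (an assumption for this example).
code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
print(code)  # e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Because the resulting code is prefix-free, a decoder can invert the mapping exactly, which is why this stage of the coder is reversible.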
In general, compression methods can be classified into lossless methods
and lossy methods. In lossless methods the reconstructed (compressed-
decompressed) data is identical to the original data. This means that such
methods do not employ a quantizer. Lossless methods are also known as bit-
preserving or reversible methods. In lossy methods the reconstructed data is
not identical to the original data; that is, there is loss of information due to the
quantization process. Such methods are therefore irreversible, and they usually
achieve higher compression than lossless methods.
2.5.3 Elements of Information Theory
A source $S$ with an alphabet $A$ can be defined as a discrete random
process $S = S_1, S_2, \ldots,$ where each random variable $S_i$ takes a value from the
alphabet $A$.
In a discrete memoryless source (DMS) the successive symbols of the
source are statistically independent. Such a source can be completely
defined by its alphabet $A = \{a_1, a_2, \ldots, a_N\}$ and the associated probabilities
$P = \{p(a_1), p(a_2), \ldots, p(a_N)\}$, where $\sum_{i=1}^{N} p(a_i) = 1$. According to information theory, the information $I$ contained in a symbol $a_i$ is given by
$$I(a_i) = \log_2 \frac{1}{p(a_i)} = -\log_2 p(a_i) \quad \text{(bits)}, \tag{2.3}$$
and the average information per source symbol $H(S)$, also known as the
entropy of the source, is given by
$$H(S) = \sum_{i=1}^{N} p(a_i)\, I(a_i) = -\sum_{i=1}^{N} p(a_i) \log_2 p(a_i) \quad \text{(bits/symbol)}. \tag{2.4}$$
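As a quick numerical check of Equation (2.4), this sketch computes the entropy of an illustrative four-symbol DMS (the probabilities are assumptions chosen for the example):

```python
import math

def entropy(probs):
    """Entropy H(S) of a DMS, in bits/symbol (Equation 2.4)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative source: four symbols with the probabilities below.
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
```

For these probabilities the entropy is 1.75 bits/symbol, which equals the average codeword length of the Huffman code built earlier; no lossless symbol code can do better than the entropy on average.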
A more realistic approach is to model sources using Markov-$K$ random
processes. In this case the probability of occurrence of a symbol depends on
the values of the $K$ preceding symbols. Thus, a Markov-$K$ source can be
specified by the conditional probabilities $p(S_j = a_i \mid S_{j-1}, \ldots, S_{j-K})$, for all $j$,
$a_i \in A$. In this case, the entropy is given by
$$H(S) = \sum_{S^K} p(S_{j-1}, \ldots, S_{j-K})\, H(S \mid S_{j-1}, \ldots, S_{j-K}), \tag{2.5}$$
where the sum is over all possible realizations $S^K$ of the $K$ preceding symbols.
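To make Equation (2.5) concrete, the following sketch evaluates it for $K = 1$ using a hypothetical two-symbol source; the transition probabilities and stationary distribution are assumptions chosen for illustration:

```python
import math

def markov1_entropy(stationary, transition):
    """Entropy of a Markov-1 source (Equation 2.5 with K = 1):
    H(S) = sum over states s of p(s) * H(S | previous symbol = s)."""
    h = 0.0
    for s, p_s in enumerate(stationary):
        # Conditional entropy of the next symbol given the previous one.
        h_cond = -sum(p * math.log2(p) for p in transition[s] if p > 0)
        h += p_s * h_cond
    return h

# Hypothetical binary source whose symbols tend to repeat.
transition = [[0.9, 0.1],   # p(next symbol | previous = 0)
              [0.1, 0.9]]   # p(next symbol | previous = 1)
stationary = [0.5, 0.5]     # stationary distribution of this chain
print(markov1_entropy(stationary, transition))  # ~0.469 bits/symbol
```

Treating the same source as memoryless (marginal probabilities 0.5 and 0.5) would give 1 bit/symbol, whereas the Markov-1 model yields about 0.47 bits/symbol: modeling the dependence between successive symbols lowers the entropy, and this statistical dependence is precisely the redundancy that predictive coding exploits.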