


            the bitstream, the decoder discards all bits until the next resynchronization
            codeword, where synchronization is reestablished and the decoder resumes its
            decoding process. The discarded bits may well be correctly received but cannot
            be decoded correctly due to loss of synchronization. In the case of RVLCs,
            when the decoder identifies the next resynchronization codeword, instead of
            discarding all preceding bits, the decoder starts decoding in the reverse
            direction to recover and utilize some of those bits. This is illustrated in
            Figure 9.3(b).
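               To make the two-sided decoding concrete, the following Python sketch
            decodes a corrupted segment both forward and backward using a hypothetical
            palindromic (and hence reversible) codeword table; the table, the symbol
            names, the single-bit error model, and the stop-on-mismatch rule are all
            simplifications for illustration and do not correspond to any standard's
            actual RVLC.

            # Hypothetical symmetric RVLC table: every codeword is a palindrome, so the
            # same table is prefix-free when read in either direction.
            RVLC = {"0": "A", "11": "B", "101": "C", "1001": "D", "10001": "E"}
            MAX_LEN = max(len(c) for c in RVLC)

            def encode(symbols):
                """Concatenate the codewords of a list of symbols."""
                return "".join(next(c for c, s in RVLC.items() if s == sym)
                               for sym in symbols)

            def decode_forward(bits):
                """Decode from the left until no codeword matches (error detected)."""
                out, i = [], 0
                while i < len(bits):
                    matched = False
                    for ln in range(1, min(MAX_LEN, len(bits) - i) + 1):
                        if bits[i:i + ln] in RVLC:
                            out.append(RVLC[bits[i:i + ln]])
                            i += ln
                            matched = True
                            break
                    if not matched:
                        break                  # stuck: decoding error detected at i
                return out, i

            def decode_backward(bits):
                """Decode from the right; with a palindromic table the reversed
                codeword reads the same, so the same lookup can be reused."""
                out, j = [], len(bits)
                while j > 0:
                    matched = False
                    for ln in range(1, min(MAX_LEN, j) + 1):
                        if bits[j - ln:j] in RVLC:
                            out.append(RVLC[bits[j - ln:j]])
                            j -= ln
                            matched = True
                            break
                    if not matched:
                        break                  # stuck: error region ends near j
                return list(reversed(out)), j

            segment   = encode(["A", "B", "C", "D", "E"])   # bits between two markers
            corrupted = segment[:4] + ("1" if segment[4] == "0" else "0") + segment[5:]

            fwd, stop_f = decode_forward(corrupted)    # symbols recovered from the left
            bwd, stop_b = decode_backward(corrupted)   # symbols recovered from the right
            print("forward :", fwd, "stuck at bit", stop_f)
            print("backward:", bwd, "stuck at bit", stop_b)

            Because variable-length decoding errors are often detected late, symbols
            decoded close to the corrupted bit may still be wrong in either direction;
            practical decoders therefore apply additional consistency checks (for
            example, on the number of decoded symbols) before deciding which of the
            recovered bits to keep.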
               Reversible variable-length coding has been adopted in most recent
            standardization efforts. For example, the modified unrestricted motion vector
            mode (modified annex D) of H.263+ uses RVLC to encode motion vector
            differences, the data partitioned slice mode (annex V) of H.263++ uses RVLC
            to encode header and motion information, and MPEG-4 uses RVLC to encode
            texture information.


            9.6.4  Layered Coding with Prioritization
            In layered coding, video is encoded into a base layer and one or more enhance-
            ment layers. The base layer is separately decodable and provides a basic level
            of perceived quality. The enhancement layers can be decoded to incrementally
            improve this quality.
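               As a rough illustration of the base/enhancement split, the Python sketch
            below layers a block of sample values into a coarsely quantized base and a
            quantized residual, in the spirit of SNR scalability; the sample values and
            quantizer step sizes are invented, and a real codec would layer transform
            coefficients rather than raw samples.

            def quantize(samples, step):
                return [round(s / step) for s in samples]

            def dequantize(indices, step):
                return [i * step for i in indices]

            block = [107, 103, 98, 90, 81, 75, 70, 66]     # hypothetical sample values

            # Base layer: coarse quantization gives a basic, self-contained quality.
            base_idx = quantize(block, step=16)
            base_rec = dequantize(base_idx, step=16)

            # Enhancement layer: quantized residual between the original and the base.
            residual = [o - b for o, b in zip(block, base_rec)]
            enh_idx  = quantize(residual, step=4)
            enh_rec  = dequantize(enh_idx, step=4)

            refined = [b + e for b, e in zip(base_rec, enh_rec)]
            print("original   :", block)
            print("base only  :", base_rec)      # decodable on its own
            print("base + enh.:", refined)       # enhancement refines the base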
               Layered coding can be useful when applied over heterogeneous networks
            with varying bandwidth capacity. However, to be used as an error-resilience
            tool,  layered  coding  must  be  combined  with  prioritized  transmission  or  what
            is  commonly  known  as  unequal  error  protection.  In  this  case,  the  base  layer
            is  transmitted  with  higher  priority  or  a  higher  degree  of  error  protection.  For
            example, in Ref. 186 Ghanbari introduced the concept of layered coding with
            prioritized  transmission  to  increase  the  robustness  of  video  against  cell  loss
            in  ATM  networks.  In  this  technique,  the  encoder  generates  two  bitstreams.
            The  base-layer  bitstream  contains  the  most  vital  video  information,  whereas
            the  enhancement-layer  bitstream  contains  residual  information  to  improve  the
            quality of the base layer. The base layer is then transmitted using high-priority
            ATM  cells,  whereas  the  enhancement  layer  is  transmitted  using  low-priority
            cells. When traffic congestion occurs, low-priority cells are discarded first.
            Another  example  is  the  power  control  method  proposed  in  Ref.  187.  In  this
            method,  when  video  is  transmitted  over  a  wireless  network,  more  power  is
            used  to  transmit  the  base  layer,  whereas  less  power  is  used  to  transmit  the
            enhancement layers.
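               The Python sketch below imitates this two-priority idea: base-layer
            packets are tagged with high priority and always delivered, while low-priority
            enhancement packets may be dropped under congestion. The packet size, the
            drop model, and the field names are assumptions made for illustration rather
            than the actual ATM or power-control mechanisms.

            import random

            def packetize(layer, bits, priority, payload_bits=48 * 8):
                """Split a layer's bitstream into fixed-size packets tagged with a priority."""
                return [{"layer": layer, "prio": priority, "bits": bits[i:i + payload_bits]}
                        for i in range(0, len(bits), payload_bits)]

            def congested_channel(packets, drop_fraction=0.3, seed=1):
                """Toy congestion model: low-priority packets are discarded first."""
                rng = random.Random(seed)
                return [p for p in packets
                        if p["prio"] == "high" or rng.random() >= drop_fraction]

            base_bits = "0" * 4000      # stand-in for the base-layer bitstream
            enh_bits  = "1" * 12000     # stand-in for the enhancement-layer bitstream

            sent = packetize("base", base_bits, "high") + packetize("enh", enh_bits, "low")
            received = congested_channel(sent)

            print("base packets received:", sum(p["layer"] == "base" for p in received),
                  "of", len(packetize("base", base_bits, "high")))
            print("enh. packets received:", sum(p["layer"] == "enh" for p in received),
                  "of", len(packetize("enh", enh_bits, "low")))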
               There are many ways to encode video into more than one layer. For
            example, the base layer can contain a low-frame-rate version of the video,
            whereas the enhancement layers can contain the frames needed to increase the
            frame rate. This is usually referred to as temporal scalability. Another
            method is when the base