

          3.4 Data Compression
               Compression of remotely sensed data is becoming an increasingly
               important issue in digital image processing in light of the emergence of
                hyperspatial and hyperspectral resolution data that routinely run to
                hundreds of megabytes or more. These bit-mapped images require
                an enormous amount of storage space. For instance, even a small
                16-bit Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)
                scene of 512 by 512 pixels at 224 spectral bands requires over
                117 MB of storage.
               Processing of this small scene requires correspondingly large swap and
               temporary spaces for intermediate results. On the other hand, data
               redundancy in the form of repeated occurrence of the same pixel values
               (e.g., extensive distribution of the same cover on the Earth’s surface) is
                rife in some satellite images. This redundancy can be exploited to
                reduce the data volume via data compression, defined as “the process
               of reducing the amount of data required to represent a given quantity
               of information” by Gonzalez and Woods (2002). Data compression not
               only reduces the amount of data that have to be stored and transferred,
               but also speeds up the processing, thus saving time and cost. Data
                compression techniques fall into two broad categories: those that do
                not result in any loss of information (i.e., error-free) and those that result
               in partial loss of information. Error-free compression, also known as
               lossless compression, is essential when the compressed image data
               have to be restored to their original state without any loss of information.
                Typically, a compression ratio, defined as the ratio of the number of
                information-carrying units in the raw data to that in the compressed
                data, of 2 to 10 can be expected. There are a number of error-free
               compression techniques, including variable-length coding, run-length
               coding, and lossless predictive coding.
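
                As a rough check on these figures, the raw size of such a scene
                follows directly from its dimensions: 512 × 512 pixels × 224 bands
                × 2 bytes per pixel. The short Python sketch below computes this
                size, along with the compressed size implied by an assumed 4:1
                compression ratio (an illustrative value within the 2 to 10 range
                cited above, not a property of AVIRIS data):

                    # Raw storage needed: rows x cols x bands x bytes per pixel.
                    rows, cols, bands = 512, 512, 224   # small AVIRIS scene from the text
                    bytes_per_pixel = 2                 # 16-bit samples

                    raw_bytes = rows * cols * bands * bytes_per_pixel
                    print(f"Raw size: {raw_bytes / 1e6:.1f} MB")    # ~117.4 MB

                    # Compression ratio = raw size / compressed size (Gonzalez and Woods, 2002).
                    ratio = 4                           # assumed, illustrative value
                    print(f"Size at {ratio}:1 compression: {raw_bytes / ratio / 1e6:.1f} MB")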

               3.4.1 Variable-Length Coding
                The simplest approach toward data reduction is to reduce coding
                redundancy. This is achieved by constructing a variable-length code
                that assigns the shortest code words to the most probable pixel
                values in the input data or in the result of a gray-level mapping
                operation (e.g., pixel differences, run lengths, and so on). A good
                example of variable-length coding is Huffman coding. As the most
                popular technique, Huffman coding yields the smallest possible
                number of code symbols per source symbol of any variable-length
                coding method. It involves three steps:
                     •  First, all possible pixel values in the input image are
                        identified and their probabilities of occurrence calculated.
                        These probabilities are then sorted in descending order. The
                        two lowest probabilities are combined recursively to form a
                        “compound” value that replaces them in the next round of
                        probability calculation. This process is iterated until only
                        two probabilities are left (see the Python sketch after this
                        list).
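
                The following Python sketch is a minimal, illustrative
                implementation of this probability-merging (source-reduction)
                process; the example pixel values and their frequencies are
                assumed for demonstration. It uses a heap to combine the two
                lowest probabilities repeatedly, extending each code word by one
                bit at every merge:

                    import heapq
                    from collections import Counter

                    def huffman_code(pixels):
                        """Build a variable-length (Huffman) code table for pixel values."""
                        freq = Counter(pixels)
                        total = sum(freq.values())
                        # Heap entry: (probability, tie-breaker, {pixel value: code so far}).
                        heap = [(n / total, i, {v: ""}) for i, (v, n) in enumerate(freq.items())]
                        heapq.heapify(heap)
                        tie = len(heap)
                        while len(heap) > 1:
                            # Combine the two lowest probabilities into a "compound" symbol ...
                            p1, _, c1 = heapq.heappop(heap)
                            p2, _, c2 = heapq.heappop(heap)
                            # ... prefixing one more bit onto every code word in each branch.
                            merged = {v: "0" + code for v, code in c1.items()}
                            merged.update({v: "1" + code for v, code in c2.items()})
                            heapq.heappush(heap, (p1 + p2, tie, merged))
                            tie += 1
                        return heap[0][2]

                    # Assumed example: value 120 dominates, so it gets the shortest code word.
                    image = [120] * 8 + [125] * 4 + [130] * 2 + [135]
                    for value, code in sorted(huffman_code(image).items()):
                        print(value, code)

                Running the sketch shows the most frequent value receiving a
                one-bit code word and the rarest values the longest ones, which is
                precisely the coding-redundancy reduction described above.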