Page 263 - Digital Analysis of Remotely Sensed Imagery

Image Enhancement       225

                             5   5   5   5   5        0   0   0   0   0
                             5   5   5   5   5        0   0   0   0   0
                            17  17  17  17  17       12  12  12  12  12
                            17  17  17  17  17        0   0   0   0   0
                                  (a)                       (b)

               FIGURE 6.15  Principle of edge detection through image self-subtraction.
               (a) Raw image of 4 by 5 pixels showing a horizontally oriented edge;
               (b) self-subtracted image showing the edge after a vertical shift by one
               pixel (e.g., l = 1, m = 0). The first row is a duplication of the second row.


               6.4.1 Enhancement through Subtraction
               Edge enhancement through image self-subtraction is underpinned
               by the fact that nonedge features have spatially uniform values, in
               sharp contrast to edges, across which the pixel value changes drastically
               and usually abruptly along a certain direction (Fig. 6.15a). A new
               image is created by duplicating the existing one. If this duplicate
               is subtracted from the source image, nothing remains in the
               resultant image: no edges are detectable through this subtraction.
               The subtraction becomes effective if one of the images is first shifted
               slightly to the left or right, up or down, or even diagonally, by
               one or two pixels. This operation is mathematically expressed as
                       ΔDN(i, j) = DN(i, j) − DN(i + l, j + m) + b        (6.10)
               where       DN(i, j) = pixel value at location (i, j)
                     DN(i + l, j + m) = pixel value at location (i + l, j + m)
                                    of the same image
                                    (l, m = 0, 1, 2, …, the distance of shift)
                                 b = bias to prevent the emergence of negative
                                    differences
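               A minimal NumPy sketch of Eq. (6.10), checked against the 4 by 5
               image of Fig. 6.15 (the function name, the edge-duplication padding
               choice, and b = 0 are illustrative assumptions, not from the text):

```python
import numpy as np

def self_subtract(img, l=1, m=0, b=0):
    """Image self-subtraction of Eq. (6.10): DN(i, j) - DN(shifted) + b.

    The duplicated image is shifted down by l rows and right by m columns;
    the vacated border is filled by duplicating the nearest edge pixels,
    as in the caption of Fig. 6.15. b is a bias that can be added to keep
    differences from going negative.
    """
    img = np.asarray(img, dtype=int)
    # Pad l rows on top and m columns on the left by edge duplication,
    # then crop back to the original size: this is the shifted duplicate.
    shifted = np.pad(img, ((l, 0), (m, 0)), mode="edge")[: img.shape[0], : img.shape[1]]
    return img - shifted + b

# The 4-by-5 raw image of Fig. 6.15a: a horizontal edge between rows 2 and 3
raw = np.array([[5] * 5, [5] * 5, [17] * 5, [17] * 5])
diff = self_subtract(raw, l=1, m=0, b=0)
# Rows of diff: 0, 0, 12, 0 -- matching Fig. 6.15b
```

               With l = 1 and m = 0 the only nonzero row of the difference image
               marks the horizontal edge, reproducing panel (b) of the figure.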
                   The above subtraction essentially compares the pixel values of
               the same image at a spatial separation of (l, m).
                   In the difference image, all nonedge pixels have a value of zero,
               in stark contrast to edge pixels, which have nonzero values (Fig. 6.15b).
               Thus, nonedge features disappear from the difference image, leaving
               only linear features behind. This newly derived layer
               can be added back to the original image to enhance edges. It must be
               noted that a single subtraction detects only those features
               perpendicular to the direction of shift. If linear features are
               oriented in multiple directions, several self-subtractions are
               necessary to detect them all. In each subtraction the duplicated
               image is shifted in one of the four possible directions, and all the
               separately detected edges are then merged into one composite image.
               As with spatial filtering, the output image may have a dimension
               smaller than that of the input image. This can be restored by
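               The multi-direction procedure above can be sketched as follows; the
               particular shift set and the maximum-based merge are illustrative
               choices under the text's description, not a prescribed algorithm:

```python
import numpy as np

def shift_dup(img, l, m):
    """Shift img by (l, m) pixels (rows, columns), duplicating edge
    pixels into the vacated border; negative values shift up/left."""
    pads = ((max(l, 0), max(-l, 0)), (max(m, 0), max(-m, 0)))
    padded = np.pad(img, pads, mode="edge")
    r0, c0 = max(-l, 0), max(-m, 0)
    return padded[r0 : r0 + img.shape[0], c0 : c0 + img.shape[1]]

def composite_edges(img, shifts=((1, 0), (0, 1), (1, 1), (1, -1))):
    """One self-subtraction per shift direction (vertical, horizontal,
    and the two diagonals here), merged into a single composite edge
    layer by taking the per-pixel maximum absolute difference."""
    img = np.asarray(img, dtype=int)
    layers = [np.abs(img - shift_dup(img, l, m)) for l, m in shifts]
    return np.maximum.reduce(layers)

raw = np.array([[5] * 5, [5] * 5, [17] * 5, [17] * 5])
edges = composite_edges(raw)   # nonzero only along the horizontal edge
enhanced = raw + edges         # edge layer added back to the original
```

               Because the sample image contains only a horizontal edge, the
               vertical and diagonal shifts all flag the same row; with edges
               in several orientations, each shift contributes its own set of
               detected features to the composite.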