
92                                                                        3 Image processing


                                   Another commonly used dyadic (two-input) operator is the linear blend operator,

                                                       g(x) = (1 − α) f_0(x) + α f_1(x).                 (3.6)

                                By varying α from 0 → 1, this operator can be used to perform a temporal cross-dissolve
                                between two images or videos, as seen in slide shows and film production, or as a component
                                of image morphing algorithms (Section 3.6.3).
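The cross-dissolve described above can be sketched in a few lines of NumPy; `linear_blend` is a hypothetical helper name chosen here for illustration:

```python
import numpy as np

def linear_blend(f0, f1, alpha):
    """Dyadic linear blend g(x) = (1 - alpha) * f0(x) + alpha * f1(x)."""
    return (1.0 - alpha) * f0 + alpha * f1

# Temporal cross-dissolve: sweep alpha from 0 to 1 across frames.
f0 = np.zeros((2, 2))   # stand-in for the first image
f1 = np.ones((2, 2))    # stand-in for the second image
frames = [linear_blend(f0, f1, a) for a in np.linspace(0.0, 1.0, 5)]
```

Sweeping α over more steps simply produces a smoother dissolve.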
                                   A widely used non-linear transform, often applied to images before further process-
                                ing, is gamma correction, which removes the non-linear mapping between input
                                radiance and quantized pixel values (Section 2.3.2). To invert the gamma mapping applied
                                by the sensor, we can use
                                                             g(x) = [f(x)]^(1/γ),                       (3.7)

                                where a gamma value of γ ≈ 2.2 is a reasonable fit for most digital cameras.
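As a minimal sketch, equation (3.7) can be applied to a normalized image array as follows (the function name `inverse_gamma` is an assumption for this example, not a library call):

```python
import numpy as np

def inverse_gamma(f, gamma=2.2):
    """Undo the sensor's gamma mapping: g(x) = f(x)^(1/gamma).

    f is assumed to be a float array normalized to [0, 1].
    """
    return np.clip(f, 0.0, 1.0) ** (1.0 / gamma)

# Example: with gamma = 2.0, a quantized value of 0.25 maps back to 0.5.
g = inverse_gamma(np.array([0.0, 0.25, 1.0]), gamma=2.0)
```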

                                3.1.2 Color transforms

                                While color images can be treated as arbitrary vector-valued functions or collections of inde-
                                pendent bands, it usually makes sense to think about them as highly correlated signals with
                                strong connections to the image formation process (Section 2.2), sensor design (Section 2.3),
                                and human perception (Section 2.3.2). Consider, for example, brightening a picture by adding
                                a constant value to all three channels, as shown in Figure 3.2b. Can you tell if this achieves the
                                desired effect of making the image look brighter? Can you see any undesirable side-effects
                                or artifacts?
                                   In fact, adding the same value to each color channel not only increases the apparent in-
                                tensity of each pixel, it can also affect the pixel’s hue and saturation. How can we define and
                                manipulate such quantities in order to achieve the desired perceptual effects?
                                   As discussed in Section 2.3.2, chromaticity coordinates (2.104) or even simpler color ra-
                                tios (2.116) can first be computed and then used after manipulating (e.g., brightening) the
                                luminance Y to re-compute a valid RGB image with the same hue and saturation. Figure
                                2.32g–i shows some color ratio images multiplied by the middle gray value for better visual-
                                ization.
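A minimal sketch of this idea, using color ratios to brighten the luminance Y while keeping hue and saturation fixed (the Rec. 601 luma weights and the helper name are assumptions for this example):

```python
import numpy as np

def brighten_preserve_color(rgb, gain):
    """Scale luminance while keeping per-pixel color ratios fixed.

    rgb: float array of shape (..., 3) in [0, 1].
    The luminance Y is computed with the Rec. 601 luma weights;
    the per-channel ratios R/Y, G/Y, B/Y encode hue and saturation.
    """
    w = np.array([0.299, 0.587, 0.114])
    y = rgb @ w                                     # per-pixel luminance
    ratios = rgb / np.maximum(y[..., None], 1e-8)   # color ratios
    return np.clip(ratios * (gain * y)[..., None], 0.0, 1.0)

# A neutral gray pixel stays neutral after brightening.
out = brighten_preserve_color(np.array([[0.5, 0.5, 0.5]]), gain=1.2)
```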
                                   Similarly, color balancing (e.g., to compensate for incandescent lighting) can be per-
                                formed either by multiplying each channel with a different scale factor or by the more com-
                                plex process of mapping to XYZ color space, changing the nominal white point, and mapping
                                back to RGB, which can be written down using a linear 3 × 3 color twist transform matrix.
                                Exercises 2.9 and 3.1 have you explore some of these issues.
                                   Another fun project, best attempted after you have mastered the rest of the material in
                                this chapter, is to take a picture with a rainbow in it and enhance the strength of the rainbow
                                (Exercise 3.29).


                                3.1.3 Compositing and matting

                                In photo editing and visual effects applications, it is often desirable to cut a foreground
                                object out of one scene and place it on top of a different background (Figure 3.4). The process
                                of extracting the object from the original image is often called matting (Smith and Blinn