
Section 6.6  Notes  192


                            Texture Synthesis
Texture synthesis exhausted us long before we could exhaust it. Patch-based texture synthesis is due to Efros and Freeman (2001); this paper hints at a form of conditional texture synthesis. Hertzmann et al. (2001) demonstrate that conditional texture synthesis can do the most amazing tricks. Vivek Kwatra and Li-Yi Wei organized an excellent course on texture synthesis at SIGGRAPH 2007; the notes are at http://www.cs.unc.edu/~kwatra/SIG07_TextureSynthesis/index.htm.
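The core of the Efros and Freeman approach is to lay down overlapping patches copied from the sample, choosing each new patch so that it agrees with what is already synthesized in the overlap region (the full algorithm then cuts a minimum-error seam through each overlap, which this sketch omits). A minimal one-row illustration of the patch-selection step, with function name and parameters of our own choosing:

```python
import numpy as np

def quilt(sample, n_patches, patch=16, overlap=4, seed=0):
    """Toy patch-based synthesis in the spirit of Efros and Freeman
    (2001): paste patches left to right, picking each new patch so its
    left strip best matches the pixels already placed in the overlap.
    `sample` is a 2D grayscale array; no seam cut is performed."""
    rng = np.random.default_rng(seed)
    h, w = sample.shape
    step = patch - overlap
    out = np.zeros((patch, step * n_patches + overlap))
    # all candidate top-left corners of patches inside the sample
    cand = [(i, j) for i in range(h - patch + 1)
            for j in range(w - patch + 1)]
    # first patch: chosen at random
    i, j = cand[rng.integers(len(cand))]
    out[:, :patch] = sample[i:i + patch, j:j + patch]
    for k in range(1, n_patches):
        x = k * step                   # column where the new patch starts
        target = out[:, x:x + overlap] # existing pixels in the overlap
        # pick the candidate whose left strip best matches the overlap
        errs = [np.sum((sample[i:i + patch, j:j + overlap] - target) ** 2)
                for (i, j) in cand]
        i, j = cand[int(np.argmin(errs))]
        out[:, x:x + patch] = sample[i:i + patch, j:j + patch]
    return out
```

In the real algorithm, the seam cut through the overlap hides the remaining mismatch, and patches are laid out in two dimensions rather than in a single row.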

                            Denoising
Early work on image denoising relied on various smoothness assumptions, such as Gaussian smoothing, anisotropic filtering (Perona and Malik 1990c), total variation (Rudin et al. 2004), or image decompositions on fixed bases such as wavelets (Donoho and Johnstone 1995; Mallat 1999). More recent approaches include non-local means filtering (Buades et al. 2005), which exploits image self-similarities, learned sparse models (Elad and Aharon 2006; Mairal et al. 2009), Gaussian scale mixtures (Portilla et al. 2003), fields of experts (Roth and Black 2005), and block matching with 3D filtering (BM3D) (Dabov et al. 2007). The idea of using self-similarities as a prior for natural images, exploited by the non-local means approach of Buades et al. (2005), has in fact appeared in the literature in various guises and under different equivalent interpretations, e.g., kernel density estimation (Efros and Leung 1999), Nadaraya-Watson estimators (Buades et al. 2005), mean-shift iterations (Awate and Whitaker 2006), diffusion processes on graphs (Szlam et al. 2007), and long-range random fields (Li and Huttenlocher 2008).
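The self-similarity idea behind non-local means is simple to state: each pixel is replaced by a weighted average of pixels in a search window, where the weight of a pixel depends on how similar its surrounding patch is to the patch around the pixel being denoised. A brute-force sketch of that idea (the function name and the Gaussian weighting parameter h are ours; practical implementations of Buades et al.'s method are far more efficient):

```python
import numpy as np

def nonlocal_means(img, patch=3, search=7, h=0.1):
    """Minimal non-local means on a 2D float image in [0, 1]: average
    over the search window, weighting each candidate pixel by the
    similarity of its patch to the reference patch. `h` controls how
    fast the weights decay with patch distance."""
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr  # center in padded coords
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1,
                                  nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch distance
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```

Because dissimilar patches get near-zero weights, flat regions are averaged aggressively while sharp edges survive, which is exactly the behavior a plain Gaussian filter cannot deliver.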
We have restricted our discussion of sparsity-inducing regularizers to the ℓ1 norm here, but the ℓ0 pseudo-norm, which counts the number of nonzero coefficients in the code associated with a noisy signal, can be used as well. Chapter 22 discusses ℓ0-regularized sparse coding and dictionary learning in some detail. Let us just note here that simultaneous sparse coding is also relevant in that case, the ℓ1,2 norm being replaced by the ℓ0,∞ pseudo-norm, which directly counts the number of nonzero rows. See Mairal et al. (2009) for details. An implementation of non-local means is available at http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/, and BM3D is available at http://www.cs.tut.fi/~foi/GCF-BM3D/. An implementation of LSSC is available at http://www.di.ens.fr/~mairal/denoise_ICCV09.tar.gz.
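These norms are easy to compute for a coefficient matrix whose columns jointly code a group of similar patches and whose rows index dictionary atoms. The small numpy example below (the matrix itself is made up for illustration) contrasts the entrywise ℓ1 and ℓ0 penalties with the grouped ℓ1,2 and ℓ0,∞ penalties, which encourage entire rows to be zero so that the patches in a group share a common set of atoms:

```python
import numpy as np

# Hypothetical coefficient matrix: rows = dictionary atoms,
# columns = the patches in one group of similar patches.
A = np.array([[0.0,  0.0, 0.0],
              [1.5, -0.5, 2.0],
              [0.0,  0.0, 0.0],
              [0.2,  0.0, 0.1]])

l1 = np.abs(A).sum()                # l1 norm: sum of magnitudes
l0 = np.count_nonzero(A)            # l0 pseudo-norm: nonzero entries
row_l2 = np.linalg.norm(A, axis=1)  # Euclidean norm of each row
l12 = row_l2.sum()                  # l1,2 norm: sum of row norms
l0inf = np.count_nonzero(row_l2)    # l0,inf pseudo-norm: nonzero rows
```

Here the entrywise counts see five nonzero coefficients, while the grouped ℓ0,∞ count sees only two active atoms (rows), which is the quantity simultaneous sparse coding penalizes.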

                            Shape from Texture

                            We have assumed that textures are albedo marks on smooth surfaces. This really
                            isn’t true, as van Ginneken et al. (1999) point out; an immense number of textures
                            are caused by indentations on surfaces (the bark on a tree, for example, where the
                            main texture effect seems to be dark shadows in the grooves of the bark), or by
                            elements suspended in space (the leaves of a tree, say). Such textures still give
                            us a sense of shape—for example, in Figure 6.1, one has a sense of the free space
                            in the picture where one could move. The resulting changes in appearance as the
illumination and view directions change are complicated (Dana et al. 1999; Lu et al. 1999; Lu et al. 1998; Pont and Koenderink 2002). We don’t discuss this case