
Proof.  Observe that the $t$th component of the entrywise product $F_{*j} \times F_{*k}$ is
$$
[F_{*j} \times F_{*k}]_t = \xi^{tj}\,\xi^{tk} = \xi^{t(j+k)} = [F_{*j+k}]_t,
$$
so that the columns of $F$ have the property that
$$
F_{*j} \times F_{*k} = F_{*j+k} \quad \text{for each } j, k = 0, 1, \ldots, (n-1).
$$
This means that if $F\hat{\mathbf{a}}$, $F\hat{\mathbf{b}}$, and $F(\mathbf{a} \star \mathbf{b})$ are expressed as combinations of columns of $F$ as indicated below,
$$
F\hat{\mathbf{a}} = \sum_{k=0}^{n-1} \alpha_k F_{*k}, \qquad
F\hat{\mathbf{b}} = \sum_{k=0}^{n-1} \beta_k F_{*k}, \qquad \text{and} \qquad
F(\mathbf{a} \star \mathbf{b}) = \sum_{k=0}^{2n-2} [\mathbf{a} \star \mathbf{b}]_k F_{*k},
$$
then the computation of $(F\hat{\mathbf{a}}) \times (F\hat{\mathbf{b}})$ is exactly the same as forming the product of two polynomials in the sense that
$$
(F\hat{\mathbf{a}}) \times (F\hat{\mathbf{b}})
= \left(\sum_{k=0}^{n-1} \alpha_k F_{*k}\right) \times \left(\sum_{k=0}^{n-1} \beta_k F_{*k}\right)
= \sum_{k=0}^{2n-2} \left(\sum_{j=0}^{k} \alpha_j \beta_{k-j}\right) F_{*k}
= \sum_{k=0}^{2n-2} [\mathbf{a} \star \mathbf{b}]_k F_{*k}
= F(\mathbf{a} \star \mathbf{b}).
$$
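To see the polynomial analogy concretely, take $n = 2$ with $p(x) = \alpha_0 + \alpha_1 x$ and $q(x) = \beta_0 + \beta_1 x$:
$$
p(x)\,q(x) = \alpha_0\beta_0 + (\alpha_0\beta_1 + \alpha_1\beta_0)\,x + \alpha_1\beta_1\,x^2,
$$
and these coefficients are exactly the components $[\mathbf{a} \star \mathbf{b}]_k = \sum_{j=0}^{k} \alpha_j \beta_{k-j}$, $k = 0, 1, 2$, of the convolution of $\mathbf{a} = (\alpha_0, \alpha_1)^T$ with $\mathbf{b} = (\beta_0, \beta_1)^T$.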
According to the convolution theorem, the convolution of two $n \times 1$ vectors can be computed by executing three discrete Fourier transforms of order $2n$:
$$
\mathbf{a}_{n \times 1} \star \mathbf{b}_{n \times 1}
= F_{2n}^{-1}\left[(F_{2n}\hat{\mathbf{a}}) \times (F_{2n}\hat{\mathbf{b}})\right].
\tag{5.8.13}
$$
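In practice this recipe is a few lines in any FFT library. The sketch below (NumPy assumed; the helper name convolve_via_fft is ad hoc) pads each vector with $n$ zeros to form $\hat{\mathbf{a}}$ and $\hat{\mathbf{b}}$, and uses np.fft.fft and np.fft.ifft in place of $F_{2n}$ and $F_{2n}^{-1}$; because both transforms come from the same library, the identity holds regardless of the library's sign convention for $\xi$.

import numpy as np

def convolve_via_fft(a, b):
    """Convolution of two n-vectors via (5.8.13): three transforms of order 2n."""
    n = len(a)
    a_hat = np.concatenate([a, np.zeros(n)])      # zero-pad a to a 2n-vector
    b_hat = np.concatenate([b, np.zeros(n)])      # zero-pad b to a 2n-vector
    prod = np.fft.fft(a_hat) * np.fft.fft(b_hat)  # (F_2n a-hat) x (F_2n b-hat), entrywise
    c = np.fft.ifft(prod)                         # apply the inverse transform
    return c[:2*n - 1].real                       # the convolution has 2n - 1 components

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(convolve_via_fft(a, b))  # [ 4. 13. 28. 27. 18.]
print(np.convolve(a, b))       # same values, straight from the definition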
The fact that one of them is an inverse transform is not a source of difficulty; recall Example 5.8.2. But it is still not clear that much has been accomplished. Performing a convolution by following the recipe called for in definition (5.8.10) requires $n^2$ scalar multiplications (you are asked to verify this in the exercises).
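To make the count explicit, here is definition (5.8.10) implemented directly with a multiplication counter (the helper name convolve_direct is ad hoc): each of the $n^2$ pairs $(j, k)$ contributes exactly one scalar multiplication.

def convolve_direct(a, b):
    """Convolution straight from the definition, counting scalar multiplications."""
    n = len(a)
    c = [0.0] * (2*n - 1)  # the convolution has 2n - 1 components
    mults = 0
    for j in range(n):
        for k in range(n):
            c[j + k] += a[j] * b[k]  # one scalar multiplication per pair (j, k)
            mults += 1
    return c, mults                  # mults == n**2

print(convolve_direct([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
# ([4.0, 13.0, 28.0, 27.0, 18.0], 9)  -- n = 3 gives n^2 = 9 multiplications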
Performing a discrete Fourier transform of order $2n$ by standard matrix–vector multiplication requires $4n^2$ scalar multiplications, so using matrix–vector multiplication to perform the computations on the right-hand side of (5.8.13) costs at least $3 \times 4n^2 = 12n^2$ scalar multiplications, i.e., at least 12 times the number demanded by the definition of convolution. So, if there is an advantage to be gained by using the convolution theorem, then it is necessary to be able to perform a discrete Fourier transform in far fewer scalar multiplications than standard matrix–vector multiplication requires. It was not until 1965 that this hurdle was overcome. Two Americans, J. W. Cooley and J. W. Tukey, introduced a fast Fourier transform (FFT) algorithm that requires only on the order of $(n/2)\log_2 n$ scalar multiplications to compute $F_n \mathbf{x}$. Using the FFT together with the convolution theorem requires