
7.4: The Method of Least Squares

        In our original example, where a straight line was to be fitted
        to the data, the matrix A has two columns [a_1 a_2 ... a_m]^T and
        [1 1 ... 1]^T, while X = [c d]^T and B is the column
        [b_1 b_2 ... b_m]^T. Then E is the sum of the squares of the
        quantities c a_i + d - b_i.
            A vector X which minimizes E is called a least squares
        solution of the linear system AX = B. A least squares solution
        will be an actual solution of the system if and only if the
        system is consistent.
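As a concrete sketch of the straight-line example, the data points below are invented for illustration; the line b = c a + d is fitted by a library least squares routine and E is evaluated from the definition:

```python
import numpy as np

# Hypothetical data points (a_i, b_i); not taken from the text.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.1, 3.9, 6.2, 7.8])

# A has the two columns [a_1 ... a_m]^T and [1 ... 1]^T,
# so AX approximates B when X = [c d]^T describes the line b = c*a + d.
A = np.column_stack([a, np.ones_like(a)])

# np.linalg.lstsq returns a least squares solution of A X = B.
X, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
c, d = X

# E is the sum of the squares of the quantities c*a_i + d - b_i.
E = np.sum((c * a + d - b) ** 2)
print(c, d, E)
```

For this invented data the minimizing line is b = 1.94 a + 0.15, with E = 0.082.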
        The normal system

             Once again consider a linear system AX = B and write
        E = ||AX - B||^2. We will show how to minimize E. Put
        A = [a_{ij}]_{m,n} and let the entries of X and B be x_1, ..., x_n
        and b_1, ..., b_m respectively. The ith entry of AX - B is clearly
        (\sum_{j=1}^{n} a_{ij} x_j) - b_i. Hence


            E = \|AX - B\|^2 = \sum_{i=1}^{m}\Big(\Big(\sum_{j=1}^{n} a_{ij}x_j\Big) - b_i\Big)^2,

        which is a quadratic function of x_1, ..., x_n.
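To see that the expanded double sum and the norm form of E agree, here is a minimal numerical check with invented values for A, B, and X:

```python
import numpy as np

# A small invented instance (m = 3, n = 2); values chosen only for illustration.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
B = np.array([1.0, 0.0, 2.0])
X = np.array([0.5, -0.25])

# E via the norm: ||AX - B||^2
E_norm = np.linalg.norm(A @ X - B) ** 2

# E via the expanded double sum from the text:
# sum over i of ((sum over j of a_ij x_j) - b_i)^2
m, n = A.shape
E_sum = sum((sum(A[i, j] * X[j] for j in range(n)) - B[i]) ** 2
            for i in range(m))
print(E_norm, E_sum)
```

Both expressions evaluate to 2.25 here, as the identity requires.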
             At this juncture it is necessary to recall from calculus
        the procedure for finding the absolute minima of a function
        of several variables. First one finds the critical points of the
        function E, by forming its partial derivatives and setting them
        equal to zero:


            \frac{\partial E}{\partial x_k} = 2\sum_{i=1}^{m}\Big(\Big(\sum_{j=1}^{n} a_{ij}x_j\Big) - b_i\Big)a_{ik} = 0, \qquad k = 1, \ldots, n.

        Hence



            \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ik}a_{ij}x_j = \sum_{i=1}^{m} a_{ik}b_i, \qquad k = 1, \ldots, n.
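These n equations in x_1, ..., x_n are, row by row, the matrix equation (A^T A)X = A^T B. A sketch with invented data, checking that solving this system agrees with a library least squares routine:

```python
import numpy as np

# Invented overdetermined system (m = 4 equations, n = 2 unknowns).
A = np.array([[0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0],
              [3.0, 1.0]])
B = np.array([1.0, 2.0, 2.5, 4.1])

# The kth equation above is the kth row of (A^T A) X = A^T B,
# so a least squares solution can be found by solving that square system.
X_normal = np.linalg.solve(A.T @ A, A.T @ B)

# Cross-check against numpy's least squares routine.
X_lstsq, _, _, _ = np.linalg.lstsq(A, B, rcond=None)
print(X_normal, X_lstsq)
```

Both routes give X = [0.98, 0.93]^T for this data, as expected since they minimize the same quadratic E.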