allocate N² memory locations to store A, and N for b
for i = 1, 2, ..., N − 1; iterate over columns from left to right
    if a_ii = 0, STOP to avoid division by zero
    for j = i + 1, i + 2, ..., N; iterate over rows below the diagonal
        λ ← a_ji/a_ii
        for k = i, i + 1, ..., N; iterate in row j from column i to the right
            a_jk ← a_jk − λ a_ik
        end for k = i, i + 1, ..., N
        b_j ← b_j − λ b_i
    end for j = i + 1, i + 2, ..., N
end for i = 1, 2, ..., N − 1
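For concreteness, here is one way this elimination pseudocode might look in Python with NumPy (a sketch under our own naming; gaussian_eliminate is not a function from the text):

    import numpy as np

    def gaussian_eliminate(A, b):
        """Reduce A x = b to upper-triangular form (no pivoting)."""
        A = np.array(A, dtype=float)        # work on copies so the caller's data survive
        b = np.array(b, dtype=float)
        N = b.size
        for i in range(N - 1):              # columns, left to right
            if A[i, i] == 0.0:              # the STOP condition of the pseudocode
                raise ZeroDivisionError("zero pivot; pivoting would be required")
            for j in range(i + 1, N):       # rows below the diagonal
                lam = A[j, i] / A[i, i]     # λ ← a_ji/a_ii
                A[j, i:] -= lam * A[i, i:]  # a_jk ← a_jk − λ a_ik, k = i, ..., N
                b[j] -= lam * b[i]          # b_j ← b_j − λ b_i
        return A, b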

The basic computational unit of this algorithm is the scalar operation

    d ← a + b × c                         (1.97)

that comprises two floating-point operations (FLOPs): a scalar multiplication and a scalar addition. The term "floating point operation" originates from the binary system for representing real numbers in digital memory, described in the supplemental material found at the accompanying website. It is common to describe the computational effort required by an algorithm in terms of the number of FLOPs that it performs during execution.
The algorithm for Gaussian elimination contains three nested loops that each run over all or part of the set of indices {1, 2, ..., N}. Therefore, the total number of FLOPs is proportional to N³ (the exact number, 2N³/3, is computed in the supplemental material). If we increase N by a factor of 10, the amount of work required increases by a factor of 10³ = 1000. We see here the major problem with Gaussian elimination: it can become very costly for large systems!
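To make the scaling concrete, a quick tabulation of the leading-order 2N³/3 count (the loop below is our own illustration, not from the text):

    for N in (100, 1_000, 10_000):
        print(f"N = {N:>6}: about {2 * N**3 / 3:.1e} FLOPs for elimination")

Each factor of 10 in N raises the count by a factor of 1000, from roughly 7 × 10⁵ FLOPs at N = 100 to 7 × 10¹¹ at N = 10 000.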
Backward substitution follows the algorithm:

allocate N memory locations for the components of x
for i = N, N − 1, ..., 1; iterate over rows from bottom to top
    sum ← 0
    for j = i + 1, i + 2, ..., N; iterate over columns to the right of the diagonal
        sum ← sum + a_ij × x_j
    end for j = i + 1, i + 2, ..., N
    x_i ← [b_i − sum]/a_ii
end for i = N, N − 1, ..., 1
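A matching Python sketch (again with our own function name; it assumes b is a NumPy array, as returned by the elimination sketch above):

    import numpy as np

    def back_substitute(A, b):
        """Solve an upper-triangular system A x = b by backward substitution."""
        N = b.size
        x = np.zeros(N)
        for i in range(N - 1, -1, -1):    # rows, bottom to top
            s = A[i, i + 1:] @ x[i + 1:]  # sum ← Σ a_ij x_j over j = i+1, ..., N
            x[i] = (b[i] - s) / A[i, i]   # x_i ← [b_i − sum]/a_ii
        return x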

As there are only two nested loops, the number of FLOPs scales only as N². Therefore, for large systems, it takes far longer (by a factor of N) to transform the system into triangular form by elimination (2N³/3 FLOPs) than it does to solve the resulting triangular system by substitution (N² FLOPs).
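Putting the two sketches together on a small made-up system (the 3 × 3 matrix is purely illustrative):

    A = np.array([[4.0, 1.0, 2.0],
                  [2.0, 5.0, 1.0],
                  [1.0, 2.0, 6.0]])
    b = np.array([7.0, 8.0, 9.0])

    U, c = gaussian_eliminate(A, b)    # the O(N³) elimination step
    x = back_substitute(U, c)          # the O(N²) substitution step
    print(x, np.allclose(A @ x, b))    # prints the solution and True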