Page 100 - DSP Integrated Circuits

3.14 Adaptive DSP Algorithms

             range for the adaptation process. However, only about 12 bits of the coefficients
             are used in multiplications within the FIR filter.
                 Several variations on this theme have been proposed and some have even
             been incorporated into commercial modems. The variations try to:

                 - simplify the algorithm,
                 - improve the rate at which the tap values converge to their optimum
                   values, and
                 - make the steady-state tap values closer to their ideal values.

             The work load can be reduced by simplifying the computation of the gradient
             α[y(n) - d(n)]x(n - k) by quantizing one or both of the two last factors to ±1. Fur-
             ther, the forgetting factor α may be selected equal to a power of two so that it can
             be realized by a simple shift operation. The penalty for these simplifications is,
             however, a significant decrease in the rate of convergence. It can be shown that a
             large value of α, which depends on the channel, will improve the rate of convergence,
             but the algorithm becomes unstable if too large a value is used. The steady-state
             errors in the tap values are proportional to α². Hence, a small value of α is
             preferable in order to improve the accuracy of the tap value adjustment. The LMS
             algorithm is appropriate for slowly time-variant channels.
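As a sketch of these simplifications (the function and parameter names below are illustrative, not from the text), a sign-sign variant of the stochastic-gradient update with a power-of-two α might look like:

```python
import numpy as np

def sign_sign_lms(x, d, N=8, alpha_shift=6):
    """Sign-sign variant of the stochastic-gradient (LMS) update.

    The gradient term alpha*[y(n) - d(n)]*x(n - k) is simplified by
    quantizing both the error and the data sample to +/-1, and by
    choosing alpha = 2**(-alpha_shift), a power of two, so that the
    scaling reduces to an arithmetic shift in hardware.
    """
    h = np.zeros(N)          # tap values
    x_buf = np.zeros(N)      # delay line: x(n), x(n-1), ..., x(n-N+1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        x_buf[1:] = x_buf[:-1]           # shift the delay line
        x_buf[0] = x[n]
        y[n] = h @ x_buf                 # FIR filter output
        e = y[n] - d[n]                  # output error
        # h_k <- h_k - 2^(-alpha_shift) * sign(e) * sign(x(n-k))
        h -= np.sign(e) * np.sign(x_buf) / 2**alpha_shift
    return h, y
```

With a power-of-two α the per-tap update costs only a sign comparison and a shift-and-add, at the price of the slower convergence noted above.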
                 More recently, equalizers that do not require an initial training period have
             become popular. The algorithms used in such blind equalizers adjust the tap
             values without knowledge of the transmitted training sequence.

             3.14.2 RLS (Recursive Least Square) Lattice Filters

             Equalizers with short training times are required in many applications. A rapidly
             convergent algorithm is the fast Kalman algorithm [23, 32]. The price for this
             improvement is the increased complexity and sensitivity to round-off errors in the
             computations. The adaptation may also become unstable. The fast Kalman algo-
             rithm requires many more operations then the stochastic-gradient algorithm. The
             work load per input sample is about 1(W multiplications, 9N additions, and 2N
             divisions. Note that division is an expensive (i.e., time-consuming) operation.
             Some adaptive algorithms involve square root operations which are even more
             expensive.
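For comparison, the quoted per-sample counts can be tabulated; the stochastic-gradient figure of roughly 2N multiplications and additions is an assumption added here for contrast, not a number from the text:

```python
def fast_kalman_ops(N):
    """Per-sample operation counts quoted in the text for the
    fast Kalman algorithm with N taps."""
    return {"mul": 10 * N, "add": 9 * N, "div": 2 * N}

def stochastic_gradient_ops(N):
    """Roughly 2N multiplications and 2N additions per sample for the
    stochastic-gradient (LMS) update, counting both the FIR filtering
    and the tap update; this count is an assumed ballpark figure."""
    return {"mul": 2 * N, "add": 2 * N, "div": 0}
```

Even before accounting for the cost of each division, the fast Kalman algorithm is roughly five times more expensive per sample than the stochastic-gradient algorithm.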
                An important class of adaptive algorithms is the RLS (recursive least square)
             filters. RLS algorithms are usually realized by using so-called lattice filters.³
             There exist many variations of the basic RLS algorithm [33, 37]. Box 3.1 shows
             the so-called "direct update (error-feedback) form of an a priori RLS lattice
             algorithm."
                 The sequences f_m(n) and b_m(n) are the forward and backward error residuals
             while K_f,m(n) and K_b,m(n) are the corresponding reflection coefficients. The index m
             refers to the stage of the filter. Typically, the order of the filter, N, is in the range 8
             to 12 for speech coders, but much higher filter orders may be required for channels
             with long delays. Besides its rapid convergence, the lattice filter is relatively
             insensitive to round-off errors incurred in the computations. Hence, the computa-
             tions can be made using a short data word length.
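A minimal sketch of the per-stage recursions of an a priori, error-feedback RLS lattice predictor is given below. It is based on the standard LRLS recursions from the adaptive-filtering literature; Box 3.1 itself is not reproduced in this excerpt, so its exact formulation may differ in detail.

```python
import numpy as np

def rls_lattice_predict(x, M=2, lam=0.99, delta=1e-2):
    """Sketch of an a priori, error-feedback RLS lattice predictor.

    f[m], b[m]   : forward/backward a priori error residuals of stage m
    kf[m], kb[m] : forward/backward reflection coefficients K_f,m, K_b,m
    F[m], B[m]   : exponentially weighted forward/backward error energies
    gamma        : conversion (likelihood) factor
    lam          : exponential forgetting factor; delta regularizes the
                   energies at start-up to avoid division by zero
    """
    kf = np.zeros(M)
    kb = np.zeros(M)
    F = np.full(M, delta)
    B = np.full(M, delta)
    b_prev = np.zeros(M + 1)              # b_m(n-1), m = 0..M
    for n in range(len(x)):
        f = np.zeros(M + 1)
        b = np.zeros(M + 1)
        f[0] = b[0] = x[n]                # stage 0: the input sample
        gamma = 1.0
        for m in range(M):
            # order-update of the a priori error residuals
            f[m + 1] = f[m] + kf[m] * b_prev[m]
            b[m + 1] = b_prev[m] + kb[m] * f[m]
            # time-update of the error energies
            B[m] = lam * B[m] + gamma * b_prev[m] ** 2
            F[m] = lam * F[m] + gamma * f[m] ** 2
            # direct (error-feedback) update of the reflection coefficients
            kf[m] -= gamma * b_prev[m] * f[m + 1] / B[m]
            kb[m] -= gamma * f[m] * b[m + 1] / F[m]
            # update of the conversion factor
            gamma -= gamma ** 2 * b_prev[m] ** 2 / B[m]
        b_prev = b
    return kf, kb
```

Note that each stage requires divisions by the error energies F and B, which is why division cost matters for RLS lattice implementations, while the per-stage structure is identical for all m, so the filter maps naturally onto a regular hardware architecture.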



             ³ This type of lattice filter should not be confused with lattice wave digital filters, which will be
                discussed in Chapter 4.