
90                                     Introduction to Control Theory

            2. f(t, 0) = 0, i.e., the origin is an equilibrium point of f(t, x)

            3. ||f(t, x1) − f(t, x2)|| ≤ β1||x1 − x2||, for some β1 > 0

            4. ||g(t, x1)|| ≤ β2 r, for some β2 > 0

            5. ||g(t, x1) − g(t, x2)|| ≤ β2||x1 − x2||

            6.

            Then there exists a unique solution x(t) to 1.5.5, and





            The total stability theorem will be used to design controllers that
            make the linear part of the system exponentially stable. In effect, the
            theorem guarantees that if the linear part of the system is “very” stable
            (exponentially stable), the destabilizing effect of the bounded
            nonlinearities is not sufficient to destabilize the system, and the state
            remains bounded.
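
            This behavior can be illustrated numerically. The sketch below uses a
            hypothetical scalar system (not one from the text) whose constants match
            the theorem's hypotheses: an exponentially stable linear part, a Lipschitz
            nonlinearity vanishing at the origin, and a bounded perturbation.

```python
import math

# Hypothetical scalar system (an illustration, not the book's example):
#   x_dot = -2*x + 0.5*sin(x) + cos(t)
# The linear part x_dot = -2x is exponentially stable (K = 1, a = 2);
# f(t, x) = 0.5*sin(x) is Lipschitz with beta1 = 0.5 and f(t, 0) = 0;
# g(t, x) = cos(t) is a bounded perturbation with |g| <= beta2 = 1.

def simulate(x0, t_end=20.0, dt=1e-3):
    """Forward-Euler integration; returns the peak |x(t)| over [0, t_end]."""
    x, t, peak = x0, 0.0, abs(x0)
    while t < t_end:
        x += dt * (-2.0 * x + 0.5 * math.sin(x) + math.cos(t))
        t += dt
        peak = max(peak, abs(x))
    return peak

# Despite the persistent disturbance cos(t), the state stays bounded
# because the exponentially stable linear part dominates.
print(simulate(x0=0.9))
```

            The disturbance cos(t) never vanishes, so the state does not converge to
            zero; it settles into a bounded oscillation, which is exactly what total
            stability promises.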


            EXAMPLE 2.10–2: Total Stability
            Consider the nonlinear system




            Let |x0| < 1, and note the following:

                                  K = 1, a = 2, β1 = 0.5, β2 = 1.

            Note first that all conditions of the theorem are satisfied; hence there
            exists a unique solution x(t), which is bounded by






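            The displayed bound is lost from this copy. As an assumption, in the
            Sastry-Bodson form of the theorem the Bellman-Gronwall lemma yields a
            bound of the form |x(t)| ≤ K|x0|e^(−(a − Kβ1)t) + Kβ2/(a − Kβ1), and the
            sketch below checks that expression, with the example's constants,
            against a hypothetical system consistent with those constants.

```python
import math

# ASSUMPTION: the Sastry-Bodson total-stability bound (valid for a > K*beta1):
#   |x(t)| <= K*|x0|*exp(-(a - K*beta1)*t) + K*beta2/(a - K*beta1)
# With K = 1, a = 2, beta1 = 0.5, beta2 = 1 this reads
#   |x(t)| <= |x0|*exp(-1.5*t) + 2/3.
# Hypothetical system matching those constants (not the book's example):
#   x_dot = -2*x + 0.5*sin(x) + cos(t)

K, a, beta1, beta2 = 1.0, 2.0, 0.5, 1.0
rate = a - K * beta1            # 1.5
offset = K * beta2 / rate       # 2/3

def violates_bound(x0, t_end=20.0, dt=1e-3, slack=1e-2):
    """Integrate with forward Euler; report any violation of the bound."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-2.0 * x + 0.5 * math.sin(x) + math.cos(t))
        t += dt
        if abs(x) > K * abs(x0) * math.exp(-rate * t) + offset + slack:
            return True
    return False

print(violates_bound(0.9))   # no violation: the bound holds along the trajectory
```

            Note how the bound has two parts: a transient that decays exponentially
            from |x0|, and a persistent term proportional to the disturbance bound β2.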
            A version of the Bellman-Gronwall lemma is proved in [Sastry and Bodson
            1989] and is presented next.
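
            One common form of the lemma states that if x(t) ≤ c + ∫ a(s)x(s) ds with
            a(s) ≥ 0, then x(t) ≤ c·exp(∫ a(s) ds); whether this matches the version
            stated in the text is an assumption. A quick numerical check on a sample
            pair of functions:

```python
import math

# Common form of the Bellman-Gronwall lemma (assumed; the text's exact
# statement may differ): if, for all t >= 0,
#     x(t) <= c + integral_0^t a(s)*x(s) ds,   with a(s) >= 0,
# then
#     x(t) <= c * exp(integral_0^t a(s) ds).
# Sample pair: x(t) = 1 + sin(t), a(s) = 1, c = 1.

def check(x, a, c, t_end=5.0, n=5000):
    """Verify both the hypothesis and the conclusion of the lemma on a grid."""
    dt = t_end / n
    integral_ax = 0.0   # running integral of a(s)*x(s) (trapezoid rule)
    integral_a = 0.0    # running integral of a(s)
    for i in range(1, n + 1):
        s_prev, s = (i - 1) * dt, i * dt
        integral_ax += 0.5 * dt * (a(s_prev) * x(s_prev) + a(s) * x(s))
        integral_a += 0.5 * dt * (a(s_prev) + a(s))
        assert x(s) <= c + integral_ax + 1e-9            # hypothesis holds
        assert x(s) <= c * math.exp(integral_a) + 1e-9   # conclusion holds
    return True

print(check(lambda t: 1.0 + math.sin(t), lambda t: 1.0, c=1.0))
```

            The lemma converts an implicit integral inequality on x into an explicit
            exponential bound, which is how the total stability estimate above is
            typically obtained.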







            Copyright © 2004 by Marcel Dekker, Inc.