
Bar-Cohen : Biomimetics: Biologically Inspired Technologies  DK3163_c016 Final Proof page 420 21.9.2005 11:49pm





Substituting Equations (16.32) and (16.36) into (16.37), the signal ε(t) is transformed to

    ε(t) = u_fb(t) + c^T(t) ξ_e(t) + k(t) e(t)        (16.39)
Using these signals, we then have the following two D.O.F. adaptive control theorem:

Theorem: For the force-controlled object P(s) with its unknown inverse dynamics given as (16.24)–(16.26), if we adjust the parameter of the feedforward controller (16.28)–(16.30) as

    du(t)/dt = α ξ̃(t) [u_fb(t) + c^T(t) ξ_e(t) + k(t) e(t)]        (16.40)
then the force control error e(t) → 0. In addition, if ξ̃(t) satisfies the PE (persistent excitation) condition, that is, the time average of ξ̃(t) ξ̃^T(t) is positive definite, then the feedforward controller Q(u) described by (16.28)–(16.30) tends to P^{-1}(s).
   The detailed proof of this theorem is given in Muramatsu and Watanabe (2004).
   The adaptation law (16.40) can be interpreted as a combination of feedback error learning and learning control, since u(t) is adjusted by both the feedback input u_fb(t) and the feedback error e(t). In addition, ξ_e(t) is itself generated from e(t).
   Note that the convergence Q(u) → P^{-1}(s) means that we can realize the time response f(t) exactly as the desired f_d(t), without any feedback-loop delay. However, in order to achieve this convergence, the desired f_d(t) must satisfy the PE condition during the adaptation process.
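The role of the PE condition can be illustrated numerically. The sketch below builds a simple two-element regressor from a candidate desired force (the regressor choice [f_d, df_d/dt] is an assumption for illustration, not the chapter's ξ) and checks whether the time integral of ξ ξ^T is positive definite: a constant f_d fails the test, while a noise-like f_d passes, which is why the simulations below use noise segments for f_d.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)

def pe_level(fd):
    """Smallest eigenvalue of the integral of xi xi^T over [0, T].

    Positive value => the (hypothetical) regressor xi = [f_d, df_d/dt]
    is persistently exciting over this window.
    """
    xi = np.vstack([fd, np.gradient(fd, dt)])
    M = (xi @ xi.T) * dt          # approximates \int xi(t) xi(t)^T dt
    return np.linalg.eigvalsh(M).min()

const_fd = np.ones_like(t)              # constant desired force
noise_fd = rng.standard_normal(t.size)  # noise-like desired force

print(pe_level(const_fd))   # 0: rank-deficient, not PE
print(pe_level(noise_fd))   # clearly positive: PE
```

The constant signal gives a rank-1 integral matrix (minimum eigenvalue zero), so the unknown parameters would not be identifiable from it alone.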
   The convergence speed of the adaptation can be further improved by the following modification:
    du(t)/dt = Γ ξ̃(t) ε(t)   and   dΓ/dt = −Γ ξ̃(t) ξ̃^T(t) Γ        (16.41)
                    instead of Equation (16.40).
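The difference between the gradient-type law (16.40) and the least-squares-type law (16.41) can be seen in a toy parameter-estimation analog, where the scalar error is ε(t) = ξ(t)^T (θ* − θ̂(t)). The gains alpha and Gamma0, the true parameters theta_star, and the regressor are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Toy analog of adaptation laws (16.40) and (16.41): estimate unknown
# parameters theta_star of a linear-in-parameters model.
# (theta_star, alpha, Gamma0, and the regressor are assumptions.)
theta_star = np.array([2.0, -1.0])
dt, T = 0.01, 20.0
alpha = 1.0                        # gradient gain, as in (16.40)

th_g = np.zeros(2)                 # estimate under the gradient law
th_ls = np.zeros(2)                # estimate under the least-squares law
Gamma = 10.0 * np.eye(2)           # adaptation gain matrix, as in (16.41)

for k in range(int(T / dt)):
    t = k * dt
    xi = np.array([np.sin(t), np.cos(0.7 * t)])     # PE regressor
    # (16.40)-type update: d theta/dt = alpha * xi * eps
    th_g += dt * alpha * xi * (xi @ (theta_star - th_g))
    # (16.41)-type update: d theta/dt = Gamma xi eps,
    #                      d Gamma/dt = -Gamma xi xi^T Gamma
    eps = xi @ (theta_star - th_ls)
    th_ls += dt * (Gamma @ xi) * eps
    Gamma += dt * (-Gamma @ np.outer(xi, xi) @ Gamma)

print(th_g, th_ls)   # both estimates approach theta_star
```

The shrinking gain matrix Γ weights early data like a recursive least-squares estimator, which is why (16.41) typically converges in far fewer seconds than the fixed-gain law, matching the t = 250 s versus t = 15 s time scales reported below.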
                    16.4.2.2 Application to a Robot’s Force Tracking Control
To evaluate the effectiveness of the above two D.O.F. adaptive tracking control, we performed computer simulations and robotic experiments.
   In the simulations, as shown in Figure 16.13, we set the robot's parameters as m_r = 1, d_r = 2, and k_r = 0.5, and the unknown dynamic environmental parameters as m_e = 1, d_e = 2, and k_e = 2, respectively, at the beginning of the simulation. The simulation results are given in Figure 16.14, where Figure 16.14(a) shows the result for the adaptation law (16.40), and (b) is the fast-convergence result obtained with (16.41). We change the environmental viscosity from d_e = 2 to 0.5 at the simulation time t = 250 s in Figure 16.14(a) and at t = 15 s in (b). In order for the feedforward controller (Q(u) in Figure 16.12) to converge to the inverse of the force-controlled object

                      m_e s^2 + d_e s + k_e
    P(s) = ------------------------------------------------ ,
            (m_e + m_r) s^2 + (d_e + d_r) s + (k_e + k_r)

we set the desired force f_d(t) as noise during the first 100 s and during the interval from 250 s to 350 s in Figure 16.14(a), and during the first 4 s in (b).
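The transfer function above can be checked numerically with the parameter values used in the simulation. This short sketch evaluates P(s) at low and high frequencies; for instance, the DC gain is k_e/(k_e + k_r) = 2/2.5 = 0.8.

```python
# Force-controlled object P(s), using the simulation parameters from the text.
m_r, d_r, k_r = 1.0, 2.0, 0.5     # robot parameters
m_e, d_e, k_e = 1.0, 2.0, 2.0     # environmental parameters

def P(s):
    """Evaluate P(s) at a (possibly complex) frequency s."""
    num = m_e * s**2 + d_e * s + k_e
    den = (m_e + m_r) * s**2 + (d_e + d_r) * s + (k_e + k_r)
    return num / den

print(P(0.0))            # DC gain: k_e / (k_e + k_r) = 0.8
print(abs(P(100j)))      # high-frequency gain -> m_e / (m_e + m_r) = 0.5
```

Since P(s) is biproper with both gains bounded away from zero, its inverse P^{-1}(s) is itself a proper, stable filter, which is what makes a feedforward realization Q(u) → P^{-1} feasible.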
   In both cases, since the force tracking error converges very fast, it is hard to distinguish between the desired and the reaction forces in these figures. Figure 16.14 also shows that the unknown parameters of the robot and environment converge to their true values, which means that the feedforward compensation realizes the exact inverse of the force-controlled object P(s). Therefore, even for rectangular desired forces, the control system can realize exactly