
4. Learning in Nonstationary Environments




are naturally subject to aging and the environment evolves per se; think, for example, of seasonality, day-night cycles, or unpredictable factors affecting a plant. This problem is rarely addressed in existing CPS or IoT applications, mostly due to the intrinsic difficulties and challenges that learning in nonstationary environments poses. In extreme cases, where the intensity of the change impairs the performance of the system, designers implement strategies permitting the application to be updated remotely. However, adaptation to the change in stationarity should be anticipated and managed as early as possible in order to meet the expected quality of service.
In the literature, the difficult problem of detecting time variance is generally transformed, through suitable transformations, into that of detecting changes in stationarity, with only a few studies addressing time variance directly at the level of the acquired datastream. As such, in the sequel we focus on the change-in-stationarity problem.
The current literature on learning in nonstationary environments proposes either passive strategies, with the application undergoing continuous adaptation (passive adaptation modality, or passive learning), for example, see Refs. [11,12], or active ones, with adaptation activated only once a trigger detects a change (active adaptation modality, or active learning) [11,13].
Let’s assume that the embedded application can be controlled through a vector of parameters θ, so that the application code is described by means of the differentiable family of functions f(x, θ), where x represents the input vector, for example, containing sensor data. At time t, the vector of parameters θ_t is associated with an algorithm f(x, θ_t). The notation becomes easier to follow if we imagine that f(x, θ_t) is a linear filter or a neural network. Differentiability is a convenient assumption that eases the presentation here, but it is not strictly required; in fact, we only require that the application can be updated on demand.
Every time we are provided with the output y_t (a measurement) in correspondence with input x, the discrepancy L(y_t, f(x, θ_t)) can be used to update the application. The function L(·,·) is any appropriate figure of merit for assessing such a discrepancy, for example, a mean square error or the Kullback-Leibler divergence. Passive and active approaches differ in how the application is updated, as detailed in the following subsections.
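To make the two modalities concrete, here is a minimal Python sketch contrasting the corresponding adaptation loops. It is an illustration only: the update rule, the change detector, and the datastream are hypothetical placeholders rather than constructs taken from Refs. [11,13].

    # Sketch of the two adaptation modalities. update() and detector are
    # hypothetical placeholders for an application-specific update rule
    # (e.g., Eq. 12.2 below) and a change-detection test, respectively.

    def passive_loop(theta, stream, update):
        # Passive modality: parameters are revised on every new sample,
        # whether or not the environment has actually changed.
        for x_t, y_t in stream:
            theta = update(theta, x_t, y_t)
        return theta

    def active_loop(theta, stream, update, detector):
        # Active modality: adaptation is activated only once the trigger
        # detects a change in stationarity.
        for x_t, y_t in stream:
            if detector.change_detected(x_t, y_t, theta):
                theta = update(theta, x_t, y_t)
        return theta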

4.1 PASSIVE ADAPTATION MODALITY
In passive solutions the parameters describing the application are updated every time new data become available, regardless of whether a change has actually occurred. The classic example is that of linear filters, which update the filter parameters online based on the value of the discrepancy L(y_t, f(x, θ_t)), mostly under a least-squares loss function. More generally, the updated parameters are

\[
\theta_{t+1} = \theta_t - \gamma \left. \frac{\partial L(x, \theta)}{\partial \theta} \right|_{\theta = \theta_t}
\tag{12.2}
\]
where γ is a small, application-dependent scalar value controlling the step taken along the gradient descent direction.
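As a concrete instance of Eq. (12.2), the sketch below (an illustration under assumed choices, not code from this chapter) uses the linear filter f(x, θ) = θᵀx and a squared-error discrepancy L(y_t, f) = (y_t - f)², for which the update reduces to the classic least-mean-squares rule.

    import numpy as np

    def passive_update(theta, x_t, y_t, gamma=0.01):
        # One step of Eq. (12.2) for f(x, theta) = theta @ x with
        # L(y, f) = (y - f)**2, whose gradient is dL/dtheta = -2*(y - f)*x.
        error = y_t - theta @ x_t                  # measurement vs. model output
        return theta + 2.0 * gamma * error * x_t   # theta - gamma * dL/dtheta

    # Toy usage: track a linear relationship sample by sample.
    rng = np.random.default_rng(0)
    theta = np.zeros(3)
    for t in range(1000):
        x_t = rng.standard_normal(3)
        y_t = x_t @ np.array([1.0, -0.5, 2.0]) + 0.01 * rng.standard_normal()
        theta = passive_update(theta, x_t, y_t)

Because the update fires on every sample, θ keeps tracking the (possibly drifting) relationship between x and y without any explicit change detection, which is exactly the passive modality.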