Figure 3.14 Architecture of RNN input and output. RNN, recurrent neural network. From https://commons.wikimedia.org/wiki/File:Recurrent_neural_network_unfold.svg.


• Hence, these three layers can be merged into a single recurrent layer, because all the hidden layers share the same weights and bias.
• The formula for calculating the current state is defined in Eq. (3.2) [18]:

    h_t = f(h_{t-1}, x_t)    (3.2)

where
    h_t denotes the current state,
    h_{t-1} denotes the previous state, and
    x_t denotes the input state.
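Unrolling Eq. (3.2) makes the recurrence explicit: for example, h_3 = f(h_2, x_3) = f(f(f(h_0, x_1), x_2), x_3), so the current state is a function of the entire input history.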
• The formula for applying the activation function (tanh) is defined in Eq. (3.3) [18]:

    h_t = tanh(W_{hh} h_{t-1} + W_{xh} x_t)    (3.3)

where
    W_{hh} is the weight at the recurrent neuron, and
    W_{xh} is the weight at the input neuron.
The formula for calculating the output is shown in Eq. (3.4) [32]:

    y_t = W_{hy} h_t    (3.4)

where
    y_t is the output, and
    W_{hy} is the weight at the output layer.
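To make Eqs. (3.2) to (3.4) concrete, the following is a minimal NumPy sketch of a vanilla RNN forward pass. It is illustrative only: the function name rnn_forward, the tensor shapes, and the random initialization are assumptions, not part of the original text, and bias terms are omitted to match Eqs. (3.3) and (3.4).

import numpy as np

def rnn_forward(xs, W_xh, W_hh, W_hy, h0):
    # Vanilla RNN over a sequence, following Eqs. (3.2)-(3.4).
    # xs:   (T, input_size) sequence of input vectors x_t
    # W_xh: (hidden_size, input_size)  -- weight at the input neuron
    # W_hh: (hidden_size, hidden_size) -- weight at the recurrent neuron
    # W_hy: (output_size, hidden_size) -- weight at the output layer
    # h0:   (hidden_size,) initial hidden state
    h = h0
    hs, ys = [], []
    for x in xs:                          # the same weights are reused at every time step
        h = np.tanh(W_hh @ h + W_xh @ x)  # Eq. (3.3): h_t = tanh(W_hh h_{t-1} + W_xh x_t)
        y = W_hy @ h                      # Eq. (3.4): y_t = W_hy h_t
        hs.append(h)
        ys.append(y)
    return np.array(hs), np.array(ys)

# Example usage with small random weights (illustrative sizes only).
rng = np.random.default_rng(0)
T, n_in, n_hid, n_out = 5, 3, 4, 2
xs = rng.normal(size=(T, n_in))
W_xh = 0.1 * rng.normal(size=(n_hid, n_in))
W_hh = 0.1 * rng.normal(size=(n_hid, n_hid))
W_hy = 0.1 * rng.normal(size=(n_out, n_hid))
hs, ys = rnn_forward(xs, W_xh, W_hh, W_hy, h0=np.zeros(n_hid))
print(hs.shape, ys.shape)  # (5, 4) (5, 2)

Note that the weight sharing described in the first bullet above appears directly in the loop: the same W_hh and W_xh are applied at every time step.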
Advantages of RNN:
1. An RNN remembers information across time steps, and this ability to retain previous inputs is what makes it useful for time-series prediction. The best-known architecture built around this capability is long short-term memory (LSTM).
2. RNNs are often combined with convolutional layers to extend the effective pixel neighborhood.