




There are no intralayer connections in an RBM. The hidden units represent features that capture the correlations present in the data. The two layers are connected by symmetric weights W, and every unit in each layer is connected to every unit in the neighboring layer, as the sketch below illustrates.
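As a concrete illustration of this bipartite structure, the following NumPy fragment links a visible and a hidden layer through a single symmetric weight matrix W. The layer sizes, biases, and random initialization are assumptions made only for the example, not values from the text.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 6, 4                  # hypothetical layer sizes

    W = rng.normal(0, 0.1, size=(n_visible, n_hidden))  # symmetric weights W
    b = np.zeros(n_visible)                     # visible biases
    c = np.zeros(n_hidden)                      # hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    v = rng.integers(0, 2, size=n_visible).astype(float)  # binary visible vector

    # Every visible unit connects to every hidden unit, and there are no
    # intralayer connections, so both conditionals factorize over units:
    p_h_given_v = sigmoid(v @ W + c)            # P(h_j = 1 | v)
    p_v_given_h = sigmoid(W @ p_h_given_v + b)  # same W used in both directions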

               3.1.2 Working of deep belief network
Deep belief networks are pretrained with a greedy, layer-by-layer algorithm. This algorithm learns the top-down generative weights one layer at a time; these weights determine how the variables in one layer depend on the variables in the layer above [14]. In a DBN, we execute several steps of Gibbs sampling on the top two hidden layers. This stage essentially draws a sample from the RBM formed by the two topmost hidden layers, as sketched below.
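A hedged sketch of this Gibbs sampling stage follows: the two topmost layers are sampled alternately through their shared weight matrix. The names W_top, the layer sizes, and the number of steps are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    n_pen, n_top = 8, 5                     # penultimate / top layer sizes (assumed)
    W_top = rng.normal(0, 0.1, size=(n_pen, n_top))

    h_pen = rng.integers(0, 2, size=n_pen).astype(float)  # arbitrary start state

    for _ in range(100):                    # several Gibbs steps
        p_top = sigmoid(h_pen @ W_top)      # P(top layer | penultimate layer)
        h_top = (rng.random(n_top) < p_top).astype(float)
        p_pen = sigmoid(W_top @ h_top)      # P(penultimate | top), same weights
        h_pen = (rng.random(n_pen) < p_pen).astype(float)
    # (h_pen, h_top) is now approximately a sample from the top-level RBM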
A single pass of ancestral sampling through the rest of the model then draws a sample from all the visible units. The values of the latent variables in every layer can be inferred by a single, bottom-up pass. Greedy pretraining begins with an observed data vector in the bottom layer and then adjusts the generative weights in the reverse direction during fine-tuning; a sketch of this layer-by-layer procedure follows.
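The fragment below sketches the greedy layer-by-layer loop: each RBM is trained on the bottom-up activations of the layer beneath it. Here train_rbm is a hypothetical stand-in for an actual RBM training routine (e.g., contrastive divergence); the data shapes and layer sizes are likewise assumed.

    import numpy as np

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def train_rbm(data, n_hidden):
        # Hypothetical stand-in: a real version would fit an RBM to `data`
        # (e.g., with contrastive divergence) and return its weight matrix.
        rng = np.random.default_rng(0)
        return rng.normal(0, 0.1, size=(data.shape[1], n_hidden))

    def pretrain_dbn(X, layer_sizes):
        weights, layer_input = [], X
        for n_hidden in layer_sizes:
            W = train_rbm(layer_input, n_hidden)   # greedy: one layer at a time
            weights.append(W)
            # single bottom-up pass: infer this layer's units, feed them upward
            layer_input = sigmoid(layer_input @ W)
        return weights

    X = np.random.default_rng(2).random((32, 16))  # toy observed data vectors
    weights = pretrain_dbn(X, [12, 8, 4])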

               3.2 Convolutional neural network
A CNN is a deep learning algorithm that takes an input image, assigns importance, in the form of learnable weights and biases, to various features in the image, and uses them to distinguish one image from another. The preprocessing required in a ConvNet is much lower than in other classification algorithms: in primitive methods the filters are hand-engineered, whereas ConvNets, given enough training, learn these filters/characteristics themselves [15]. The process of a CNN is depicted in Fig. 3.6 [29].
A digital image is composed of pixels (picture elements) represented in matrix form. The image matrix needs to be flattened (e.g., a 3 × 3 image matrix is mapped into a 9 × 1 vector) and fed to a multilevel perceptron for classification purposes. The image flattening is shown in Fig. 3.7 and sketched below.
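A minimal NumPy sketch of this flattening step follows; the pixel values are arbitrary toy data.

    import numpy as np

    image = np.arange(9).reshape(3, 3)   # toy 3 x 3 image matrix
    flat = image.reshape(9, 1)           # mapped into a 9 x 1 column vector
    print(flat.shape)                    # (9, 1)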
A ConvNet can successfully capture the spatial and temporal dependencies in an acquired image through the application of relevant filters. The architecture fits the image data set well because of the reduction in the number of parameters and the reuse of the same weights across the image, as the sketch below shows. In other words, the network can be trained to understand the sophistication of the image better [16].
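The following sketch illustrates how a convolutional filter reuses one small set of weights at every spatial location (a "valid" convolution with stride 1). The kernel values are an arbitrary example (a simple vertical-edge detector), and the loop-based implementation is for clarity, not efficiency.

    import numpy as np

    def conv2d(image, kernel):
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                # the same 3 x 3 weights are applied at every location,
                # which is why a ConvNet needs far fewer parameters
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out

    image = np.random.default_rng(3).random((5, 5))
    kernel = np.array([[1, 0, -1],
                       [1, 0, -1],
                       [1, 0, -1]], dtype=float)
    feature_map = conv2d(image, kernel)   # shape (3, 3)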