[Figure 4.2 schematic: image input → convolution + ReLU → pooling → convolution + ReLU → pooling (feature learning), followed by flatten → fully connected → softmax → class 1, class 2, …, class n (classification).]
             FIG. 4.2
             Sample convolution network architecture.
Data from Mathworks.com, Convolutional Neural Network, 2018. Available from: https://www.mathworks.com/solutions/deep-learning/convolutional-neural-network.html (accessed 10 June 2018).
In this work, four pretrained ConvNet architectures were used: ResNet50 [27], Inception V3 [28], InceptionResNetV2 [29], and Xception [30], with their default parameter settings and average pooling, as implemented in the Keras [31] deep learning library.
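The following is a minimal sketch, under the assumption of a standard Keras 2.x installation with the applications module, of how the four pretrained architectures might be instantiated with ImageNet weights, the top classifier removed, and global average pooling; it is illustrative rather than the authors' exact setup.

```python
# Sketch: instantiate the four pretrained ConvNets as pooled feature extractors.
from keras.applications.resnet50 import ResNet50
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.applications.xception import Xception

extractors = {
    "ResNet50": ResNet50(weights="imagenet", include_top=False, pooling="avg"),
    "InceptionV3": InceptionV3(weights="imagenet", include_top=False, pooling="avg"),
    "InceptionResNetV2": InceptionResNetV2(weights="imagenet", include_top=False, pooling="avg"),
    "Xception": Xception(weights="imagenet", include_top=False, pooling="avg"),
}

for name, model in extractors.items():
    # Each model now outputs a pooled feature vector, e.g. (None, 2048) for ResNet50.
    print(name, model.output_shape)
```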
4.3.1.1 Transfer learning and convolution networks
Transfer learning is a machine learning method that allows a model trained on one task to be reused for another task. The convolution architectures released by different organizations are trained on the ImageNet database, which contains 1.2 million images from 1000 categories, and are therefore very large; training such architectures from scratch on custom datasets is usually not practical because the datasets are not large enough. So, instead of training the whole network, these pretrained networks are used. There are several ways of using a pretrained convolution network, that is, of doing transfer learning with a convolution network. One of them is to use the pretrained convolution network as a fixed feature extractor [32].

4.3.1.2 Convolution networks as fixed feature extractors
To use a convolution network as a feature extractor, remove the last fully connected layer of a pretrained convolution network and then use the rest of the architecture as a fixed feature extractor for the custom dataset. The extracted features can then be used for other purposes, such as training a separate classifier [32] (Fig. 4.3).
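As a minimal sketch of this idea in Keras, the snippet below uses a pretrained ResNet50 with the fully connected top removed and global average pooling as a fixed feature extractor; the file name 'sample.jpg' is only a placeholder for an image from the custom dataset.

```python
# Sketch: fixed feature extraction with a pretrained ResNet50 in Keras.
import numpy as np
from keras.preprocessing import image
from keras.applications.resnet50 import ResNet50, preprocess_input

# Removing the fully connected top and pooling the last convolutional maps
# turns the pretrained network into a fixed feature extractor.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

# Load and preprocess one image from the custom dataset (placeholder path).
img = image.load_img("sample.jpg", target_size=(224, 224))
x = np.expand_dims(image.img_to_array(img), axis=0)
x = preprocess_input(x)

# A 2048-dimensional feature vector that can be fed to any downstream classifier.
features = extractor.predict(x)
print(features.shape)  # (1, 2048)
```

The same pattern applies to the other three architectures; only the preprocessing function and default input size change.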

4.3.1.3 Dimensionality reduction and principal component analysis (PCA)
It is difficult to train a learning algorithm on high-dimensional data; this is where dimensionality reduction becomes important. Dimensionality reduction is a method of reducing the original dimension of the data to a lower dimension without much loss of information. Dimensionality reduction techniques have two components: feature selection and feature extraction. Feature selection selects a subset of the original attributes according to specified criteria, while feature extraction projects the data into a lower-dimensional space, that is, it forms a new dataset of derived attributes [33]. PCA is one of the popular dimensionality reduction algorithms that uses the orthogonal