representation of input data can be exploited to derive better features to train
learning models. The underlying assumption is that observed data result from the
contribution of multiple factors interacting at different levels. When more than one
view is available, DNNs can be used to learn latent multimodal relationships.
5.1 DEEP LEARNING APPLICATION TO PREDICT PATIENT SURVIVAL
The identification of stable and robust patient survival subgroups can improve
the ability to predict a specific prognosis. Many of the proposed machine-learning
techniques benefit from the availability of multimodality measures. One of the
main problems in data integration is that features from different
views might not be directly comparable. Recently, Chaudhary et al. applied deep
learning autoencoders to integrate multimodal omics data, in an early integration
manner, with the purpose of extracting deep meta features to be used in further ana-
lyses [29]. Indeed, an autoencoder is an unsupervised feed-forward neural network
that is able to learn a representation of the data by transforming them through
successive hidden layers [30]. In particular, they performed a study to identify a subgroup of
patients affected by hepatocellular carcinoma (HCC). The analyses were performed
on 360 samples obtained from TCGA for which RNASeq, miRNASeq,
and DNA methylation data were available. After the preprocessing and normalization
of each single view, they concatenated the data and applied a deep autoencoder to
extract the new features. They implemented an autoencoder with three hidden layers
(with 500, 100, and 500 nodes, respectively); the activation function between each
pair of layers is the tanh, and the objective function to be minimized is the
log-loss error between the original and the reconstructed data. Once the autoencoder
was trained, they obtained 100 new features from the bottleneck layer.
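As a rough illustration of this step, the following sketch builds a three-hidden-layer autoencoder of this shape in Keras. Only the 500-100-500 layout, the tanh activations, and the log-loss objective come from the description above; the input dimension, the sigmoid output layer, and the training settings are assumptions added for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features = 15000  # hypothetical size of the concatenated RNASeq + miRNASeq + methylation vector

inputs = keras.Input(shape=(n_features,))
h1 = layers.Dense(500, activation="tanh")(inputs)
bottleneck = layers.Dense(100, activation="tanh")(h1)         # the 100 deep features
h2 = layers.Dense(500, activation="tanh")(bottleneck)
outputs = layers.Dense(n_features, activation="sigmoid")(h2)  # assumed sigmoid output so log-loss applies

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")  # log-loss reconstruction error

# X: (n_samples, n_features) concatenated views, scaled to [0, 1]
# autoencoder.fit(X, X, epochs=10, batch_size=32)

# After training, the bottleneck activations serve as the new features.
encoder = keras.Model(inputs, bottleneck)
# deep_features = encoder.predict(X)  # shape: (n_samples, 100)
```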
These features were used to run k-means clustering to obtain the patient subgroups
and to perform survival analysis. The authors demonstrated the effectiveness of the
dimensionality reduction performed with the autoencoder by comparing the survival
analysis with that obtained after a classical dimensionality reduction with PCA and
with no dimensionality reduction at all. They showed that the survival curves
obtained in the latter two cases were not significantly separated.
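The downstream analysis can be sketched in the same hypothetical setting: k-means on the deep features to define subgroups, followed by a Kaplan-Meier estimate and a log-rank test of their separation. The synthetic arrays, the choice of two clusters, and the variable names are illustrative assumptions; scikit-learn and lifelines stand in here for whatever tooling the authors actually used.

```python
import numpy as np
from sklearn.cluster import KMeans
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic stand-ins for illustration only: in practice, deep_features would be
# the encoder output above, and survival_time/event would come from clinical data.
rng = np.random.default_rng(0)
deep_features = rng.normal(size=(360, 100))        # (n_samples, 100) bottleneck features
survival_time = rng.exponential(1000.0, size=360)  # follow-up time (e.g., days)
event = rng.integers(0, 2, size=360)               # 1 = death observed, 0 = censored

# Cluster patients in the deep-feature space (two subgroups assumed here).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(deep_features)
g0, g1 = labels == 0, labels == 1

# Test whether the subgroups' survival curves are significantly separated.
result = logrank_test(survival_time[g0], survival_time[g1],
                      event_observed_A=event[g0], event_observed_B=event[g1])
print("log-rank p-value:", result.p_value)

# Kaplan-Meier curve per subgroup.
kmf = KaplanMeierFitter()
for mask, name in [(g0, "subgroup 1"), (g1, "subgroup 2")]:
    kmf.fit(survival_time[mask], event_observed=event[mask], label=name)
    kmf.plot_survival_function()
```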
5.2 MULTIMODAL NEUROIMAGING FEATURE LEARNING WITH DEEP LEARNING
Taking advantage of the multiple modalities available in neuroimaging, deep
architectures have been used to discover complex latent patterns emerging from
the integration of multiple views. In Ref. [31], stacked autoencoders are used to
learn high-level features from the concatenated input of MRI and PET data. The
extracted features are then used to train a classifier for the diagnosis of Alzheimer
disease. Results showed that this approach outperformed traditional methods
and shallow architectures.
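A minimal sketch of this kind of pipeline follows, under assumed layer sizes, per-view feature counts, and variable names (none of which come from Ref. [31]): a stacked autoencoder is pretrained to reconstruct the concatenated MRI and PET features, and its encoder is then reused as the front end of a diagnosis classifier. Stacked autoencoders are typically pretrained greedily, one layer at a time; the sketch compresses that into a single reconstruction objective for brevity.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_mri, n_pet = 93, 93                         # hypothetical per-view feature counts
inputs = keras.Input(shape=(n_mri + n_pet,))  # early integration by concatenation

# Encoder/decoder of the stacked autoencoder (layer sizes are assumptions).
h = layers.Dense(200, activation="sigmoid")(inputs)
code = layers.Dense(50, activation="sigmoid")(h)
decoded = layers.Dense(200, activation="sigmoid")(code)
recon = layers.Dense(n_mri + n_pet)(decoded)

# 1) Unsupervised pretraining: reconstruct the concatenated input.
sae = keras.Model(inputs, recon)
sae.compile(optimizer="adam", loss="mse")
# sae.fit(X, X, epochs=..., batch_size=...)   # X: concatenated MRI + PET features

# 2) Supervised stage: reuse the learned encoder under a diagnosis head.
diagnosis = layers.Dense(2, activation="softmax")(code)
classifier = keras.Model(inputs, diagnosis)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# classifier.fit(X, y, epochs=...)            # y: 0 = control, 1 = patient
```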
Similarly, in Ref. [32], MRI and PET are used in combination to derive a shared
feature representation using a restricted Boltzmann machine,