to the neural network-based synthesis that mitigates overfitting. Unlike the
VAE-NN, GAN-NN, and VAEc-NN architectures, the LSTM architecture
treats the NMR T2 synthesis problem as a sequence transformation task (similar to
many-to-many language translation), wherein certain subsamples of the 10
inversion-derived logs are used for synthesizing amplitudes for certain
subsamples of the 64 T2 bins. The LSTM architecture learns to relate the T2
spectra to sequential variations across various combinations of the
inversion-derived logs. All the layers in the four neural networks are fully
connected, except the convolution and max-pooling layers in the VAEc and the
recurrent layer in the LSTM. A fully connected layer connects every neuron in
one layer to every neuron in the previous layer.
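To make this formulation concrete, the sketch below wires up such a
many-to-many transformation in PyTorch: an encoder LSTM reads the 10
inversion-derived logs as a length-10 sequence, and a decoder LSTM unrolls
for 64 steps to emit one amplitude per T2 bin. The class name, hidden size,
and repeat-and-decode strategy are illustrative assumptions, not the exact
design used in this chapter.

import torch
import torch.nn as nn

class LogToT2LSTM(nn.Module):
    # Hypothetical sequence-to-sequence sketch: 10 inversion-derived
    # logs in, 64 NMR T2 bin amplitudes out. Sizes are illustrative.
    def __init__(self, hidden=32, n_bins=64):
        super().__init__()
        self.n_bins = n_bins
        # Encoder reads the 10 logs as a length-10 sequence (1 feature per step).
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        # Decoder unrolls for 64 steps, one per T2 bin.
        self.decoder = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, logs):  # logs: (batch, 10)
        _, (h, c) = self.encoder(logs.unsqueeze(-1))
        # Repeat the encoder summary once per T2 bin, then decode.
        rep = h[-1].unsqueeze(1).repeat(1, self.n_bins, 1)
        out, _ = self.decoder(rep, (h, c))
        return self.head(out).squeeze(-1)  # (batch, 64) T2 amplitudes

model = LogToT2LSTM()
t2 = model(torch.randn(8, 10))  # 8 depth samples -> (8, 64) T2 spectra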
4.2 VAE-NN architecture, training, and testing
VAE-NN stands for variational autoencoder (VAE) assisted neural network. An
autoencoder is a type of deep neural network that is trained to reproduce its
high-dimensional input (in our case, 64-dimensional NMR T2) by
implementing an encoder network followed by a decoder network [12]. A
variational autoencoder (VAE) provides a probabilistic manner for
describing an observation in latent space, such that the encoder describes a
probability distribution for each latent attribute. On the encoder side, a
neural network learns to project the high-dimensional input onto a low-
dimensional latent space (in our case, a two-dimensional space). Following
that, a decoder neural network learns to decode a vector in the low-
dimensional latent space to reproduce the high-dimensional input. With this
bottleneck structure, an autoencoder learns to extract the most important
information as the input passes through the latent layers. Therefore, an
autoencoder is an effective way to project data from a high dimension to a
lower dimension by extracting the most dominant features and
characteristics. A variational autoencoder is a specific form of autoencoder,
wherein the encoding network is constrained to generate latent vectors that
roughly follow a unit Gaussian distribution [13]. In doing so, a trained
decoder can be later used to independently synthesize data (similar to the
training data) by using a latent vector sampled from a unit Gaussian
distribution. More details about the latent layer are provided in the
subsequent description of the VAEc architecture in Section 4.4. The VAE
arranges learned features with similar shapes close to one another in the
projected latent space, thereby reducing the loss in reproducing the input.
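A minimal sketch of such a variational autoencoder, assuming PyTorch and
illustrative hidden sizes (only the 64 T2 bins, the two-dimensional latent
space, and the unit Gaussian constraint come from the text), is given below.

import torch
import torch.nn as nn
import torch.nn.functional as F

class T2VAE(nn.Module):
    # Hypothetical VAE sketch: 64-dimensional NMR T2 input,
    # 2-dimensional Gaussian latent space.
    def __init__(self, hidden=32, latent=2):
        super().__init__()
        self.enc = nn.Linear(64, hidden)
        self.mu = nn.Linear(hidden, latent)      # mean of latent Gaussian
        self.logvar = nn.Linear(hidden, latent)  # log-variance of latent Gaussian
        self.dec = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, 64))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2) differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus the KL divergence to the unit Gaussian
    # prior, which constrains the latent vectors to roughly N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return F.mse_loss(recon, x, reduction="sum") + kl

Because the KL term pulls the latent codes toward a unit Gaussian, a trained
decoder can later synthesize new T2-like spectra from a latent vector sampled
from N(0, I), for example model.dec(torch.randn(1, 2)).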
As mentioned earlier, the synthesis of NMR T2 distributions using VAE-NN
requires a two-stage training process prior to testing and deploying the
neural network (Fig. 7.2). In the first stage of training the VAE-NN, the
VAE is trained to reconstruct the NMR T2 in the training dataset by
extracting the dominant features of the NMR T2 distribution. The encoder
network has two fully connected layers: a 64-dimensional input layer followed