Page 225 - Machine Learning for Subsurface Characterization
Deep neural network architectures (Chapter 7)
FIG. 7.3 Schematic for testing or deploying the VAE-NN model.
After the VAE is trained, in the second stage of training the VAE-NN, a four-
layered fully connected neural network followed by the frozen pretrained decoder
learns to relate the three formation fluid saturation logs and seven mineral content
logs to the NMR T2 distribution. For the second stage of training, the trained
decoder (the second half of the VAE described in the previous paragraph) is
frozen, and a four-layered neural network is connected before the frozen
decoder. Only the four-layered neural network undergoes weight updates in
the second stage of training. The four-layered neural network comprises a
10-dimensional input layer, two 30-dimensional hidden layers, and a final
6-dimensional hidden layer attached to the frozen decoder. In doing so, the
four-layered neural network learns to transform the 10 inversion-derived logs
into the dominant NMR T2 features extracted by the encoder network in the
first stage of training. Overall, the two-stage training process ensures
that the VAE-NN will generate similar NMR T2 distributions for formations
with similar fluid saturations and mineral contents. After the training process
is complete, for purposes of testing and deployment, the trained four-layered
neural network followed by the frozen decoder synthesizes T2 distributions of
subsurface formations by processing the seven formation mineral content logs
and three fluid saturation logs (Fig. 7.3).
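The second-stage setup described above can be sketched in code. The following is a minimal PyTorch sketch, not the authors' implementation: the internal layer widths of the pretrained decoder (and its activations) are hypothetical placeholders, while the 10-30-30-6 shape of the trainable network and the 64-dimensional T2 output follow the text. The key mechanics are freezing the decoder's weights and giving the optimizer only the four-layered network's parameters.

```python
import torch
import torch.nn as nn

LOG_DIM = 10     # 3 fluid saturation logs + 7 mineral content logs
LATENT_DIM = 6   # dominant NMR T2 features learned by the VAE encoder
T2_DIM = 64      # discretized NMR T2 distribution

# Stand-in for the pretrained VAE decoder (latent 6 -> T2 64).
# Its internal width (32) and activations are hypothetical.
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, T2_DIM), nn.Softplus(),
)
# Freeze the decoder: only the new network updates in stage two.
for p in decoder.parameters():
    p.requires_grad = False

# Four-layered network: 10-d input, two 30-d hidden layers,
# and a final 6-d hidden layer feeding the frozen decoder.
four_layer_nn = nn.Sequential(
    nn.Linear(LOG_DIM, 30), nn.ReLU(),
    nn.Linear(30, 30), nn.ReLU(),
    nn.Linear(30, LATENT_DIM),
)

vae_nn = nn.Sequential(four_layer_nn, decoder)
optimizer = torch.optim.Adam(four_layer_nn.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(logs, t2_target):
    """One stage-two update: gradients flow through the frozen
    decoder, but only the four-layered network's weights change."""
    optimizer.zero_grad()
    loss = loss_fn(vae_nn(logs), t2_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the decoder is frozen, the four-layered network is forced to map the 10 logs onto the same 6-dimensional feature space the encoder discovered in stage one, which is what makes the synthesized T2 distributions consistent for formations with similar saturations and mineralogy.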
4.3 GAN-NN architecture, training, and testing
GAN-NN stands for the generative adversarial network (GAN) assisted neural
network. Like the VAE-NN, the GAN-NN also undergoes two-stage training,
such that the first stage focuses on training the GAN and the second stage
focuses on training a three-layered neural network followed by the frozen
pretrained generator. GANs have been successfully applied to image
generation [15] and text-to-image synthesis [16]. In our study, the GAN is a
deep neural network that learns to generate the 64-dimensional NMR T2
distribution through competition between a generator network (G) and a
discriminator network (D). The generator network G learns to upscale
(transform) random noise into a synthetic T2 distribution that closely
resembles the original 64-dimensional T2 distribution, whereas the
discriminator network D learns to correctly distinguish between the synthetic
and the original T2 distributions.
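The generator-discriminator competition can be sketched as follows. This is a minimal PyTorch sketch under stated assumptions, not the chapter's implementation: the noise dimension, layer widths, learning rates, and the standard binary cross-entropy GAN objective are all hypothetical choices; only the 64-dimensional T2 output comes from the text.

```python
import torch
import torch.nn as nn

T2_DIM = 64      # length of the NMR T2 distribution (from the text)
NOISE_DIM = 16   # latent noise dimension (hypothetical choice)

# Generator G: random noise -> synthetic 64-d T2 distribution.
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 32), nn.ReLU(),
    nn.Linear(32, T2_DIM), nn.Softplus(),
)
# Discriminator D: T2 distribution -> probability it is an original.
D = nn.Sequential(
    nn.Linear(T2_DIM, 32), nn.LeakyReLU(0.2),
    nn.Linear(32, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(real_t2):
    """One adversarial round: D learns to separate original from
    synthetic T2 distributions, then G learns to fool D."""
    n = real_t2.size(0)
    fake_t2 = G(torch.randn(n, NOISE_DIM))

    # Discriminator update: label originals 1, synthetics 0.
    # detach() keeps this update from flowing back into G.
    opt_d.zero_grad()
    d_loss = bce(D(real_t2), torch.ones(n, 1)) + \
             bce(D(fake_t2.detach()), torch.zeros(n, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: push D to label synthetics as originals.
    opt_g.zero_grad()
    g_loss = bce(D(fake_t2), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Training alternates the two updates until the generator's synthetic T2 distributions are realistic enough that the discriminator can no longer tell them apart from measured ones; the pretrained generator is then frozen for the second stage, analogous to the frozen decoder in the VAE-NN.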

