FIG. 7.4 Schematic for training the GAN-NN architecture. The number of hidden layers in each network and the number of neurons in each hidden layer are determined by hyperparameter optimization.


network is connected before the frozen generator. Only the three-layered neural network undergoes weight updates in the second stage of training. The three-layered neural network comprises a 10-dimensional input layer, an 8-dimensional hidden layer, and a final 2-dimensional hidden layer attached to the frozen generator. In doing so, the three-layered neural network learns to transform the 10 inversion-derived logs into the dominant NMR T2 features extracted by the generator network in the first stage of the training process. Overall, the two-stage training process ensures that the GAN-NN generates similar NMR T2 distributions for formations with similar fluid saturations and mineral contents. After the training process is complete, for purposes of testing and deployment, the trained three-layered neural network followed by the frozen generator synthesizes the T2 distributions of subsurface formations by processing the seven formation mineral-content logs and the three fluid-saturation logs (similar to Fig. 7.3). Unlike those of the VAE-NN, the T2 distributions generated by the GAN-NN are not smooth. Therefore, we add a Gaussian-fitting step to smooth the synthetic NMR T2 distributions generated by the GAN-NN.
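
The second-stage training described above can be outlined in code. The following is a minimal sketch, assuming a PyTorch implementation; the layer names, optimizer, loss function, and training loop are illustrative assumptions rather than the exact implementation used in this study. It shows the three-layered network (10-dimensional input, 8-neuron hidden layer, 2-dimensional output) being trained against the frozen generator so that only the three-layered network receives weight updates.

import torch
import torch.nn as nn

class ThreeLayerMapper(nn.Module):
    # 10 inversion-derived logs -> 8-neuron hidden layer -> 2-dimensional
    # feature vector fed to the frozen generator.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(10, 8),
            nn.ReLU(),
            nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.net(x)

def train_second_stage(mapper, generator, logs, t2_targets, epochs=200):
    # Freeze the generator learned in the first (adversarial) training stage.
    for p in generator.parameters():
        p.requires_grad = False
    generator.eval()

    # Only the mapper's parameters are passed to the optimizer.
    optimizer = torch.optim.Adam(mapper.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        features = mapper(logs)        # (N, 2) dominant T2 features
        t2_pred = generator(features)  # (N, 64) synthetic T2 distribution
        loss = loss_fn(t2_pred, t2_targets)
        loss.backward()                # gradients flow only to the mapper
        optimizer.step()
    return mapper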

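The Gaussian-fitting step can likewise be sketched, for example, with SciPy's curve_fit, as below. The use of two Gaussian components in log10(T2) space, the initial parameter guesses, and the function names are illustrative assumptions; the actual number of components and fitting details may differ.

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(log_t2, a1, mu1, s1, a2, mu2, s2):
    # Sum of two Gaussian bells over the log10(T2) axis.
    g1 = a1 * np.exp(-0.5 * ((log_t2 - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((log_t2 - mu2) / s2) ** 2)
    return g1 + g2

def smooth_t2(t2_bins, t2_raw):
    # Replace the non-smooth GAN-NN output with its best-fit Gaussian mixture.
    log_t2 = np.log10(t2_bins)                     # 64 log-spaced T2 bins
    p0 = [t2_raw.max(), log_t2.mean() - 1.0, 0.5,  # illustrative initial guesses
          t2_raw.max(), log_t2.mean() + 1.0, 0.5]
    params, _ = curve_fit(two_gaussians, log_t2, t2_raw, p0=p0, maxfev=10000)
    return two_gaussians(log_t2, *params)
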
             4.4 VAEc-NN architecture, training, and testing

VAEc-NN stands for variational autoencoder with convolutional layer (VAEc) assisted neural network. As explained in Section 4.2, an autoencoder is a type of deep neural network that is trained to reproduce its high-dimensional input (in our case, the 64-dimensional NMR T2 distribution) by implementing an encoder network followed by a decoder network [12]. VAEc-NN implements a convolutional layer in the encoder network of the VAEc to better extract spatial features
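
As a rough illustration of such an encoder, the following sketch (PyTorch assumed) applies a 1-D convolutional layer across the 64 T2 bins before dense layers output the mean and log-variance of the latent code; the kernel size, channel count, and latent dimension are illustrative assumptions, not the architecture selected by hyperparameter optimization.

import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self, latent_dim=3):
        super().__init__()
        # Convolution over the 64 T2 bins extracts local (spatial) features.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=8, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.flatten = nn.Flatten()
        self.fc_mu = nn.Linear(8 * 64, latent_dim)
        self.fc_logvar = nn.Linear(8 * 64, latent_dim)

    def forward(self, t2):
        # t2: (batch, 64) -> add a channel axis for Conv1d
        h = self.flatten(self.conv(t2.unsqueeze(1)))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample the latent code
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar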