
230   Machine learning for subsurface characterization


acquire” input logs and the NMR T2 distribution. This two-step training process
increases the robustness of the log synthesis. The fourth model is a long short-
term memory (LSTM) network that processes the easy-to-acquire logs as a
sequence and learns to generate the corresponding sequence of NMR T2
distributions.
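The sequence-to-sequence idea behind the fourth model can be sketched with a minimal recurrent forward pass in numpy. A vanilla RNN cell stands in here for the LSTM cell (an LSTM adds input, forget, and output gates on top of this recurrence), and all sizes, names, and the synthetic data are illustrative assumptions, not the chapter's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

n_depths, n_logs, n_hidden, n_t2_bins = 20, 5, 16, 64  # illustrative sizes

# Randomly initialized parameters of a vanilla RNN: a simplified stand-in
# for the LSTM cell used by the fourth model.
W_xh = rng.normal(scale=0.1, size=(n_logs, n_hidden))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_hy = rng.normal(scale=0.1, size=(n_hidden, n_t2_bins))

logs = rng.normal(size=(n_depths, n_logs))  # easy-to-acquire logs vs. depth

h = np.zeros(n_hidden)
t2_sequence = []
for x in logs:                          # walk the log sequence depth by depth
    h = np.tanh(x @ W_xh + h @ W_hh)    # hidden state carries depth context
    y = np.exp(h @ W_hy)                # positive amplitude per T2 bin
    t2_sequence.append(y / y.sum())     # normalize like a T2 distribution
t2_sequence = np.array(t2_sequence)     # one T2 distribution per depth

print(t2_sequence.shape)  # (20, 64)
```

The point of the recurrence is that the prediction at each depth depends not only on the logs at that depth but also on the hidden state accumulated from shallower depths, which is what "processing the logs as a sequence" buys over a depth-by-depth regression.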


            4.1 Variational autoencoder assisted neural network
VAE-NN is trained in two steps. The first step uses the variational autoencoder
(VAE), which comprises an encoder network and a decoder network. The VAE
learns to abstract and reconstruct the NMR T2 distribution. The encoder
projects each NMR T2 distribution in the training dataset to a 2D or 3D latent
space, and the decoder takes the encoded latent vector as input and decodes it
back into an NMR T2 distribution. The goal of the first step is to reproduce
the NMR T2 distribution. Because generating the latent variable involves
sampling from a Gaussian distribution, the VAE is trained to project NMR T2
distributions with similar features to nearby regions of the latent space,
which reduces the reconstruction cost. After the first step of training, the
decoder network has learned to generate a typical NMR response by processing
the latent vectors.
   The decoder trained in the first step is then frozen to preserve the VAE’s
learning about the NMR T2 distributions in the training dataset. In the
second step of training, a simple fully connected ANN with 3–5 hidden
layers is connected to the trained decoder (Fig. 8.4). The “easy-to-acquire”
input logs are fed to the ANN, which is trained to generate the latent
vector for the decoder network; the decoder then decodes the latent vector
into an NMR T2 distribution. In doing so, the second step of training
relates the easy-to-acquire logs to the NMR T2 distribution. Fig. 8.5
illustrates the manifold learned by the VAE in the first step of training. It
is essentially an abstraction learned by the decoder network from the NMR T2
distributions in the training set. The gradual, smooth variation of the NMR T2
across each subplot is what the VAE learned from the training set in the
first step of training. This learned manifold represents the essential features
of the NMR T2 distribution.
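As a concrete illustration of the two-step scheme, the following numpy sketch uses a deterministic linear autoencoder as a simplified stand-in for the VAE: step one trains the encoder/decoder pair to reconstruct synthetic T2-like curves, and step two freezes the decoder and trains only a map from synthetic "logs" to latent vectors. The data, dimensions, and learning rates are illustrative assumptions, not the chapter's actual networks.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, m = 200, 32, 2, 4   # samples, T2 bins, latent dims, input logs

# Synthetic "NMR T2 distributions": Gaussian bumps with varying position.
centers = rng.uniform(8, 24, size=n)
bins = np.arange(d)
T2 = np.exp(-0.5 * ((bins[None, :] - centers[:, None]) / 3.0) ** 2)

# Synthetic "easy-to-acquire logs" correlated with the bump position,
# standardized column-wise.
logs = np.column_stack([centers, np.sqrt(centers),
                        rng.normal(size=n), rng.normal(size=n)])
logs = (logs - logs.mean(axis=0)) / logs.std(axis=0)

# Step 1: train a linear autoencoder (deterministic stand-in for the VAE).
W_enc = rng.normal(scale=0.01, size=(d, k))
W_dec = rng.normal(scale=0.01, size=(k, d))
lr = 1e-2

def recon_loss():
    return np.mean((T2 @ W_enc @ W_dec - T2) ** 2)

loss_before = recon_loss()
for _ in range(2000):
    Z = T2 @ W_enc
    grad_out = 2.0 * (Z @ W_dec - T2) / n     # scaled reconstruction gradient
    W_dec -= lr * Z.T @ grad_out
    W_enc -= lr * T2.T @ (grad_out @ W_dec.T)
assert recon_loss() < loss_before             # encoder/decoder learned step 1

# Step 2: freeze the decoder; train only the logs-to-latent map M so the
# frozen decoder reproduces the T2 distributions from the logs.
M = rng.normal(scale=0.01, size=(m, k))

def synth_loss():
    return np.mean((logs @ M @ W_dec - T2) ** 2)

loss2_before = synth_loss()
for _ in range(2000):
    grad_out = 2.0 * (logs @ M @ W_dec - T2) / n
    M -= lr * logs.T @ (grad_out @ W_dec.T)   # only M updates; W_dec frozen
assert synth_loss() < loss2_before
```

Freezing the decoder in step two means the second network only has to learn a low-dimensional mapping into a latent space that already encodes valid T2 shapes, rather than learning to synthesize the full distribution from scratch.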

            4.2 Generative adversarial network assisted neural network

The GAN-NN model follows a two-step training process with some similarity to
VAE-NN. In the first step, a generative adversarial network (GAN) learns
from the NMR T2 distributions in the training dataset; this involves training a
generator network to reconstruct NMR T2 distributions that closely resemble
those in the training dataset. This requires a discriminator network that
evaluates the NMR synthesis achieved by the generator network. In the first
step, the generator and discriminator networks are trained alternately using
only the NMR T2 distributions in the training dataset. First a random vector
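The alternating generator/discriminator schedule of the first step can be sketched with a toy one-dimensional GAN in plain numpy. The linear generator, logistic discriminator, Gaussian "training data," and learning rate below are illustrative stand-ins, not the chapter's networks; the point is only the alternation between the two updates.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# Toy 1D "training data" standing in for the NMR T2 training set.
real = rng.normal(3.0, 0.5, size=256)

a, b = 0.1, 0.0      # generator G(z) = a*z + b, z ~ N(0, 1)
w, c = 0.1, 0.0      # discriminator D(x) = sigmoid(w*x + c)
lr = 0.02

for step in range(1500):
    z = rng.normal(size=64)
    fake = a * z + b                   # generator output from random vectors
    x_r = rng.choice(real, size=64)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_r, d_f = sigmoid(w * x_r + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_r) * x_r) - np.mean(d_f * fake))
    c += lr * (np.mean(1 - d_r) - np.mean(d_f))

    # Generator update (non-saturating loss): push D(fake) toward 1.
    d_f = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_f) * w * z)
    b += lr * np.mean((1 - d_f) * w)

samples = a * rng.normal(size=1000) + b
print(f"generated sample mean: {samples.mean():.2f}")
```

Each iteration first improves the discriminator's ability to tell real samples from generated ones, then improves the generator against the updated discriminator; it is this alternation, carried out on the NMR T2 training set, that the first step of GAN-NN training performs.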