
202 Machine learning for subsurface characterization


               For each depth in the training dataset, the encoder network receives, one at a
            time, each element (i.e., one log) of the input sequence of 10 inversion-derived
            logs. The encoder updates the intermediate vector based on each element and
            propagates the updated intermediate vector forward for further updates based on
            the subsequent elements of the input sequence. The intermediate vector (also
            referred to as the encoder vector or context vector) is the final hidden state
            produced by the encoder network; it contains information about the input
            sequence in an encoded format. The decoder learns to process the intermediate
            vector to compute an internal state for generating the first element of the
            output sequence, which constitutes the 64-dimensional NMR T2. To generate each
            subsequent element of the output sequence, the decoder learns to compute the
            corresponding internal state by processing the intermediate vector along with
            the internal state calculated when generating the previous element of the
            output sequence.
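            The encode-then-decode flow described above can be sketched in plain NumPy. This is a minimal illustration, not the chapter's implementation: a simple tanh recurrence stands in for the full LSTM gating, and all weight names, shapes, and random values are assumptions introduced for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT, H = 10, 64, 15  # 10 input logs, 64 NMR T2 bins, 15-dim intermediate vector

# Illustrative random parameters. A real LSTM module carries gated weights
# (forget/input/output gates); a plain tanh recurrence stands in for them here.
W_x = rng.normal(size=(H, 1)) * 0.1   # input weight (one log value per timestep)
W_h = rng.normal(size=(H, H)) * 0.1   # encoder recurrent weights
W_d = rng.normal(size=(H, H)) * 0.1   # decoder recurrent weights
W_v = rng.normal(size=(H, H)) * 0.1   # weights applied to the intermediate vector
W_o = rng.normal(size=(1, H)) * 0.1   # readout producing one T2 amplitude per step

def encode(x_seq):
    """Consume the 10 inversion-derived log values one per timestep;
    the final hidden state is the intermediate (context) vector v."""
    h = np.zeros((H, 1))
    for x in x_seq:
        h = np.tanh(W_x * x + W_h @ h)
    return h

def decode(v, n_steps=N_OUT):
    """Each step processes v together with the internal state from the
    previous step and emits one element of the output T2 sequence."""
    h = np.zeros((H, 1))
    out = []
    for _ in range(n_steps):
        h = np.tanh(W_d @ h + W_v @ v)
        out.append((W_o @ h).item())
    return np.array(out)

x_logs = rng.normal(size=N_IN)  # stand-in for 7 mineral + 3 saturation values at one depth
v = encode(x_logs)              # 15-dimensional intermediate vector
t2 = decode(v)                  # 64-element synthesized T2 sequence
```

            The key structural point the sketch captures is that the encoder's final hidden state is the only summary of the input sequence passed to the decoder, and every decoder step re-reads that vector alongside its own previous state.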
               The encoder and decoder networks of the LSTM network architecture are
            collections of chained LSTM modules: 10 in the encoder and 64 in the decoder.
            Each module has gates controlling the flow of data; controlled by these gates,
            the LSTM can choose to forget or update the information flowing through the
            modules. The encoder compresses the seven formation mineral logs and three
            fluid saturation logs into a single intermediate vector, and the decoder then
            sequentially decodes that vector to generate the 64 elements of the target
            NMR T2 sequence. The 10 inversion-derived logs are treated as a sequence, and
            the encoder processes one of the 10 logs at each timestep to generate an
            internal state. After processing all 10 input inversion-derived logs, the
            encoder generates the 15-dimensional intermediate vector v in the last step.
            The intermediate vector v is fed to each module in the decoder. The decoder
            modules sequentially generate the elements of the output NMR T2 sequence:
            each decoder module processes the 15-dimensional intermediate vector v along
            with the internal state from the previous module to construct the
            corresponding element of the output sequence. The full synthesis of NMR T2
            for a single depth therefore requires 64 timesteps. The loss function used
            for the LSTM model is the mean squared error, as in the second training step
            of the previous three neural network architectures. The optimizer used to
            train the LSTM model is RMSprop, which updates the neuron weights based on
            the loss function during backpropagation.
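            The loss and optimizer named above can be sketched in a few lines. The RMSprop recurrence below is the standard update rule; the toy model (a single linear weight) and all hyperparameter values are illustrative assumptions, not the chapter's actual training setup.

```python
import numpy as np

def mse(pred, target):
    """Mean squared error, the loss minimized over the 64 T2 amplitudes."""
    return np.mean((pred - target) ** 2)

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSprop update: scale each gradient by a running RMS of its history,
    so the effective step size adapts per weight during backpropagation."""
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy illustration: fit y = w * x by minimizing the MSE with RMSprop.
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.5 * x                               # assumed "true" weight for the toy
w, cache = 0.0, 0.0
for _ in range(2000):
    grad = np.mean(2 * (w * x - y) * x)   # dMSE/dw for the linear model
    w, cache = rmsprop_step(w, grad, cache, lr=0.01)
```

            The running-average cache is what distinguishes RMSprop from plain gradient descent: weights with consistently large gradients take proportionally smaller steps.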


            4.6 Training and testing the four deep neural network models
            The NMR T2 distribution log and the inversion-derived mineral content and
            fluid saturation logs are split randomly into training and testing datasets.
            Data from 460 depths were used as the training data, and data from another
            100 depths were used as the testing data. In a more realistic application,
            the dataset should be larger for robust development of the deep neural