FIG. 7.6 Schematic for training the LSTM architecture. The number of hidden layers in each network and the number of neurons in each hidden layer are determined by hyperparameter optimization.

In this study, we use the LSTM network for sequence-to-sequence modeling (i.e., many-to-many mapping), wherein, for each input sequence of feature values, the LSTM network learns an intermediate vector that can be decoded to generate a distinct output sequence of target values. The LSTM first encodes the relationships between the various combinations of inversion-derived logs and the amplitudes of various combinations of T2 bins into an intermediate vector. Consequently, the use of LSTM frees us from needing to know the mechanistic rules that govern the multitude of physical relationships among the various logs and between the logs and the physical properties of the formation. An LSTM-based sequence-to-sequence model generally contains three components, namely, an encoder, an intermediate vector, and a decoder. The encoder is tasked with learning to generate a single embedding (the intermediate vector) that effectively summarizes the input sequence, and the decoder is tasked with learning to generate the output sequence from that single embedding.
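
To make the encoder, intermediate vector, and decoder concrete, the following is a minimal sketch of such a model in Keras. The dimensions (10 input logs, 64 T2 bins) and the hidden size are hypothetical stand-ins for values that would be obtained through hyperparameter optimization, and the helper name build_seq2seq is ours; this illustrates the general architecture rather than the exact implementation used in the study.

```python
# Minimal LSTM encoder-decoder (sequence-to-sequence) sketch in Keras.
# All sizes are assumed placeholders, not values from the study.
from tensorflow.keras.layers import Dense, Input, LSTM, RepeatVector, TimeDistributed
from tensorflow.keras.models import Model

def build_seq2seq(n_logs=10, n_bins=64, n_hidden=128):
    # Encoder: reads the sequence of inversion-derived log values and
    # compresses it into a single intermediate (embedding) vector.
    logs_in = Input(shape=(n_logs, 1))
    intermediate = LSTM(n_hidden)(logs_in)  # final hidden state acts as the intermediate vector

    # Decoder: repeats the intermediate vector once per T2 bin and unrolls
    # it into the output sequence of T2-bin amplitudes.
    repeated = RepeatVector(n_bins)(intermediate)
    decoded = LSTM(n_hidden, return_sequences=True)(repeated)
    t2_amplitudes = TimeDistributed(Dense(1))(decoded)

    model = Model(logs_in, t2_amplitudes)
    model.compile(optimizer="adam", loss="mse")
    return model
```

The repeat-vector decoder shown here is one common way to realize sequence-to-sequence models in Keras; an alternative is a separate decoder LSTM initialized with the encoder states and trained with teacher forcing.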
Unlike the three other neural network architectures discussed in the previous sections, LSTM training requires only one stage. LSTM outperforms other deep neural network architectures on data that exhibit long-term dependencies and have input/target vectors of variable length. LSTM sequence-to-sequence modeling has been successful in language translation, which relies on the fact that sentences in different languages are distinct representations of one common context/theme. In a similar manner, different subsurface logs acquired at a specific depth are distinct responses to one common geomaterial having specific physical properties. We implement LSTM to capture the dependencies among the various T2 bins in the T2 distribution and also the dependencies between various combinations of inversion-derived logs and the amplitudes of various combinations of T2 bins.
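
As a usage sketch under the same assumptions, the following reuses the hypothetical build_seq2seq helper from the example above, with synthetic placeholder arrays standing in for depth-indexed log data; the sample count and training settings are assumptions, not values from the study.

```python
import numpy as np

# Synthetic placeholders: n_depths samples, each pairing a sequence of
# n_logs inversion-derived log values with a sequence of n_bins T2 amplitudes.
n_depths, n_logs, n_bins = 1000, 10, 64  # assumed sizes
X = np.random.rand(n_depths, n_logs, 1).astype("float32")
Y = np.random.rand(n_depths, n_bins, 1).astype("float32")

model = build_seq2seq(n_logs=n_logs, n_bins=n_bins, n_hidden=128)
model.fit(X, Y, epochs=50, batch_size=32, validation_split=0.2)

# Predicted T2 distributions (one amplitude per bin) for five depths.
pred_t2 = model.predict(X[:5])
print(pred_t2.shape)  # (5, 64, 1)
```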