Inversion of 2D Remote Sensing Data to 3D Volumetric Models Using Deep Dimensionality Exchange

Many companies are exploring for, and monitoring the stability of, subsurface CO2 reservoirs in order to sequester CO2 underground and thereby help mitigate climate change by keeping this greenhouse gas out of the atmosphere. To understand the geometry of a CO2 reservoir and monitor it for leaks (also referred to herein as CO2 “plumes”), these companies transmit electromagnetic (EM) waves through the subsurface layers of the Earth from a transmitter located inside a wellbore and record the signals with an array of receivers at the Earth’s surface.

This raw EM field is then inverted to a physical model of the resistivity of the subsurface Earth layers using a frequency-domain solution to Maxwell’s equations.
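For reference, the standard quasi-static, frequency-domain form of Maxwell's equations governing this kind of controlled-source EM problem can be written as follows (this is the textbook formulation under an e^{iωt} time convention; the exact source terms and discretization used in the project are not reproduced here):

$$\nabla \times \mathbf{E} = -\,i\omega\mu\,\mathbf{H}, \qquad \nabla \times \mathbf{H} = \sigma\,\mathbf{E} + \mathbf{J}_s$$

where E and H are the electric and magnetic fields at angular frequency ω, μ is the magnetic permeability, σ is the electrical conductivity (the reciprocal of the resistivity being imaged), and J_s is the source current density.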

The traditional solution to this problem is out of scope for this article; for reference, see the documentation for SimPEG’s numerical solution.

The problem geophysicists have with this traditional solution is that, in an industrial application with large volumes of data, the number of iterations required for convergence can lead to weeks or months of compute time. Our answer was to use a deep learning framework to perform the inversion, drastically reducing compute time at the inference stage. The value proposition is that, although training an algorithm to perform this inversion takes a long time, once trained, a deep learning system can perform inference (inversion) in milliseconds instead of weeks.

The approach we took was to design a deep learning architecture that would perform well on real EM data when trained on a corpus of mostly synthetic data plus a few real case examples, since acquiring real data is extremely expensive. The synthetic training corpus we generated spans thousands of different Earth strata and reservoir geometries, plume locations, and transmitter/receiver positions. Below is an example of a synthetic EM-data/Earth-model pair:

[Figure: synthetic EM data (left) and the corresponding Earth resistivity model slice (right)]

The left pane of the image above is the real component of the electric field in the X direction (east-west) as acquired by the receiver array at the Earth’s surface, where the colors correspond to the magnitude of the field in volts per meter. The right pane is an X-Y (horizontal) slice through the 3D Earth model at the depth of the reservoir, where the colors correspond to resistivity in ohm-meters.

During my presentation at ODSC East, I’ll discuss a couple of failed generative-system inversion methodologies our team tried and explain why those designs failed. The architecture we eventually succeeded with was an encoder-decoder fully convolutional network (FCN) that includes an unusual transformation between the encoder and decoder latent structures, converting 2D latent information into 3D latent information. We’re calling this tensor reshaping Deep Dimensionality Exchange (DDE), as it effectively reassigns 2D latent pixels into 3D latent voxels. The same procedure works for any dimensionality exchange, provided there is sufficient information mixing in the latent layers preceding and following the DDE.

[Figure: generic encoder-decoder architecture with the DDE reshape layer joining the encoder and decoder]

Though the exact parameters of the network we built aren’t important for this overview, the image above shows an example of a generic encoder-decoder architecture with the DDE reshape layer joining the encoder and the decoder. The unpooling operation is equivalent to interpolation, and the transposed-convolution layers are what is often loosely called “deconvolution” or “inverse convolution.”
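To make the DDE reshape concrete, here is a minimal PyTorch sketch. This is not our production network: the layer counts, channel sizes, and the 64x64-input/16x64x64-output grids are placeholder assumptions. The key step is the reshape that folds part of the 2D latent tensor's channel dimension into a depth dimension before 3D decoding:

```python
import torch
import torch.nn as nn

class DDENet(nn.Module):
    """Toy encoder-decoder with a Deep Dimensionality Exchange (2D -> 3D) reshape.

    All layer counts, channel sizes, and grid dimensions are illustrative
    placeholders, not the parameters of the production network.
    """
    def __init__(self):
        super().__init__()
        # 2D encoder over 4 input planes (the Re/Im E-field components described below)
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                     # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                     # 32x32 -> 16x16
        )
        # 3D decoder: transposed convolutions upsample the latent volume
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, kernel_size=2, stride=2), nn.ReLU(),  # 4x16x16 -> 8x32x32
            nn.ConvTranspose3d(8, 4, kernel_size=2, stride=2), nn.ReLU(),   # 8x32x32 -> 16x64x64
            nn.Conv3d(4, 1, kernel_size=3, padding=1),           # -> 1-channel resistivity volume
        )

    def forward(self, x):
        z2d = self.encoder(x)                  # (B, 64, 16, 16): 2D latent "pixels"
        b = z2d.shape[0]
        # Deep Dimensionality Exchange: reinterpret the 2D latent tensor as a 3D one
        # by folding part of the channel dimension into a depth dimension.
        z3d = z2d.reshape(b, 16, 4, 16, 16)    # (B, 16, 4, 16, 16): 3D latent "voxels"
        return self.decoder(z3d)               # (B, 1, 16, 64, 64) resistivity model


if __name__ == "__main__":
    fields = torch.randn(2, 4, 64, 64)         # batch of synthetic EM planes
    print(DDENet()(fields).shape)              # torch.Size([2, 1, 16, 64, 64])
```

The unpooling/interpolation and transposed-convolution stages shown in the figure would sit around this reshape in the full network; the sketch keeps only enough structure to show the dimensionality exchange itself.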

In the training scheme, the multichannel input consists of four input planes: the real and imaginary components of the electric field in the X and Y directions (the magnetic field was ignored in this experiment). The output is the resistivity model of the subsurface Earth. The objective function is the mean squared error between the predicted Earth resistivity model and the known Earth resistivity model.
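Continuing the toy DDENet sketch above (again with placeholder shapes and optimizer settings rather than our actual configuration), a single training step under that scheme might look like:

```python
import torch
import torch.nn as nn

model = DDENet()                                            # toy network from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # illustrative settings
loss_fn = nn.MSELoss()                                      # MSE between predicted and true resistivity

def train_step(em_planes, true_resistivity):
    """One gradient step.

    em_planes:        (B, 4, 64, 64)  - Re/Im of Ex and Ey at the surface receivers
    true_resistivity: (B, 1, 16, 64, 64) - the known synthetic Earth model
    """
    optimizer.zero_grad()
    predicted = model(em_planes)
    loss = loss_fn(predicted, true_resistivity)
    loss.backward()
    optimizer.step()
    return loss.item()
```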

During tuning of the model, we ran into convergence stability issues because of the extreme similarity of the input-domain samples. During my presentation at ODSC East, I’ll discuss the approach we took to differentiate the input samples in a physically meaningful way. Interestingly, our feature engineering pipeline was constructed empirically, by evaluating sample differences across the training dataset after applying a multitude of transforms. In this case, we chose explicit feature engineering rather than increasing the number of trainable parameters by deepening the model, in order to reduce a severe overfitting problem we ran into with the encoder-decoder architecture.
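The transforms we ultimately settled on are part of the talk, but the empirical evaluation loop itself is easy to sketch. The candidate transforms below (log-amplitude scaling, per-plane z-scoring) are illustrative guesses, not the ones we chose; the point is simply to score how well each candidate separates otherwise near-identical input samples:

```python
import itertools
import numpy as np

def mean_pairwise_distance(samples):
    """Average L2 distance between all pairs of (transformed) input samples."""
    dists = [np.linalg.norm(a - b) for a, b in itertools.combinations(samples, 2)]
    return float(np.mean(dists))

# Candidate transforms applied to each 4-plane EM sample (illustrative only).
candidate_transforms = {
    "identity": lambda x: x,
    "log_amplitude": lambda x: np.sign(x) * np.log1p(np.abs(x)),
    "per_plane_zscore": lambda x: (x - x.mean(axis=(1, 2), keepdims=True))
                                  / (x.std(axis=(1, 2), keepdims=True) + 1e-12),
}

def rank_transforms(training_samples):
    """Score each candidate by how strongly it differentiates the training samples."""
    return {name: mean_pairwise_distance([t(s) for s in training_samples])
            for name, t in candidate_transforms.items()}
```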

Ultimately, the project was a success. Our multiple failures and custom architecture taught us a great deal about applying encoder-decoder architectures to the multidimensional image domain, and we expect the deep dimensionality exchange technique developed in this work to prove useful across industries. Join me at ODSC East for a deep dive, and since this work is still in progress, come prepared with questions or suggestions; I’d love to explore audience members’ ideas for approaching this problem.

Thanks go to Expero for allowing me to write about this project, to GroundMetrics for its contributions to synthetic data generation, grant writing, and domain expertise, and to the DOE SBIR program for funding the project. Please visit Expero’s blogs and lightning talks pages for more detailed content on this and other projects.
