Scientific Computing @ HPC Scale
Enabling experts to browse terabyte-scale seismic data on the web
A New Data Challenge
New seismic acquisition techniques produce very large datasets whose value can only be unlocked if they can be processed successfully. Existing software was workstation-based and could not scale to terabyte-scale data. Potential billion-barrel insights were relegated to cumbersome batch processing, with month-long turnaround times and report-based workflows hostile to intuitive insight.
Terabytes on the Web & User Adoption Risk
  • TERABYTES IN A BROWSER - The product was intended for expert users accustomed to dynamically interacting with terabytes of seismic data on their workstations. Delivering the same kind of seamless experience on the web posed serious architectural challenges.
  • USER ADOPTION RISK - Although the system would be used by experts such as seismic interpreters, processors, and geophysicists, many of the workflows required knowledge of new methods for interacting with complex multi-dimensional data. Training would be minimal, so this know-how needed to be built into the software at the point of the task at hand; otherwise it could inhibit user adoption.
  • SCALING TO PRODUCTION - The science had been validated. Now the challenge was scaling the technology from a handful of PhDs in R&D to full production.
    (Images courtesy of CSGrecorder.com)
“Modern wide azimuth seismic datasets can’t be interpreted on classic workstations; how can we take advantage of these massive datasets and still work at interactive speeds?”
Real-Time Insights
Expero designed a distributed high-performance compute and rendering system that allows terabyte-scale data to be interactively processed and rendered at 10 frames per second on existing compute clusters, with results delivered to users in a reactive, web browser-based experience.
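To make the general pattern concrete, the sketch below illustrates one common way such a system can be wired together: slices are rendered server-side near the data and streamed to a browser over a WebSocket. This is an illustrative simplification under stated assumptions, not Expero's actual implementation; the synthetic volume, the `websockets`-based protocol, and all function names are hypothetical.

```python
# Minimal illustrative sketch (not Expero's system): render seismic slices
# server-side and stream the pixels to a browser over a WebSocket.
# Assumes the third-party "websockets" package and a small synthetic volume
# standing in for a distributed, terabyte-scale dataset.
import asyncio
import json

import numpy as np
import websockets

# Hypothetical stand-in for one chunk of a much larger distributed volume.
VOLUME = np.random.rand(128, 256, 256).astype(np.float32)


def render_inline(slice_index: int) -> bytes:
    """Render one inline slice to 8-bit grayscale bytes for display in a browser."""
    data = VOLUME[slice_index % VOLUME.shape[0]]
    lo, hi = float(data.min()), float(data.max())
    scaled = ((data - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)
    return scaled.tobytes()


async def handle_client(websocket, path=None):
    """Serve slice requests: the client sends {"slice": n}, gets a header then raw pixels."""
    async for message in websocket:
        request = json.loads(message)
        pixels = render_inline(int(request["slice"]))
        # Small JSON header first (image dimensions), then the pixel payload.
        await websocket.send(json.dumps({"shape": list(VOLUME.shape[1:])}))
        await websocket.send(pixels)


async def main():
    # A browser client would open ws://host:8765 and request slices as the
    # user scrolls, keeping the interaction loop at interactive frame rates.
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```

In a production-scale version of this idea, the rendering would be distributed across the compute cluster holding the data, and only compressed pixels (not raw seismic traces) would travel to the browser, which is what makes interactive, ~10 fps browsing of terabyte datasets feasible over ordinary network links.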