
The United States Department of Energy has, for the last several decades, routinely been in competition for operating the most powerful supercomputers in the world.
In June 2018 it fired up “Summit,” which, according to NBC News, “has been clocked at handling 200 quadrillion calculations a second (or 200 petaflops). That's more than twice as fast as the previous record-holder, China’s 93-petaflop Sunway TaihuLight, and so fast that it would take every person on Earth doing one calculation a second for 305 days to do what Summit can do in a single second.”
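As a rough sanity check of that comparison, the arithmetic holds if one assumes a mid-2018 world population of about 7.6 billion, a figure that is an assumption here rather than something stated in the article. A minimal sketch:

```python
# Back-of-envelope check of the "305 days" comparison.
# Assumption (not from the article): world population of roughly 7.6 billion in mid-2018.
summit_calcs_per_sec = 200e15        # 200 petaflops = 200 quadrillion calculations per second
population = 7.6e9                   # assumed world population, mid-2018

calcs_per_person = summit_calcs_per_sec / population   # calculations each person must do
seconds_needed = calcs_per_person                      # at one calculation per second
days_needed = seconds_needed / 86_400                  # seconds in a day

print(f"{days_needed:.0f} days")     # prints ~305, matching the quoted claim
```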
But what does that have to do with oil? Shawn Bennett, Deputy Assistant Secretary of the Office of Oil and Natural Gas at the U.S. Department of Energy (DOE), spoke in Regina at the Williston Basin Petroleum Conference on May 29. He said the DOE is looking at applying its big data computational abilities to analyzing geology and completions in the oilpatch.
In total, the DOE operates five of the ten most powerful computers on the planet.
“When we’re looking at that big data, we’re trying to see how we can use that supercomputing process, to see if there is an opportunity for us to use supercomputing in oil and gas development, not on an individual company basis, but to unlock some of these questions that we have,” Bennett told Pipeline News.
“When you look at predictive analytics and you look at big data, you need that very fast supercomputing power to potentially unlock some of these mysteries in the shale. So we are in the early stages of developing a program where we can hopefully utilize the supercomputer capacity to unlock some of these universal mysteries of oil and gas.”
When asked how soon they could do this, he joked that “My boss asked how quickly we can get it done, too.”
“In order to compile the data, work out the data with companies, and have that conversation, we have to gather that data, big data. It means a lot of data has to be acquired. So we’re in the very beginning process of acquiring data and seeing if there’s an opportunity to start looking at different algorithms to go at it.
“It’s not going to be a next year thing. But hopefully, in the next few years, we’ll have some questions answered.”
As an example, he said they could take a subset of data from a basin to look at anomalies and similarities.
“There’s been a lot of data that’s been acquired by these companies over the last decade of field development. Being able to clean up that data, use that data, and to start to see similarities and new predictive analytics through algorithms and physics-based analysis, and hopefully be able to increase the EUR through that big data approach, through these supercomputers,” Bennett said.
“When you look at big data, we know, right now, what works. But ultimately we want to improve resource recovery, the EUR, the estimated ultimate recovery, of these wells. And by doing that, going through these massive amounts, reams and reams of data. The problem with all these reams of data is it takes months, even years, to compile that data and to be able to understand it better. With those supercomputers, if we can do it in a more real-time manner, we could have real-time changes to the drilling program, whether it’s the drilling portion or completions, for each well.”
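Bennett did not describe any specific algorithms, so the following is only a hypothetical illustration of the kind of predictive-analytics step he alludes to: fitting a simple model that relates completion parameters to estimated ultimate recovery (EUR) on synthetic well data. The column names, units, and numbers below are invented for the example and do not reflect any actual DOE program or dataset.

```python
import numpy as np

# Hypothetical illustration only: relate completion parameters to EUR on synthetic wells.
rng = np.random.default_rng(0)
n_wells = 500

# Synthetic completion parameters (hypothetical units).
lateral_length_ft = rng.uniform(5_000, 10_000, n_wells)
proppant_tons = rng.uniform(2_000, 8_000, n_wells)
stage_count = rng.integers(20, 60, n_wells)

# Synthetic EUR (thousands of barrels), generated from those inputs plus noise.
eur_mbbl = (0.02 * lateral_length_ft + 0.03 * proppant_tons
            + 2.0 * stage_count + rng.normal(0, 25, n_wells))

# Fit a simple least-squares model: EUR ~ completion parameters.
X = np.column_stack([lateral_length_ft, proppant_tons, stage_count,
                     np.ones(n_wells)])
coeffs, *_ = np.linalg.lstsq(X, eur_mbbl, rcond=None)

# Predict EUR for a hypothetical new well design.
new_well = np.array([9_000, 6_000, 45, 1.0])
print("Predicted EUR (Mbbl):", new_well @ coeffs)
```

In practice the data cleaning and physics-based analysis Bennett mentions would dominate the work; the point of the sketch is only to show where a fitted model could feed back into drilling and completion decisions.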