How to predict reservoir performance with uncertainty? – Some snacks to go with afternoon tea!

It takes ONLY 3 minutes to read this article.

"Uncertainty is an uncomfortable position. But certainty is an absurd one." – Voltaire

This is what every reservoir engineer knows:

In a closed-loop reservoir management system, the reservoir model must be continuously updated as new data (e.g. 4D seismic, production data, pressure measurements) become available, without resetting the model to its initial state. This procedure lasts through the entire life cycle of a field. However, it has been a long-standing challenge in the industry to update reservoir models effectively and to predict future reservoir performance with quantified uncertainty.

Yes, here I am talking about history matching.

Before automatic history matching became popular, obtaining even a single history-matched model demanded a substantial amount of effort and sound engineering judgment through manual history matching procedures. Thanks to the dramatic increase in computational power over the past decade, remarkable progress has been made in automatically obtaining many realizations of reservoir models that match large amounts of reservoir observations, while the uncertainty of the data and the model can be properly quantified and represented.

Then, what are the prevalent methods for assisted history matching?

To achieve such reservoir model optimization, many methods have been developed to minimize the difference between observations and the predictions of the simulation model. These methods fall into two categories: gradient-based and non-gradient-based methods. Gradient-based methods find the minimum of the objective function by calculating its local gradients with respect to the unknown parameters. In practice, however, calculating these gradients is time-consuming and not straightforward. Non-gradient-based methods do not require any gradient computation and often treat the function evaluation (for example, a reservoir simulation) as a "black box". Evolutionary Algorithms, Simulated Annealing, and many other methods fall into this category. Their main drawback is that they search the reservoir parameter space exhaustively and hence require hundreds or thousands of simulations, which demands a large amount of CPU time.
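To make the objective concrete, here is a minimal sketch of the kind of misfit function both families of methods try to minimize. It assumes a generic weighted least-squares formulation; `run_simulation` is a hypothetical stand-in for whatever reservoir simulator you couple in, not a real API:

```python
import numpy as np

def misfit(params, observed, obs_std, run_simulation):
    """Weighted least-squares misfit between observations and simulation.

    params         -- 1-D array of reservoir model parameters
                      (e.g. permeability or porosity multipliers)
    observed       -- 1-D array of measured data (rates, pressures, ...)
    obs_std        -- 1-D array of measurement standard deviations
    run_simulation -- callable mapping params -> predicted data
                      (hypothetical stand-in for a reservoir simulator)
    """
    predicted = run_simulation(params)
    residual = (predicted - observed) / obs_std  # normalize by data error
    return 0.5 * float(residual @ residual)      # scalar objective to minimize
```

A gradient-based method would differentiate this function with respect to `params`; a non-gradient-based method would only ever call it as a black box.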

Now, here is the best part.

Ensemble-based methods, such as the Ensemble Kalman Filter (EnKF) and the Ensemble Smoother being tested by iRes-Geo, have been successfully applied to many reservoirs. These methods estimate a large number of model parameters by assimilating different types of data, and they can be readily coupled with reservoir simulators for automatic history matching. They use an ensemble of reservoir models to calculate the covariance between the model input parameters and the model responses. This covariance serves as an approximation to the gradient, which is then used to minimize the misfit function. More importantly, these methods provide an ensemble of updated models that adequately captures the uncertainty in reservoir model history matching and performance predictions, making it possible to conduct risk assessments for effective reservoir planning and management.
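As a rough illustration of that covariance-as-gradient idea, here is a textbook Ensemble Smoother update step in plain NumPy. This is a generic sketch, not iRes-Geo's implementation; the array shapes and names are my own assumptions:

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, obs_std, rng=None):
    """One textbook Ensemble Smoother update (for illustration only).

    M       -- (n_params, n_ens) ensemble of model parameters
    D       -- (n_data, n_ens) simulated data, one column per member
    d_obs   -- (n_data,) observed data
    obs_std -- (n_data,) observation error standard deviations
    Returns the updated parameter ensemble, same shape as M.
    """
    rng = rng or np.random.default_rng(0)
    n_ens = M.shape[1]
    Cd = np.diag(obs_std**2)                    # observation error covariance

    # Ensemble anomalies (deviations from the ensemble mean)
    A_m = M - M.mean(axis=1, keepdims=True)
    A_d = D - D.mean(axis=1, keepdims=True)

    # Cross-covariance (parameters vs. data) and data auto-covariance
    C_md = A_m @ A_d.T / (n_ens - 1)
    C_dd = A_d @ A_d.T / (n_ens - 1)

    # Kalman gain: this plays the role of the "gradient" mentioned above
    K = C_md @ np.linalg.inv(C_dd + Cd)

    # Perturb observations so the updated ensemble keeps a valid spread
    d_pert = d_obs[:, None] + obs_std[:, None] * rng.standard_normal(D.shape)

    return M + K @ (d_pert - D)
```

Each column of the returned ensemble is one updated reservoir model, so the spread across columns is exactly the uncertainty estimate discussed above.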

Lastly,

we have to recognize that, despite the rapid progress in computational power and advanced optimization algorithms, no single method alone is the solution to every model-updating task. However, ensemble-based methods have proven to be a rather effective model-updating solution for real-life problems, given their advantages in uncertainty assessment during data assimilation.

Author: Dr. Junjian Li, iRes-Geo Beijing Project Center.

Senior Reservoir Engineering Advisor, formerly Senior Reservoir Engineer at RIPED, PetroChina. 20+ years of experience in advanced reservoir engineering, reservoir simulation, and reservoir history matching.

If you find this article useful, please follow our LinkedIn page. If you have any questions or want to discuss a similar problem of your own, please feel free to contact us.