Why are we still calibrating the rock-physics models?
Rock-physics empirical relations, theoretical models, and heuristic approaches are used successfully to aid quantitative characterisation of sweet spots or monitoring of production-induced changes in the reservoir. Along with applicability to the specific reservoir scenario, calibrability is an indispensable criterion for selecting the optimum rock-physics model. Calibration as a concept can mean different things across disciplines, so clarification is required in the context of this paper. Here, calibration refers to the adjustment of the physical or non-physical constants or coefficients of an existing rock-physics model to establish the link between variations in measured density and sonic data and the rock and fluid properties of the reservoir under study. Calibration is crucial for any rock-physics model, but once achieved, it is likely that several models could adequately characterise the link between the reservoir properties and the observations in the seismic domain.
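As a toy illustration of what such an adjustment of coefficients can look like in practice, the sketch below fits a Gardner-type empirical density–velocity relation to synthetic density–sonic pairs by least squares. The data are made up for illustration and do not come from any real well; only the functional form (ρ = a·Vp^b, with Vp in m/s and ρ in g/cc) follows the standard Gardner relation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gardner-type empirical relation: rho = a * Vp**b
def gardner(vp, a, b):
    return a * vp**b

# Synthetic "wireline" pairs: Vp in m/s, density in g/cc,
# generated from known coefficients plus noise (illustrative only)
rng = np.random.default_rng(0)
vp = np.linspace(2000.0, 4500.0, 50)
rho = 0.31 * vp**0.25 + rng.normal(0.0, 0.01, vp.size)

# Calibration step: adjust (a, b) so the model tracks the measured pairs
(a, b), _ = curve_fit(gardner, vp, rho, p0=[0.3, 0.25])
print(f"calibrated a={a:.3f}, b={b:.3f}")
```

The same curve-fitting pattern applies whether the adjustable quantities are empirical coefficients, as here, or the physical constants of a theoretical model.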
Why is it hard to calibrate?
This is not a trivial task, however, and calibration faces a number of challenges:
- The measured data traditionally available for calibration are limited to wireline logs from wells spaced up to several kilometres apart, or laboratory measurements on centimetre-scale core samples. Undersampling, and the risk of extrapolating beyond the range of the data, are therefore almost always a concern when generalising a calibrated rock-physics model to the whole reservoir. Selection bias towards good-quality reservoir is another potential issue, as most wells are commonly drilled in the sweet spots.
- Calibration becomes underdetermined and suffers from nonuniqueness when the parameters in the model outnumber the measured data, which are typically limited to sonic (DT, DTS) and density. This is particularly the case when insight into the systematic relation between reservoir and elastic properties is sought through theoretical models with several parameters. In practice, despite their often superior theoretical framework, the potential benefits of such models may be sacrificed for the improved robustness and reliability of a simple empirical model with fewer adjustable parameters (e.g. multilinear regression).
- Not enough physics. Replicating the in-situ reservoir mechanisms (e.g. depletion, fluid substitution) and monitoring the rock and fluid response at seismic scale is impractical under laboratory conditions. Fürre et al. (2009) observed that the 4D seismic signal due to pressure depletion in the Snorre Field could not be modelled from stress-sensitivity curves measured on core samples in the laboratory; as an alternative, they used repeat well-log data to calibrate the stress-sensitivity curves. Amini and MacBeth (2015) proposed an in-situ calibration approach that compares the synthetic and observed 4D responses to pressure variations in the fully water-flooded zone near water injectors. Their 4D data-driven calibration suggests a higher rock stress-sensitivity around injectors than laboratory measurements indicate. Nevertheless, as shown in Figure 00, there is a trade-off between representing the in-situ reservoir conditions and maintaining the precision of the measurements and keeping the involved uncertainties under control. While the data become more representative of in-situ conditions in moving from core to wireline-log to seismic data, the uncertainty increases as the measurements become more indirectly related to the reservoir properties.
- Too much simplification. Most theoretical rock-physics models are based on an analytical solution that rationalises the observed variations in elastic properties via one or more physical constants (e.g. pore shape or aspect ratio, coordination number, contact cement, critical porosity, consolidation factor). These models provide low-order approximations of the underlying relations by isolating the physical properties deemed to be the key controlling factors. However, devising an analytical solution necessitates drastic simplification of the rock heterogeneity. In addition to the nonuniqueness issue noted above, it is very challenging to gain data-driven insight into the physical constants of such models. This challenge is three-fold. Firstly, some of the constants represent a rock quality (e.g. consolidation factor) that is not measurable. Secondly, the values for constants that can be measured (e.g. pore aspect ratio from thin sections) may not agree with the values required to fit the model to the observations: because the models do not account for the actual complexity of the rocks, the constants derived through calibration implicitly capture the variability of the missing properties. Thirdly, these models in their original form are only applicable to a specific lithology under specific conditions (e.g. clean sandstones), and their extension to more generic real-life applications is not clear. In practice, therefore, rock-physics constants that bear a theoretical physical significance are treated as fitting parameters, and care must be taken in attaching interpretational importance to them.
- Suitability of the input data is an overlooked aspect of calibration. Petrophysical evaluations form the input to rock-physics analysis, and rock physicists (especially those with a geophysical background) may take the provided curves of porosity, shale content, and saturation for granted; however, it should be acknowledged that these curves carry their own uncertainty. This is particularly the case for shaley-sand systems, where questions of effective versus total porosity, clay versus shale, and clay-bound water arise. Different approaches to the petrophysical analysis of such systems may be fit for formation-evaluation purposes but not necessarily for rock-physics analysis; indeed, “rock-physics-dedicated petrophysical evaluations” could be the subject of a separate paper. Data editing and the identification of unreliable log data and mud-invaded zones are vital preliminary steps before any log-based rock-physics evaluation, the details of which are covered in the petrophysics and rock-physics literature (e.g. Smith, 2011; Simm, 2007; Alberty, 1994; Walls and Carr, 2001). All of this underlines the importance of a collaborative effort between rock physicist and petrophysicist.
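The simple empirical alternative mentioned above, a multilinear regression with few adjustable parameters, can be sketched as follows. The coefficients and the synthetic "log" samples are invented for illustration; only the structure (Vp as a linear function of porosity and shale volume, solved by least squares) is the point.

```python
import numpy as np

# Illustrative multilinear model: Vp = c0 + c1*phi + c2*Vsh
# (coefficients and synthetic log samples are made up)
rng = np.random.default_rng(1)
n = 200
phi = rng.uniform(0.05, 0.35, n)   # porosity (fraction)
vsh = rng.uniform(0.0, 0.6, n)     # shale volume (fraction)
vp = 5500.0 - 7000.0 * phi - 2000.0 * vsh + rng.normal(0.0, 50.0, n)  # m/s

# Design matrix with an intercept column; solve by least squares
A = np.column_stack([np.ones(n), phi, vsh])
coef, *_ = np.linalg.lstsq(A, vp, rcond=None)
c0, c1, c2 = coef
print(f"Vp ≈ {c0:.0f} {c1:+.0f}*phi {c2:+.0f}*Vsh")
```

With three unknowns and hundreds of log samples, the problem is comfortably overdetermined; the contrast with a multi-parameter theoretical model fitted to the same data is exactly the nonuniqueness issue described above.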
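To make the stress-sensitivity calibration discussed above concrete, the sketch below fits a commonly used exponential parameterisation of velocity versus effective pressure to a handful of synthetic "repeat measurement" points. The functional form, parameter values, and data are all assumptions for illustration, not the curves of Fürre et al. (2009) or Amini and MacBeth (2015).

```python
import numpy as np
from scipy.optimize import curve_fit

# A common exponential stress-sensitivity parameterisation (assumed here):
# Vp(P) = Vp_inf * (1 - E * exp(-P / P0)), with P the effective pressure in MPa
def vp_stress(p, vp_inf, e, p0):
    return vp_inf * (1.0 - e * np.exp(-p / p0))

# Made-up measurement points, not real core or log data
rng = np.random.default_rng(2)
p = np.linspace(5.0, 50.0, 12)
vp = vp_stress(p, 3800.0, 0.2, 12.0) + rng.normal(0.0, 10.0, p.size)

# Calibrate the three curve parameters to the observations
params, _ = curve_fit(vp_stress, p, vp, p0=[3500.0, 0.1, 10.0])
vp_inf, e, p0_fit = params
print(f"Vp_inf={vp_inf:.0f} m/s, E={e:.3f}, P0={p0_fit:.1f} MPa")
```

In an in-situ calibration, the observations on the right-hand side would come from repeat logs or 4D seismic responses rather than laboratory core measurements, which is precisely where the laboratory- and field-derived curves can disagree.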
How to “calibrate” it?
In applications where closing the loop between the reservoir (geological or dynamic simulation) model and seismic data is sought, the rock-physics model is the integral element of the workflow that links the parameters in the reservoir model to the elastic parameters. It is therefore very important to understand the definition of the parameters in the engineering domain and to establish their underlying relationship with the petrophysical domain (Amini, 2014). This allows the same formulation to be developed in the petrophysical domain and applied to the parameters in the simulation model. This is tricky for dynamic fluid-flow models in particular, where the petrophysical variations are condensed into two parameters, effective pore volume and effective transmissibility, through porosity and net-to-gross (NTG).
It is important to recognise the different definitions of porosity and NTG from the engineering and petrophysical perspectives and to establish the underlying relationships between them. In calibrating rock-physics models with wireline-log data, care must therefore be taken to ensure that the parameters involved in the rock-physics model are also represented in the reservoir model. Choosing rock-physics models per facies that are not represented in the reservoir model, or using several cut-offs, introduces complexity in establishing such relationships. Care must also be taken because the choice of cut-offs in the NTG definition (Worthington and Cosentino, 2003; Menezes and Gosselin, 2006) complicates the equations in Table 3‑1.
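A minimal sketch of the bookkeeping involved is shown below: a cut-off-based net flag is applied to log samples, and the simulator-style effective porosity (pore volume per gross volume) is compared against the plain log average. The log values and the shale cut-off are hypothetical, and real workflows typically combine several cut-offs (porosity, shale, saturation), which is where the complications noted above arise.

```python
import numpy as np

# Hypothetical log samples over an interval (values illustrative only)
phi = np.array([0.22, 0.18, 0.05, 0.25, 0.08, 0.20])  # porosity (fraction)
vsh = np.array([0.10, 0.20, 0.70, 0.05, 0.60, 0.15])  # shale volume (fraction)

# Simple net flag from a single shale cut-off (cut-off value is arbitrary here)
net = vsh < 0.5
ntg = net.mean()            # net-to-gross of the interval
phi_net = phi[net].mean()   # mean porosity of the net rock only
phi_sim = ntg * phi_net     # simulator-style effective pore volume per gross volume

# Note: phi_sim excludes non-net porosity, so it differs from phi.mean()
print(f"NTG={ntg:.3f}, phi_net={phi_net:.3f}, phi_sim={phi_sim:.4f}, phi_log_avg={phi.mean():.4f}")
```

The gap between `phi_sim` and the raw log average is exactly the kind of definitional difference between the engineering and petrophysical domains that a calibrated rock-physics model must honour.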
At iRes-Geo, we are therefore utilising deep-learning approaches to explore the untapped physics and improve the accuracy of 3D and 4D seismic-to-reservoir rock physics.
Meet the serial authors
Dr. Hamed Amini is a specialist with more than 10 years of experience in developing 4D QI tools. He specialises in seismic modelling and interpretation; closing the loop between the reservoir model and 3D/4D seismic data; simulator-to-seismic (sim2seis) and seismic-to-simulator (seis2sim) modelling approaches; finite-difference elastic seismic modelling; and petro-elastic modelling.