Production data analysis methods depend on the quality of the data, particularly for the diagnostic aspects of these analyses. Data artifacts that obscure the reservoir signal limit the reliability of interpretation and subsequent analysis, even for the most experienced evaluator. Moreover, the algorithms currently used in the industry often amplify noise and are inherently unstable in the presence of outliers.
This work proposes to significantly improve data processing by using modern algorithms for data filtering and statistical (Bayesian) methods for estimating derivative functions in the presence of random and biased noise. Put simply, this work will improve the ability to interpret time-rate-pressure data and increase confidence in the diagnosis of well performance behavior.
The cost of computational power available in common desktop computers, measured in dollars per floating-point operation, has decreased more in the last ten years than in any preceding ten-year period. This level of computation has inspired immense interest in the development of free, open-source software projects that make use of the newly available resources and provide access to legacy numerical-processing code through the features of modern programming languages. Having low-level libraries abstracted into high-level languages provides the capability to apply complex statistics with (relatively) simple computer modules. In turn, this capability gives rise to a new concept: the ability to create modules/programs that reason about our intentions, as opposed to our implementations.
In this work, we illustrate two applications:
First, outlier filtering that makes no assumptions about the distribution or density of the data (i.e., about what is or is not an outlier); it requires only that the regression of a known model to the data be statistically robust (i.e., not influenced by outliers). We evaluate the method by applying it, using publicly available data, to every horizontal well in the Midland and Delaware basins put on production since January 2013. Second, the evaluation of data derivatives by Bayesian inference. Bayesian methods provide two advantages: (1) a distribution of non-unique results that enables us to visualize uncertainty due to data quality, and (2) automatic hyperparameter optimization, which in this case acts as regularization for smoothness of the derivative. We evaluate this methodology by comparison to existing methods, both for cases from the petroleum literature and for field cases using permanent-downhole gauge data.
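To make the first application concrete, the sketch below shows one way such a filter can be built: a known model (here, an exponential decline, linear in log-rate) is fit by a robust regression, and points are flagged as outliers only by their residuals to that robust fit. This is an illustrative sketch, not the paper's exact algorithm; the model choice, Huber weighting, and `cutoff` threshold are assumptions for the example.

```python
import numpy as np

def robust_linear_fit(x, y, n_iter=10, c=1.345):
    """Fit y ~ a + b*x by iteratively reweighted least squares with
    Huber weights, so a handful of outliers cannot pull the
    regression line toward themselves."""
    w = np.ones_like(y)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        A = np.column_stack([np.ones_like(x), x]) * sw[:, None]
        a, b = np.linalg.lstsq(A, y * sw, rcond=None)[0]
        r = y - (a + b * x)
        # Robust scale estimate from the median absolute deviation.
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)  # Huber weights
    return a, b

def outlier_mask(t, q, cutoff=3.0):
    """Flag points whose residual to a robust exponential-decline fit
    (linear in log-rate) exceeds `cutoff` robust standard deviations.
    No distributional assumption is made about the outliers
    themselves; only the fit is required to be robust."""
    y = np.log(q)
    a, b = robust_linear_fit(t, y)
    r = y - (a + b * t)
    s = 1.4826 * np.median(np.abs(r - np.median(r)))
    return np.abs(r) > cutoff * s
```

In this design, the filter never needs to know how the bad points are distributed: because the median-based scale and the downweighting ignore large residuals, the fitted trend tracks the bulk of the data, and anything far from it is flagged.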
We observe that the data filtering method works well and applies generally to any data set for which we have an a priori model assumption (e.g., a function y(x)). In practice, we can either incorporate the regression of the assumed model into the filtering algorithm or use a simpler model for filtering and pass the output along for further processing. We also observe that the derivative computation method yields smoother, less noisy derivatives than existing sampling and smoothing methods.
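One way to realize such a Bayesian derivative estimate is Gaussian-process regression: because differentiation is a linear operator, the derivative of a GP posterior is available in closed form, and the kernel hyperparameters (here a lengthscale `ell` and noise level `sigma_n`) play the role of the smoothness regularization. This is a minimal sketch under those assumptions, not the paper's exact formulation, and it fixes the hyperparameters rather than optimizing them.

```python
import numpy as np

def rbf(x1, x2, ell):
    """Squared-exponential (RBF) covariance between two 1-D point sets."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_derivative(x, y, x_star, ell=1.0, sigma_n=0.1):
    """Posterior-mean estimate of dy/dx under a GP prior.

    Differentiating the posterior mean k(x*, X) @ alpha with respect
    to x* gives a smooth derivative estimate without ever
    finite-differencing the noisy data."""
    K = rbf(x, x, ell) + sigma_n**2 * np.eye(x.size)
    alpha = np.linalg.solve(K, y)
    # d/dx* of the RBF kernel: -(x* - x) / ell**2 * k(x*, x)
    dk = -(x_star[:, None] - x[None, :]) / ell**2 * rbf(x_star, x, ell)
    return dk @ alpha
```

For example, applied to noisy samples of sin(x), the estimate at x = π lands near cos(π) = -1 without the amplified scatter a finite-difference derivative would show. In a full Bayesian treatment, `ell` and `sigma_n` would be chosen by maximizing the marginal likelihood, which is the automatic hyperparameter optimization referred to above.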
We propose new methodologies for data smoothing and derivative analysis and illustrate their application to cases from the petroleum literature. These new tools are relevant to any engineer performing production (and pressure) data analysis, whether empirical production decline analyses or physics-based rate-transient/pressure-transient analyses.