The influence of geological development factors such as reservoir heterogeneity needs to be comprehensively considered when determining oil well production control strategies. In the past, many optimization algorithms have been introduced and coupled with numerical simulation for well control problems. However, these methods require a large number of simulations, and the experience gained from those simulations is not preserved by the algorithm; for each new reservoir, the optimization must start over. To address these problems, two reinforcement learning methods are introduced in this research: a personalized Deep Q-Network (DQN) algorithm and a personalized Soft Actor-Critic (SAC) algorithm are designed to determine optimal oil well control. The inputs of the algorithms are matrices of reservoir properties, including reservoir saturation, permeability, etc., which can be treated as images; the output is the oil well production strategy. A series of samples is cut from two different reservoirs to form a dataset. Each sample is a square area centered on an oil well, with its own permeability and saturation distribution and its own oil-water well pattern. All samples are then expanded using image augmentation techniques to further increase the number of samples and improve their coverage of reservoir conditions. During training, two training strategies are investigated for each personalized algorithm; the second strategy uses four times more samples than the first. Finally, a new set of samples is designed to verify the models' accuracy and generalization ability. Results show that both the trained DQN and SAC models can learn and store historical experience and recommend appropriate control strategies based on the reservoir characteristics of new oil wells. The agreement between the optimal control strategies obtained by both algorithms and the global optimum obtained by the exhaustive method exceeds 95%, with the personalized SAC algorithm outperforming the personalized DQN algorithm. Compared with traditional Particle Swarm Optimization (PSO), the personalized models are faster and better at capturing complex patterns and adapting to different geological conditions, making them effective for real-time decision-making and optimization of oil well production strategies. Because a large amount of historical experience is learned and stored by the algorithm, the proposed method requires only one simulation for a new oil well control optimization problem, demonstrating its superiority in computational efficiency.
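To make the image-style input concrete, below is a minimal sketch (not the authors' implementation) of how a DQN-type value network could map stacked reservoir property maps around a single well to Q-values over a discrete set of production-rate levels. The network layout, grid size, channel count, and action count are all illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn

class WellControlQNet(nn.Module):
    """Hypothetical Q-network: maps a stack of reservoir property maps
    (e.g. saturation and permeability around one well) to Q-values for a
    discrete set of production-rate levels. All sizes are assumptions."""
    def __init__(self, n_channels: int = 2, n_actions: int = 5, grid: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            feat = self.encoder(torch.zeros(1, n_channels, grid, grid)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(feat, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

# Usage: one 32x32 sample with saturation and permeability channels.
net = WellControlQNet()
state = torch.rand(1, 2, 32, 32)       # placeholder reservoir property maps
q_values = net(state)                  # shape: (1, n_actions)
action = int(q_values.argmax(dim=1))   # greedy choice of production-rate level
```

In the same spirit, a SAC agent would replace the discrete Q-head with a stochastic policy over continuous (or finely discretized) production rates, but the image-like encoding of reservoir properties would be shared.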
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10362194
DOI: http://dx.doi.org/10.1016/j.heliyon.2023.e17919