Interview with Boris Belozerov, head of the Gazprom Neft Science and Technology Center Digital Technologies and Geological Expertise Department

Boris Belozerov, head of the Gazprom Neft Science and Technology Center Digital Technologies and Geological Expertise Department, talks about the latest digital solutions for efficient prospecting and exploration.

Oil and Capital analytical magazine: part 1 and part 2


Big data and machine learning technologies, together with the latest modeling methods, open up new opportunities for studying and developing oil and gas fields: obtaining and processing large amounts of geological, physical, chemical and other information, and making optimal decisions based on it.

According to the Gazprom Neft Science and Technology Center (NTC), introducing intelligent systems and digital tools at all stages of exploration and development of petroleum fields can increase the net present value (NPV) of assets by up to 20%.

In an interview with O&C correspondent Irina Rogova, Boris Belozerov, head of the Gazprom Neft NTC Digital Technologies and Geological Expertise Department, talks about the most promising digital projects in exploration and development of petroleum fields, many of which are still at the testing stage.

Boris Belozerov: Digital tools open up new horizons in petroleum exploration

– Today Gazprom Neft and its Science and Technology Center focus most of their attention on creating tools to optimize the company’s exploration and production workflows. In this area we have developed a line of unique digital solutions, some of which are genuine know-how. One example is the Digital Core project, a digital core analysis laboratory.

Gazprom Neft NTC annually performs laboratory tests on about 3,000 meters of core samples and about 500 samples of reservoir fluid. The data obtained from these tests support reliable reserve estimates, reducing risks and increasing the profitability of petroleum projects.

Digital Core

Why do we consider this project one of the most important in the Field Development discipline, above all for hard-to-recover reserves? Because today there is effectively no other way to study the properties of low-permeability reservoirs. Laboratory testing, to begin with, takes a very long time. Secondly, an experiment can never recreate exactly the same hydrodynamic conditions and processes that actually take place inside the reservoir. To be fair, it is on the basis of laboratory experiments, carried out in vast numbers, that we have accumulated a large amount of information on core samples. But getting data in the lab is too expensive and takes too long.


What is another significant drawback of laboratory core testing? Once we have studied a sample in the lab, we have, one way or another, destroyed it: the original physical and chemical properties are lost, and no new experiments can be run on the same sample. Therefore, within the Digital Core project we have launched the Digital Filtration Laboratory subproject, which aims to create a prototype model, a so-called “digital clone,” of the reservoir.

The essence is this: we extract a core sample from the well, place it in a high-resolution tomographic scanner and obtain, figuratively speaking, a digital copy of the productive formation in which the structure and features of the sample are reproduced in great detail. On such a digital copy we can then simulate various experiments: fluid filtration through the core, the effect of different agents on the sample, and so on.
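For illustration, here is a minimal sketch of one computation such a digital copy makes possible: thresholding a micro-CT intensity volume into pore and grain voxels and estimating porosity. The volume, threshold and all numbers below are synthetic stand-ins, not parameters of the actual Digital Core toolchain.

```python
# A minimal sketch of one "digital core" computation: segment a micro-CT
# intensity volume into pore and grain voxels and estimate porosity.
# The volume here is synthetic; a real workflow would load reconstructed
# tomography slices instead.
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for a reconstructed micro-CT scan: a 200^3 voxel cube of
# X-ray attenuation values (grain material is denser, hence brighter).
volume = rng.normal(loc=120.0, scale=15.0, size=(200, 200, 200))
pores = rng.random(volume.shape) < 0.18          # ~18% pore space
volume[pores] = rng.normal(40.0, 10.0, size=pores.sum())

# Simple global threshold between the pore and grain intensity peaks.
# Real studies typically use Otsu or watershed segmentation instead.
threshold = 80.0
pore_mask = volume < threshold

porosity = pore_mask.mean()                       # pore voxels / total voxels
print(f"estimated porosity: {porosity:.3f}")
```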


This technique, firstly, gives a fairly quick answer, because we are dealing with a model of an experiment. Secondly, there is no need to run every test physically: it suffices to study the properties and behavior of the digital clone of the reservoir and then send core samples to the laboratory only to validate the modeling results, i.e., to investigate specific processes and fine-tune the model. If the physical experiment reveals any deviations, we adapt the corresponding parameters of the model and continue the studies on the digital core clones. This technique will be in demand for all low-permeability targets, which is to say all our hard-to-recover reserves, first of all the Bazhenov and Achimov reservoirs.

In addition to saving time and money, the Digital Core concept gives us a key advantage: access to reservoir properties at the micro level.

This is especially relevant because in many cases the pore channels are too small for real tests. In particular, it is impossible in the laboratory to quickly and reliably develop the necessary pressure inside the pores to obtain objective data on fluid or water filtration rates. Under laboratory conditions, such tests may take about 9-12 months, precisely because of the microscopic pore sizes. Digital methods compensate for all this and yield more accurate, higher-quality data on the properties of reservoirs with any permeability characteristics.

The second significant advantage is that we can run digital experiments without restriction to extract the maximum data on the reservoir’s characteristics and choose the best solutions for it. First of all, we are interested in optimal filtration conditions: at what rate should water be injected into injection wells to maximize oil recovery from the reservoir? If, for various physical reasons, we can recover on average only about 40% of potential reserves, then enhanced oil recovery (EOR) methods can raise the recovery factor to 60-80%, depending on the chemical composition used, its proportions, the injected volume, and so on.

Therefore, we see great opportunities for the Digital Core concept in such tasks as the targeted selection of chemical compounds for enhanced oil recovery.

Using a digital model, we can take the same target-oriented approach to petroleum reservoirs as, for example, personalized medicine does, where diagnosis and treatment are selected in strict accordance with the individual characteristics of the organism.

We have supplemented the basic Digital Core tools with molecular modeling functionality: the program not only recreates the fluid and its flow rate but also shows how the molecules of oil, water and other downhole components interact with the molecules of the injected chemicals. Having analyzed all the options, we can use machine learning methods to choose the most balanced chemical composition, unique to a particular well or area of the field.
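As a hedged illustration of how machine learning might rank candidate chemical compositions, consider the sketch below: a regressor is trained on past (blend → incremental recovery) records and scores candidate blends for one well. The feature set, data and model choice are all hypothetical assumptions, not the NTC implementation.

```python
# A hedged sketch of the composition-selection step: fit a model on past
# (chemical blend -> incremental recovery) records, then rank candidate
# blends for a given well. All feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Historical records: surfactant %, polymer %, salinity (g/l), reservoir
# temperature (degC) -> observed incremental recovery factor.
X_hist = rng.uniform([0.0, 0.0, 5.0, 40.0], [2.0, 1.0, 50.0, 120.0],
                     size=(300, 4))
y_hist = (0.05 + 0.06 * X_hist[:, 0] - 0.001 * X_hist[:, 2]
          + rng.normal(0, 0.01, 300))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_hist, y_hist)

# Candidate blends for one specific well (fixed salinity/temperature).
candidates = np.array([
    [0.5, 0.2, 20.0, 85.0],
    [1.5, 0.4, 20.0, 85.0],
    [1.0, 0.8, 20.0, 85.0],
])
scores = model.predict(candidates)
print("predicted uplift per blend:", np.round(scores, 3))
print("best candidate blend:", candidates[np.argmax(scores)])
```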

In addition, we can upload a digital copy of the core material to the server and store it for as long as necessary, until the need for it arises again (to clarify or obtain new data, or to build a model for developing reservoirs with similar properties).

Digital Core is one of NTC’s most recent projects, and we are currently extending it with additional functionality.

Core microimage interpretation

Geological study of hard-to-reach reservoirs involves testing and comparative analysis of a large number of rock samples, including at the micro level. This is needed to obtain the most complete information about the structure of the reservoir and, above all, about the filtration properties of microscopic pores and grains. In this work, the most valuable source of data is the multitude of core slice images obtained by microphotography.

For instance, the distribution of grains in an image tells us under what conditions a particular rock was formed and how fluid will filter through it.

This is a fairly narrow area of knowledge, so previously information of this kind could be used, figuratively speaking, by one and a half or two specialists in the company directly involved in this area. Together with colleagues from the Moscow Institute of Physics and Technology Engineering Center, we have developed a tool that uses computer vision technologies to analyze and interpret a huge number of microscopic images of rock sections. During processing, the computer finds and selects the relevant segments in the rock image, noting all the important indicators and properties that geologists or petrophysicists can then use in their work. The same technology is applied, for example, in face recognition software.
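A minimal sketch of the underlying idea follows, assuming a simple threshold-and-label pipeline rather than the trained models the real tool uses: segment grains in a (here synthetic) photomicrograph and measure their size distribution.

```python
# A minimal sketch of image interpretation: segment grains in a
# thin-section photomicrograph and measure their size distribution.
# The image is synthetic; a production system would use trained
# computer-vision models rather than a single global threshold.
import numpy as np
from skimage.draw import disk
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Synthetic "micrograph": bright grains on a dark pore background.
rng = np.random.default_rng(2)
image = np.zeros((512, 512), dtype=float)
for _ in range(40):
    rr, cc = disk(tuple(rng.integers(30, 480, size=2)),
                  rng.integers(8, 25), shape=image.shape)
    image[rr, cc] = 1.0
image += rng.normal(0, 0.05, image.shape)

# Otsu threshold separates grain pixels from pore pixels; connected
# components then give one region per (non-touching) grain.
grains = image > threshold_otsu(image)
regions = regionprops(label(grains))

diameters = [2 * np.sqrt(r.area / np.pi) for r in regions]
print(f"grains found: {len(regions)}, "
      f"mean diameter: {np.mean(diameters):.1f} px")
```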

First of all, core microimage interpretation yields a large array of additional information that was previously unavailable to exploration geologists and geophysicists.

For example, computer image analysis can provide more accurate distributions of chromaticity, porosity and other important indicators and physical properties of the studied formation.

Secondly, by extending the core study chain we create its digital clone, which can be stored permanently on the server and retrieved when needed. Geologists no longer have to travel to the field: to return to a particular rock sample and compare its properties with samples from many other wells, including wells in other petroleum-producing regions, they only need to work with images in the analog search system. This significantly reduces both the duration of well survey operations and the uncertainty of field data.


The very ability to understand the physical structure of an exploration target from one or several images significantly reduces the potential risks and costs of field development projects. The arsenal of geologists and petrophysicists is expanded with additional knowledge not only about the microscopic structure of the reservoir but also about the best drilling and well survey practices; with the new tools, this knowledge can be retrieved from the archive and replicated at the company’s other sites. The core image interpretation system has already been applied at the Vostochno-Messoyakhskoe field.

In general, our task is to create a single field data warehouse, for the key challenge of the entire global oil industry, not only the domestic one, is to build a database of digitized, so-called “formatted” or rendered, images that can serve as the basis for machine learning methods.

The fact is that modern neural networks operate, one way or another, by comparing data, and the petroleum industry today is in tremendous need of such comparable data. This is why NTC is currently working to create a database of digitized and labeled information, which will then enable the application of all modern technologies of machine learning, data processing, and so on.

Smart Exploration

Cognitive Geologist is another of NTC’s latest digital projects; it optimizes the geophysical and geological data processing workflow from field survey to the final outcome of the exploration program.

The main objective of Cognitive Geologist is to integrate the data obtained at all stages of exploration: seismics, exploratory drilling, coring, aerial photography, etc. All this information flows into a single database for processing. Using machine learning methods and artificial intelligence, we effectively replace a large number of working models, one for each type of information, with a single metamodel that each time yields objective knowledge about the prospectivity of a particular promising area.
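One common way to build such a metamodel is stacking: per-source models feed a combining model. The sketch below assumes three hypothetical data sources and toy data; it illustrates the pattern, not the actual Cognitive Geologist architecture.

```python
# A hedged sketch of the "metamodel" idea: per-source models (seismic
# attributes, well logs, core measurements) are combined by a stacking
# ensemble into a single prospectivity estimate. Features are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

rng = np.random.default_rng(3)
n = 400

seismic = rng.normal(size=(n, 5))   # e.g. amplitude/impedance attributes
logs = rng.normal(size=(n, 8))      # e.g. GR, resistivity, density logs
core = rng.normal(size=(n, 3))      # e.g. lab porosity/permeability
X = np.hstack([seismic, logs, core])
y = 0.5 * X[:, 0] + 0.3 * X[:, 6] + rng.normal(0, 0.1, n)  # toy target

def cols(a, b):
    """Select one source's columns so each base model sees only its data."""
    return FunctionTransformer(lambda Z: Z[:, a:b])

meta = StackingRegressor(
    estimators=[
        ("seismic", make_pipeline(cols(0, 5),
                                  GradientBoostingRegressor(random_state=0))),
        ("logs", make_pipeline(cols(5, 13),
                               GradientBoostingRegressor(random_state=1))),
        ("core", make_pipeline(cols(13, 16),
                               GradientBoostingRegressor(random_state=2))),
    ],
    final_estimator=Ridge(),   # combines the per-source predictions
)
meta.fit(X, y)
print("prospectivity score for an example area:", meta.predict(X[:1])[0])
```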

This approach can be applied to any exploration project where there is a need to perform geophysical surveys, drill a prospecting well and find a place where petroleum can be produced in the most efficient manner.

That is, the purpose of the Cognitive Geologist tool is to study the exploration and potential production region as quickly as possible. Until recently, exploration projects took us, on average, 3 to 5 years to complete, and most of that time was spent on processing and analyzing the collected data and information.

Automated data interpretation allows us to reduce the field exploration time to six months, or a year at most.

For instance, we achieve significant savings by using seismic data collected earlier at nearby fields or in neighboring regions with similar conditions. Then, using the same machine learning methods, we select evaluation criteria that automatically pick out the most promising areas from this block of geophysical information.


Another wide range of tasks in this project concerns optimizing the search for promising areas in hard-to-reach reservoirs. As I have already noted, the standard data interpretation methods in common use today do not work for hard-to-recover reserves: when we process seismic data on such reserves with traditional methods, we get no clear answer about what kind of reservoirs we are dealing with. Now all the incoming information is integrated and interpreted simultaneously; thanks to this, we shorten the exploration time and get more data. With this new tool in hand, we can rerun old data or obtain missing data for new promising intervals. In addition, we shorten by 2-3 years the process of answering the main question of any exploration project: where exactly should each well be drilled?

Smart Drilling

The introduction of intelligent systems into drilling is one of Gazprom Neft NTC’s top priorities, since the company drills more than a thousand wells a year; these are the most capital-intensive projects in field development. In 2012, the GeoNavigator drilling control center was established at NTC, and the latest drilling technologies, including digital ones, are introduced on its basis. State-of-the-art digital drilling solutions are implemented in cooperation with IBM and the Skolkovo Institute of Science and Technology (Skoltech). We called this project Smart Drilling, by analogy with the other cognitive models developed at NTC.


Often we drill a horizontal well 2.5 km long at a depth of 1.5-2 km, and we need to land it in an oil reservoir only 5 to 7 m thick. Yet the information about where the drilling bit actually is arrives only after a delay. It is for such situations that we are creating a digital tool (based on machine learning methods, nonlinear regressions, etc.) that transforms drilling rig operation data into useful information about the bit’s movement underground. In effect, using indirect information we will be able to determine the composition of the rock and understand whether we are still in the right interval or have left it and must urgently adjust the direction of drilling.
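A minimal sketch of this kind of tool, on synthetic data: a classifier maps indirect surface measurements (rate of penetration, weight on bit, torque) to the probability that the bit is still inside the target rock. The features, numbers and model are illustrative assumptions, not the NTC tool.

```python
# A minimal sketch of the geosteering idea: infer from indirect surface
# measurements whether the bit is still in the target rock.
# Data and feature set are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
n = 1000

# Synthetic training data: rows labelled 1 while drilling the reservoir
# interval, 0 after exiting into the surrounding shale.
in_target = rng.random(n) < 0.7
rop = np.where(in_target, rng.normal(25, 4, n), rng.normal(12, 3, n))      # m/h
wob = rng.normal(80, 10, n)                                                # kN
torque = np.where(in_target, rng.normal(9, 1, n), rng.normal(13, 1.5, n))  # kN*m
X = np.column_stack([rop, wob, torque])

clf = GradientBoostingClassifier(random_state=0).fit(X, in_target.astype(int))

# Streaming check while drilling: probability the bit is still in the pay.
latest = np.array([[14.0, 82.0, 12.5]])
p = clf.predict_proba(latest)[0, 1]
print(f"probability bit is in target interval: {p:.2f}")
```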

For now, the Smart Drilling project is being piloted at a number of fields operated by Gazprom Neft.

As for well completion technologies, the role of digital methods here is growing significantly, since completion itself is very expensive. Digital technologies are not yet widely used in this segment, but the potential for them is great.

The first and quickest thing we can do is develop geomechanical models. Here, too, we can use metamodelling and machine learning to make the geomechanical model work in real time and provide information about the well the moment it is needed.
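The metamodelling pattern itself can be sketched briefly: precompute runs of a slow physics simulator offline, fit a fast regression surrogate, and query the surrogate in real time. The "simulator" below is a stand-in analytic function, not a real geomechanical code.

```python
# A hedged sketch of metamodelling: a fast regression surrogate is
# trained on precomputed runs of a (slow) simulator so answers arrive
# in real time at the rig.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def slow_simulator(depth_km, pore_pressure_mpa):
    # Placeholder for an expensive full-physics run: returns, say,
    # a minimum horizontal stress estimate in MPa.
    return 15.0 * depth_km + 0.6 * pore_pressure_mpa

# Design of experiments: a grid of offline simulator runs.
depths = np.linspace(1.0, 4.0, 10)
pressures = np.linspace(10.0, 60.0, 10)
X = np.array([(d, p) for d in depths for p in pressures])
y = np.array([slow_simulator(d, p) for d, p in X])

surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)

# Real-time query: milliseconds instead of a full simulation run.
print("predicted stress, MPa:", surrogate.predict([[2.7, 35.0]])[0])
```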

The second area is the search for optimal completion designs based on best practices. Suppose we have already drilled a thousand wells; in some of them the completion was optimal (for the given reservoir properties it delivered the best well performance), while in others it was less successful. By describing this past experience with graphs and multidimensional models, we can combine it with the best practical results achieved at other fields, including those operated by other companies. In this way the model will be constantly updated and optimized.


Wherever there is a task of optimizing something, digital solutions are always ahead of other ones. At the end of the first quarter of 2018, in the course of a well workover at the Yuzhno-Priobskoe field, Gazpromneft-Khantos (Khanty-Mansi Autonomous Okrug) drilled and completed a sidetrack with a horizontal section more than 700 meters long, a record for the company. The well’s total length is 3.6 thousand meters.

Boris Belozerov: Digital methods are beginning to increase the profitability of oil and gas assets

– As I have already mentioned, for the tools of any digital area, be it Digital Core, Smart Drilling or Smart Production, to work as efficiently as possible, the company first needs a digital laboratory IT platform covering the entire range of operational tasks, from studying reservoir properties to recovering hydrocarbons.

Digital Reservoir Clones

One of our key projects in digital support of field development is analog search based on machine learning, currently being developed in partnership with Tomsk Polytechnic University, ECO-Tomsk LLC and IBM.

Exploration targets always suffer from a lack of data. That is why we have to turn to analog selection methods, especially when we move into new regions or new areas of existing fields. Data on greenfield assets are especially scarce and fragmentary, so the exploration team’s main questions are: what can we find here? What are the ranges of reservoir temperature, pressure, filtration properties and other parameters? To answer them, we have to look for analogs, summarizing data from other areas and wells. This job usually takes one or two people (as a rule, a geologist and/or petrophysicist) about 40% of their working time, leaving only 20% for decision-making and practical steps to develop the asset.

Therefore, we have started to create a tool that, first of all, will quickly search for analogs using machine learning algorithms. Second, in the future the system will retrieve the necessary parameter distributions from the database on the basis of an advanced similarity function. The geologist will then analyze all the data in already assembled form.
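The core of such analog search can be sketched as a nearest-neighbor query over standardized reservoir parameters. The parameter set, records and plain Euclidean distance below are simplifying assumptions; the production system's similarity function is more advanced.

```python
# A minimal sketch of analog search: rank fields in a knowledge base by
# distance over standardized reservoir parameters. The parameters and
# records are hypothetical.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Knowledge base: [porosity, permeability mD, net pay m, temperature degC]
fields = {
    "Field A": [0.18, 50.0, 12.0, 85.0],
    "Field B": [0.22, 300.0, 25.0, 70.0],
    "Field C": [0.08, 0.5, 8.0, 110.0],
    "Field D": [0.17, 40.0, 10.0, 90.0],
}
names = list(fields)
scaler = StandardScaler().fit(np.array(list(fields.values())))
X = scaler.transform(np.array(list(fields.values())))

nn = NearestNeighbors(n_neighbors=2).fit(X)

# New target with sparse data: rough parameter estimates for the query.
query = scaler.transform([[0.17, 45.0, 11.0, 88.0]])
dist, idx = nn.kneighbors(query)
for d, i in zip(dist[0], idx[0]):
    print(f"analog: {names[i]} (distance {d:.2f})")
```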

New tools are being added so that the system can not only provide parameter distributions but also, for example, generate a typical production profile for analogous fields. Then, after drilling a new well, we will know exactly under what conditions it can reach the desired oil production rate.

This is not just a software product for finding analogs but a reliable analytical tool that answers the main question of exploration: what can we find here? As such, it can serve different groups of professional interests simultaneously: there are different kinds of analogs for geologists, petrophysicists, geophysicists, reservoir engineers, drilling engineers, and so on. The system “understands” who is asking and about what. In a sense it “humanizes” artificial intelligence but, more importantly, it speeds up and streamlines the field development process.

The next stage in development of this tool may be the integration of all its potential functionalities into a single information database.

Hydraulic fracturing and other well stimulation methods

The company’s most important business goal is to ensure the maximum return from the reservoir, especially at mature fields and with hard-to-recover reserves. This is why so much attention is paid to well stimulation technologies.

One of our main ongoing projects is the modeling of hydraulic fracturing (hydrofrac) processes.

Gazprom Neft began to actively use hydraulic fracturing in horizontal wells in 2011; today this technology is applied in most of them (about 60%). We now also do a lot of molecular, or personalized, modeling of chemical compositions, including those for hydraulic fracturing.

Using mathematical models developed for hard-to-recover reserves in cooperation with the Moscow Institute of Physics and Technology Engineering Center, we have created our own hydraulic fracturing simulator called ROST.

It simulates the growth of fractures in hard-to-recover formations such as the Bazhenov formation and other low-permeability or fractured reservoirs.

There is still no technology in the world that, on the basis of simulation, would allow choosing the optimal and efficient way to produce oil from such reservoirs. While in conventional fields we were used to dealing with “conventional” physics, the Bazhenov formation, for example, is governed by totally different regularities and many nonlinear dependencies that need to be calculated.

I should say that the most efficient modern solutions to a variety of operational problems are based on processing ever more information, which in turn creates new challenges.


Big data and supercomputers

Most typically, engineers turn to numerical modeling methods to solve problems involving large data sets. In the aircraft industry, for instance, the dynamic characteristics of aircraft are studied in much the same way as we study the reservoir. The lower the permeability, the longer it takes to run the model.

Today there are digital solutions that aggregate data from existing models and create a new metamodel which, using multidimensional regression tools and machine learning methods, reproduces reservoir behavior in the most reliable manner.

So we are working to apply such metamodels to our fields with hard-to-recover reserves, and not only to them.

But first of all we use metamodelling for low-permeability reservoirs, because estimating filtration in large-volume models by numerical methods is impossible.

To run such metamodels, we use the supercomputer cluster of the Saint-Petersburg Polytechnic University. This is especially relevant for modeling the Priobskoe field (developed by Gazprom Neft subsidiary Gazpromneft-Khantos), perhaps the most difficult in terms of creating a digital clone: its digital model contains billions of cells (data elements). I believe we will continue using the university’s unique computing capabilities in the future. Our task, as customers of digital solutions, is to reduce the computational complexity of digital models as much as possible; this is vital because even a supercomputer cannot run these models at the desired speed.

I should also mention the automated well test data interpretation system, a tool that is very important for efficient field development. Methodologically, the process is not the most difficult one: a device is run into the well to measure parametric data, which are then analyzed, today still entirely “by hand.” If the well test interpretation and analysis process is automated with machine learning tools, we will be able to correlate different wells with each other through data integration. On the one hand, this will do away with manual labor; on the other, we will create digital clones of wells and reservoirs.
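For a flavor of what such automation involves, here is a sketch of one standard step of well test interpretation: computing the log-time pressure derivative of a buildup and reading permeability off its radial-flow plateau. The data are synthetic and idealized; real interpreters also smooth the derivative and classify flow regimes automatically.

```python
# A minimal sketch of one well-test interpretation step: compute the
# derivative of pressure with respect to ln(t) (the standard diagnostic
# curve) and estimate permeability from its radial-flow plateau.
# Field units; data are synthetic.
import numpy as np

q, B, mu, h = 500.0, 1.2, 0.8, 15.0    # STB/d, RB/STB, cP, ft
k_true = 50.0                           # mD, used only to synthesize data

t = np.logspace(-2, 2, 200)             # elapsed time, hours
m = 162.6 * q * B * mu / (k_true * h)   # semilog slope per log10 cycle, psi
dp = m * np.log10(t) + 800.0            # idealized radial-flow buildup

# Derivative with respect to ln(t): its plateau equals m / ln(10).
deriv = np.gradient(dp, np.log(t))
plateau = np.median(deriv[len(deriv) // 2:])

k_est = 70.6 * q * B * mu / (h * plateau)   # 70.6 = 162.6 / ln(10)
print(f"estimated permeability: {k_est:.1f} mD")
```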


Second life of petroleum fields

Another of our projects is the search for new promising intervals, which we have tentatively called the “second life of petroleum fields.” Many wells have been drilled in license areas whose production life spans decades, but because field development was much easier in the past, some recoverable reserves (mostly in less promising areas and hard-to-reach horizons) were never analyzed for lack of the necessary exploration technologies.

Times have changed, and along with prospecting for new fields there is an urgent need to go back and increase production from old ones.

For this purpose, we created a tool that automatically analyzes expert data on intervals that have already been interpreted at different times, including wells where similar intervals were identified because they were the main target rather than a collateral one.

With a ready interpretation of the cross-section in hand, petrophysicists and geologists practically re-discover oil and gas horizons, studying old logging curves, seismic interpretations and other analytical data at a new technological level, which ultimately allows a conclusion about how promising a given area is. If the area is recognized as promising, other experts join the work to perform additional geological and geophysical surveys in the field. Well survey costs are not high in this case, because the well has already been drilled and it only remains to run the device to the specified depths. And if we get an oil inflow from these intervals, the newly explored reservoir adds to the total production of the existing well.
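A hedged sketch of the re-interpretation idea follows: train a classifier on intervals experts have already interpreted (log readings → pay / not pay) and score old, uninterpreted intervals in existing wells. Curves, cutoffs and data are hypothetical.

```python
# A hedged sketch of the re-interpretation tool: a classifier trained on
# expert-interpreted intervals scores old, uninterpreted intervals.
# Curves and cutoffs are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000

gr = rng.uniform(20, 150, n)            # gamma ray, API
rt = 10 ** rng.uniform(0, 2.5, n)       # deep resistivity, ohm*m
phi = rng.uniform(0.02, 0.30, n)        # porosity, fraction
# Expert label used for training: clean, resistive, porous -> pay.
pay = ((gr < 70) & (rt > 30) & (phi > 0.12)).astype(int)

X = np.column_stack([gr, np.log10(rt), phi])
clf = LogisticRegression(max_iter=1000).fit(X, pay)

# Score an old, never-interpreted interval from an existing well.
old_interval = np.array([[55.0, np.log10(80.0), 0.16]])
print(f"pay probability: {clf.predict_proba(old_interval)[0, 1]:.2f}")
```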

This technology is already being used successfully to find intervals left out of the operation of old wells at Gazpromneft-Noyabrskneftegaz and Gazpromneft-Muravlenko (two Gazprom Neft subsidiaries operating in the Yamal-Nenets Autonomous Okrug). For such assets it is critical to find areas that can be further explored and added to the company’s balance sheet.

Preliminary expert analysis of such missed intervals, similar to those in a number of wells of the Priobskoe field, showed that the digital model identifies 14% more oil-saturated layers than well logging data interpretation alone.


I should also note that in the project to search for new promising intervals we have done more than simply compare new and old data on the potential of our reservoirs.

We have built a self-learning model which, even as a first approximation, tells the project team that certain old wells contain a given number of intervals which, according to the machine’s calculations, are promising with a given probability.

The machine’s conclusions are based on data from similar cross-sections. Having analyzed this information, the petrophysicist may approve or reject the message from the artificial intelligence: put a “like” or a “dislike,” just as in social networks.

If the expert puts a “like,” the quality of the automatic interpretation is confirmed by the expert’s personal competence. A “dislike” signals that the machine made a mistake, or that the additional production from the old wells does not fully meet profitability norms or other important criteria, and so on. The algorithm remembers the expert’s conclusions and improves the model further. Thus, the system’s predictive ability is constantly improved through machine learning.
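The like/dislike loop maps naturally onto online learning, where each expert verdict becomes a new labelled example that incrementally updates the model. A minimal sketch, with stand-in features and an assumed linear model:

```python
# A minimal sketch of the like/dislike loop: each expert verdict becomes
# a new labelled example and incrementally updates the model. The
# features stand in for whatever interval descriptors the real system uses.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(6)

model = SGDClassifier(loss="log_loss", random_state=0)
X0 = rng.normal(size=(200, 4))
y0 = (X0[:, 0] + X0[:, 2] > 0).astype(int)
model.partial_fit(X0, y0, classes=[0, 1])    # initial training

def expert_feedback(interval_features, liked):
    """Fold one 'like' (1) or 'dislike' (0) back into the model."""
    model.partial_fit(interval_features.reshape(1, -1),
                      np.array([int(liked)]))

candidate = rng.normal(size=4)
print("before:", model.predict_proba(candidate.reshape(1, -1))[0, 1])
expert_feedback(candidate, liked=False)      # expert disagrees
print("after: ", model.predict_proba(candidate.reshape(1, -1))[0, 1])
```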

Smart assistant to petroleum engineer

The Cognitive Assistant, unlike the Cognitive Geologist, is not just a digital tool but an intelligent platform trained in all the competencies of a petroleum engineer, suggesting solutions or warning of an impending emergency. The system monitors the field in real time, tracks parametric data, and notes any deviations or patterns in the behavior of indicators. In addition, the Cognitive Assistant analyzes well performance, and if a well can work better, the system will suggest optimizing certain operating parameters: for example, opening the choke a little more, or running a device to take additional measurements.
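The deviation-monitoring part can be sketched as simply as a rolling z-score over one telemetry channel; real assistants combine many such detectors with physics-based models. The signal and thresholds below are synthetic assumptions.

```python
# A minimal sketch of the monitoring idea: flag deviations in a well's
# telemetry with a rolling z-score. The signal here is synthetic.
import numpy as np

rng = np.random.default_rng(7)

# Hourly bottomhole pressure with a developing anomaly at the end.
pressure = rng.normal(250.0, 1.5, size=500)
pressure[470:] -= np.linspace(0, 12, 30)     # drift: e.g. a choke problem

window = 48
for i in range(window, len(pressure)):
    ref = pressure[i - window:i]             # recent history as baseline
    z = (pressure[i] - ref.mean()) / ref.std()
    if abs(z) > 4.0:
        print(f"hour {i}: deviation detected (z = {z:.1f})")
        break
```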

Some of the project’s analytical tools are already being implemented, but we want to expand its functionality: first, the system needs new tools, and second, it needs voice control.


Efficient data mining

The oil and gas industry lacks optimal digital solutions. First of all, this means the information that allows petroleum engineers to build digital models of reservoirs, as well as various systems based on artificial intelligence: from smart exploration to efficient production.

Therefore, we need collaboration. A number of scientific and technology partnerships have already been established at Gazprom Neft NTC as part of the effort to develop hard-to-recover reserves.

We are ready to open up some of these technologies for data sharing so that all our digital models can develop and improve their functionality. Only the part of NTC’s technologies that serve as tools of competition will remain our own know-how.

Today, all global players are working on intelligent systems for oilfield services.

Neural networks as a tool are the same all over the world, but ours are designed on a different principle: we introduce digital models where they have never been introduced in the West.

We are creating not just a digital function but artificial intelligence for the efficient management of oil production. According to consulting firms working in digitalization, in recent years digital technologies have been successfully implemented mainly in the drilling and production disciplines. Gazprom Neft, for its part, pays great attention to efficient development tools and is implementing a large number of digital initiatives in geology and reservoir management, digitalizing a systematic view of oil engineering processes.
