Virtual Tech Day - Data Science and Python
Welcome to Cegal's Virtual Tech Day - Data Science and Python
16 November | 09:00 CET
Discover how these dynamic tools are reshaping industries, empowering innovation, and transforming the way we work.
Our Virtual Tech Day offers an exciting agenda and a great lineup of speakers! You can pick and choose the sessions that interest you the most and watch them from the comfort of your own home or at the office.
Don’t miss the opportunity to learn from experts and get inspired by tomorrow’s possibilities.
Transcript
Hello everyone, and welcome to this Virtual Tech Day. As you can see, today I even got my own fern. For those of you who haven't seen Between Two Ferns, I recommend having a look at it, and you will catch what I'm saying. I'm extremely happy to be here today for this third Virtual Tech Day, focusing on data science and Python. What we're aiming for in all these sessions is to show how the energy industry is now creating more value by bringing machine learning, artificial intelligence and Python to the end users. I will come back to the different presenters in a while, but before I do that: I wrote the first introduction speech to these sessions in 2019 or 2020, and I was talking about myself as a geoscientist looking into how the environment was changing. We saw then that AI was coming, and if I look back at just those few years and what has happened since: there was no ChatGPT. We knew that artificial intelligence was very important, and there were scientists working a lot with it, but the adoption we have seen over the last years has been tremendous. On top of this, together with computing power that is also growing at incredible speed, we have a completely new way of looking at our data. And of course, it's important to have good data that we can trust, because at the end of the day, what we're aiming for is to make better decisions. We continue to grow the number of participants to this seminar, and that is fantastic. I think we can argue now that this is a truly global event. If you look at the map I have on the screen here, you can see that we have participants and presenters from several parts of the world. Some are up very early, and maybe some of you are sitting having an afternoon coffee and hopefully watching this. We have participants from a range of businesses: the traditional oil and gas companies, both the supermajors and the internationals and the smaller players; the national energy companies; power and utilities companies; and consultants and service companies. I am also extremely happy to see that students are picking up on this and joining as well. Hopefully we can also support universities in later tech days and introduce some students, and what they are doing, to these setups. A very warm welcome to everyone. I hope that we can keep these sessions as open as possible, but an open and virtual event obviously requires some rules. Feel free to add all your questions in the chat throughout the whole day. If there is time at the end of a presentation, we will ask the questions then, and if there isn't, I will make sure the presenters get the questions so we can follow up later. A few words about the company I'm representing, Cegal. We like to say that we are a global technology company delivering digital solutions specialized for the energy business. We are nearly 900 employees globally, in nine countries and 24 cities; we have a true global footprint, and about 700 clients worldwide. We have a revenue of about 170 million USD. I've been in quite a few startups over time, and I used to say that above 50 was dangerous. We're 900 now, but we have the potential, and we have the size, to really move in different directions. We have had a data science group since 2017-2018, we are constantly investing in it, and with the size of our company we can continue investing in it.
So this flexibility and adaptability is a very important key for us. As you can see here, we are what we call a global tech powerhouse specialized in energy. And if you look back, like I can with 25-plus years of experience in the industry, we see a dramatic change of focus in most energy companies over time. What used to be an oil or gas company has now changed its name. There are KPIs for carbon footprint, and we also see that reporting on carbon footprint will be as important in the future as the financial reporting we see for all these companies. So there is a huge change of focus, which I think is fantastic, towards a more sustainable future. At the same time, we all understand that the traditional business is still important and will be there in the future, and what we are trying with Cegal is to be the link, the transition partner, for the different companies. That means that we are now also focusing on new energy: the power and utilities we will talk about in today's presentations, hydropower and wind, and, close to the heart of a geologist, carbon capture and storage. The storage part is something we have spent quite a lot of time on over the last year, because we see that our technology is easily adapted to that industry. We have a fantastic agenda today from around the world. A very warm welcome to all the presenters, and especially to the external presenters from Adnoc, Aker BP, Aneo and the Norwegian Geotechnical Institute. I'm very much looking forward to listening to each of your presentations. If we briefly look at the different presentations: we start with the keynote from Dr. Luigi Sabatini, who will look at how we are using analytics for the digital transformation in the energy business. Then we will dive into AI, with a friend of mine talking about how AI is empowering the power and utility business, before we have a discussion between Microsoft and Cegal on how the AI Innovation Office is driving innovation through partnerships. I think partnership is key to all these new ways of moving forward; it's essential that we work together, whether towards renewables, towards technologies in artificial intelligence, or in other areas. Working together makes us better, and this is something I have always practiced: sharing information that can be shared to make one another better. That is part of the discussion we will have. Then we have ethics, a very challenging area that we will have a presentation on, before we move into a few sessions on the more practical side of using AI and Python together with different applications: one from NGI, one from Aker BP, and one on how we use AI and large language models and how this is changing the energy business. Finally, we will have a very hands-on presentation by Adnoc on Python scripts in the petroleum engineering domain. You are free to sit through everything, and of course we love to have you here, but you can also cherry-pick the presentations that interest you. If you don't have time to listen to all the presentations today, you will have access to them through the registration you made to join this event. So without further ado: I am very happy to see you, Luigi, from your subsurface excellence center in Adnoc. I was down in Abu Dhabi, I think a couple of months ago now, visiting you.
There are a lot of exciting things happening within Adnoc, both on the exploration side and very much on the production side as well, where you are a reservoir management expert advisor and digital transformation leader. In this keynote you're going to talk to us about analytics for the digital transformation in the energy business. If we talk a little about your background, I am very happy to see this combination of the commercial and the academic that you have been very supportive of throughout your whole career. You have a PhD from the University of Houston; I was actually lucky enough to attend a football game, a college game, at the University of Houston a few years ago, and that was a fantastic opportunity. You have been in the industry for 32 years, contributing both through the practical work you do on a day-to-day basis and through being very much involved in the Society of Petroleum Engineers, the SPE, with a number of high-ranking positions there. And what I really appreciate, and am really looking forward to, is that you are also constantly publishing papers: I saw a number of over 100 papers and five books. So I am very happy to have you here, Luigi. Please feel free to share your screen when you're ready to start. -Yes, I think it's being shared right now. Let me know if you can't see the screen. -It's correct, it's correct. Okay, excellent. -Thank you for your introduction, Arne. It's a pleasure for me to be here on this wonderful day. This has been a great initiative over the past years, where industry scientists in the area of geoscience, and now moving into other areas of industry, share knowledge about using artificial intelligence and digital solutions. So I have a few minutes to introduce the topic of digital transformation and all the trends that we're seeing globally. The first part I would like to talk about is setting the scene for today's talks. First of all, we all recognize that we work in very remote and hazardous locations, at least the people we work with do. For a long time we've been sitting in the office, but the focus has been more and more on reducing the requirements for people to be in the field. On average, we have seen throughout the years that digital transformation and new applications have given the opportunity to reduce by up to 50% the number of people required to stay offshore or in the field. On another topic, and without going into too much detail, we have been facing and organizing around the new energy transition, where we have been asked to be more resilient and to contribute to the zero-emission goals humanity has for 2030 and 2050. We know that the energy business, particularly hydrocarbons, will continue to play an important role, with slower but still growing demand over the next two decades, according to several operators. An article from ExxonMobil cites that we will need about $21 trillion by 2040 just to meet this reduced demand for hydrocarbons. So imagine all the different opportunities we have to create value by reducing and optimizing the number of wells, facility sizing and other CapEx requirements. And the other big opportunity: if we look at the implementation of energy and digital technologies, we have seen throughout the years that seismic processing, horizontal drilling and fracking took a long time to be adopted.
But what we also realize is that these transformations, these adoptions, are taking less and less time each time. With this, I'd like to introduce the topic in general. So what do we call analytics and digitalization in the whole realm of the energy transition? It's a collection of tools, of mathematical algorithms, from natural language processing, analytics, knowledge-based systems, search and optimization, that allow us to address a number of use cases that tremendously improve the capability of the geoscientist. For example: pattern recognition in both images and time-series data, knowledge discovery, deep reconstruction of images, and automated image interpretation. All these stacks come with a variety of use cases, some of which are going to be shown here. The overall intention is to take us towards a reduced decision cycle; reductions of over 20% have been demonstrated in many use cases. Maximize the use of energy, reduce the unit operating cost, make smarter and faster decisions, and of course help us take decisions for more sustainability and fewer emissions. I'd like to set out, in general, what the vision of the energy asset of the future should be, just to keep in context where all these toolkits should take us. From left to right, we see the overall world of sensing and instrumentation, which provides great visibility into all the wells and all the working sites of the energy-producing business. The use of time-lapse seismic and magnetics is bringing tremendous opportunities for data assimilation and discovery of more physical phenomena beyond the well measurements: integrity and flow assurance, predictive sensing, and also inspection without interruption. All of these are opportunities to reduce the cycle for analysis and interpretation. In the middle part we see two pillars. One is the integrated workflows from field to office, which is very much about using the data we acquire in our physics-based and integrated models, with more and more capability to assimilate this data into these big models. The third pillar is about using artificial intelligence for proactive event detection and opportunity generation. With the sheer volume of data we will have, we will need more eyes to look at it, and therefore we need more techniques for assisted surveillance by exception, self-adaptive predictive diagnosis, continuous and self-learning autonomous event detection, and automated opportunity generation systems. And last but not least, once we create these opportunities, rank them and put them into the proper business plan, we need to be able to act upon the assets. That's why the inclusion of technologies such as robotic actuation, collaborative environments, unmanned remotely operated valves, etc. is giving us the opportunity to have these more efficient oil and gas sites. The value has been identified in many areas. Digitalization, according to several articles, could produce $100 billion for the oil industry, which is, as they say, less than 1% of the overall CapEx investment that has to happen by 2040. But it's not only about the overall prize; we can go and see a lot of headlines, a lot of 5% or 10% here and there. We know that in reality the application of these technologies is a never-ending task. It requires a lot of effort, and it requires continuity.
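To make the surveillance-by-exception idea concrete, here is a minimal Python sketch of the simplest version of the time-series use case: flag sensor readings that fall outside a rolling operating envelope. The data, window length and threshold are illustrative assumptions, not Adnoc's actual implementation.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly sensor readings (say, a flowline pressure),
# with a deviation injected purely for illustration.
rng = np.random.default_rng(seed=0)
values = rng.normal(loc=200.0, scale=5.0, size=500)
values[350:360] += 40.0
series = pd.Series(values, index=pd.date_range("2023-01-01", periods=500, freq="h"))

# Surveillance by exception: only readings that deviate strongly from
# the recent operating envelope (rolling mean +/- 3 sigma) are escalated.
window = 48
mean = series.rolling(window).mean()
std = series.rolling(window).std()
exceptions = series[((series - mean) / std).abs() > 3.0]

print(f"{len(exceptions)} of {len(series)} readings flagged for human review")
```

Real systems layer physics-based and learned baselines on top of this, but the principle is the same: the extra eyes looking at the data are algorithms, and only deviations reach a person.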
The key use cases I'd like to highlight here relate to surveillance by exception, allowing us to optimize logistics and field actions. This generates a lot of opportunities by maximizing the efficiency of the operators and the work that happens in the field. Optimizing artificial lift is one of the key use cases for oil and gas, because as we move into mature assets we are going to need more and more artificial lift, both gas lift and pumping, and these are data-intensive processes which require continuous surveillance and optimization. Waterflood optimization, similar to artificial lift, is one of the key processes for achieving high recoveries. But monitoring is expensive, and including and assimilating the monitoring data into the models is also a quite human-intensive process. So with digital and artificial intelligence we can speed up these processes, enhance the capabilities of monitoring and surveillance, and target an increase in recovery factor. Asset management: this is a whole realm of solutions that is common outside our industry too, providing continuous predictive analytics for the equipment, both rotating equipment and static equipment in the processes. This continuously provides advice on updating the optimum operating envelope of the equipment which, when translated into the integrated asset model, can provide production optimization options and optimum well-enhancement candidates. And reservoir development optimization, with an optimized CapEx development process, can provide a 10 to 15% CapEx reduction while achieving the same number of wells. Now, I'd like to show how the oil industry is implementing all these use cases in the so-called multi-scale realization scenario. We go from left to right. The first decision-making process is the minute-by-minute one, which happens at the well, where real-time data comes in to update and calibrate the wells continuously. We automate, using artificial intelligence, analytics and other tools, to provide continuous well surveillance, operating envelopes, etc. Similarly, we can do this in the slower decision cycle, the integrated asset model or asset management stage, where we acquire allocation data and continuous data in order to update our model, do asset management and perform short-term production optimization. And in the longer term, we use the data as well, coming from multiple sources, to update our model, create a history match consistent with the geomodel, and provide mid-term and long-term decision scenarios. All of these we call digital twins today, but in reality we have been using this concept for many years; with the introduction of continuously available data, and with a greater variety of data, we can make it more of a reality. Moving forward: building the solution is only the first part. We can build a digital solution or an analytic solution, which is represented by this small cube labeled with the letter sigma. However, the digital blueprint covers a bit more than that. We need continuous access to data from all the multiple sources. We need to transform this data using ETL, cleansing, validation and data wrangling. We build the model, and then we need to embed this model into a collaborative workflow, which could be something as important as an engineering or geologic workflow, or it could also be an approval-cycle workflow. And on top we see the integrated visualization, which enables collaboration across all the different organizations in the company.
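As a toy illustration of the transform step in this blueprint, the pandas sketch below chains a few cleansing, validation and wrangling operations. The file name, column names and units are hypothetical stand-ins for whatever a real historian or production-database export would contain.

```python
import pandas as pd

# Hypothetical daily production export; names are illustrative only.
raw = pd.read_csv("daily_production_export.csv", parse_dates=["date"])

clean = (
    raw.rename(columns={"OIL_RATE_BOPD": "oil_rate", "WHP_PSI": "whp"})
    .dropna(subset=["date", "oil_rate"])            # validation: require key fields
    .query("oil_rate >= 0")                          # cleansing: drop impossible rates
    .assign(whp_bar=lambda df: df["whp"] * 0.06895)  # wrangling: psi -> bar
    .drop_duplicates(subset=["date"])
    .sort_values("date")
)

# A typical hand-off to the modelling step: a tidy monthly series.
monthly_rate = clean.set_index("date")["oil_rate"].resample("MS").mean()
```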
Again, the journey matters. Globally we have a large number of gaps, and actions to close those gaps, from the strategy, data, people, process and technology points of view. I'd like to highlight a few elements which give us the success criteria for all this effort. The first one is for top leadership to communicate the vision, to communicate the sense of urgency, and to promote alignment among different business units. This is not only about having a visionary, but also about balancing that with proper project execution; having top-management stewardship is key for generating digital transformation. The second element is data: having a robust data foundation. This is something we could be here talking about for a long time, and you know the topic very well. The third success element is about the knowledge that exists today among different SMEs in AI and data science. The key success factor is to ensure the change management strategy, by providing the right training and the right elements of change management, and by embedding and promoting alignment between digital skills and upstream skills. The fourth element is process: processes that allow us to continuously innovate while at the same time keeping project management discipline, with the proper ability to implement new things while measuring risk and cost-effectiveness. The fifth element is technology: having the flexibility to adopt and innovate, and also the discipline to align. The gaps and the actions are a very long topic, and here I'm trying, in general, to summarize the large number of elements we need to consider in digital transformation. My last topic is about the need for, and importance of, linking the digital technologies with the traditional geoscience and engineering background: applying math, building algorithms and learning how to automate processes; applying analytics and learning how to work the data, from augmenting, pivoting, transforming and data wrangling all the way to visualization. All of these are skills we don't learn directly in the petroleum or geoscience engineering domain, but they are very important to create and nurture the new professional of the future, who will enable a smooth digital transformation in the energy business. And with this, I would like to thank the organizers of this meeting. I'm ready to answer any -questions. -Thank you very much, Luigi, for a very interesting overview of this transition we're going through. I see there are multiple questions I would like to ask, but I'll start with this, since I know your academic skills and your focus on the academic side of things. One thing I see, at least, and we're probably about the same age, having worked in the industry like we have:
obviously you're always interested in learning new technology, understanding new technology, but the challenge I have seen, at least from my perspective, from the coding point of view, is that I struggle with actually being able to do the coding itself, although I think I'm okay at understanding whether the data is okay or not. So regarding this cooperation between universities and research, where you sit, and the practical users: what's your experience of the best way of training people like me, who understand the result but need the technical story explained to the point of actually being able to use the data? How can we more easily give the data to people like -me, so I can use it? -Well, I guess one of the best examples is to start with the things that we know best, for example having the ability to play with the data that we know best. I'm sure that as an experienced geoscientist you know the ins and outs of the models. And if you are given one additional tool to do what you did before with the traditional methods, I think you will be able to scale up little by little, and I think this is what we have done. For example, before, we were using normal models and manually running scenarios with the same model. Today anybody can build a workflow inside the commercial tools, and more and more the data inside these models is liberated in a way that lets me use Python code or Visual Basic code, building up little by little an understanding of these complex tools. Of course, maybe I will not be able to replace a young engineer who is an expert in programming and data science today, but I should be able to communicate better -with him. -Yeah. I think this is something that will come more and more, and I think this combination of the young skills together with the old grumpy skills is key to actually making this transition successful. I know there's one question in the chat as well. I'm just going to log on to the chat in a second, and then we can have a look at that one too, because I think it's a very interesting issue that we are looking into. Here it is: what is the missing thing from the vendor space to help new professionals succeed in these areas? Well, I guess what they're asking about here is: do you feel that companies like Adnoc and other national and international oil companies are working well with the vendors in trying to establish these new workflows you just mentioned, from management to data, through this whole transition? Do you see improvements from both sides in how these things can be done? I think the cooperation side is key, but from your perspective, what do you see as the most important things to -succeed in this transition? -Right. The success factor is that any initiative generating something new can be embedded into the normal day-to-day processes, to make it useful. That also includes that the data we generate, the new data, the new interpretations, can be consumed by other applications and other tools. When applications and vendor relationships are based on closing off this access, it discourages leveraging this data for other processes. But I've seen that,
mostly with this new appearance of standards, the intention is to use more of the tools, to create reusable components and to embed this data into other tools, which is what actually creates more and more -value. -Yeah, and I guess familiarity with existing tools, adopting new technology into existing tools, as you say, is the key thing that will be important to drive this forward. Okay, thank you very much for your time, Luigi. It was really nice, and I'm looking forward to coming down and visiting your Center of Excellence again very soon. We will now take a break to set up the next session, which will start in 15 minutes. So feel free to fill up your coffee cups and do some emails before we move into Aneo's talk about using AI in the power and utility business. Looking forward to the next session in 15 minutes. Thank you. I do apologize for a small delay, I had to water my fern, but now it's watered and we're ready to move on. I'm very happy to have our next speaker with us today, Odd Erik, and I think it's difficult to find a title that fits today's presentations better: Odd Erik is a chief AI officer. -Everything's fine in Trondheim? -Yes it is. The sunny weather made a lot of problems for my presentation, though. -So, we met way back, Odd Erik, I think it was 2005 or 2006 maybe, something like that. We did a project together, which was early-days adoption of pattern recognition, in the early days of machine learning and AI, which was fantastic and something we learned a lot from. Today you are, as I mentioned, the chief AI officer in Aneo, and you have also, and I think that's a key theme for all these presentations, kept your relationship with your university, the Norwegian University of Science and Technology in Trondheim. This combination, this link between academia and the commercial side, bridging these gaps, is something I find very interesting. You have worked with applied AI and the AI industry since 2006, and I'm really looking forward to listening to you, because I know that you are a very interesting and fascinating person to listen to. So, no pressure, and please share your screen; we're ready to start the -presentation. -Thank you so very much for a very nice introduction. And as you said, we go a long way back. Today I'm going to talk a bit about what we're doing now in Aneo, and many of you might not have heard about Aneo; there are reasons for that. One of the things we do in Aneo is to have a strategic focus on AI: we think that is a competitive advantage for us as a renewable energy company. I'm the chief AI officer, and I want to present to you some of the projects that we are working on. And when I say projects, these are actually products that we are developing all the time; some of them are deployed commercially. I hope you'll enjoy this. First, let me present Aneo. We are a renewable energy company, and we're a startup: we're one year old, and we became Aneo after TrønderEnergi, the local renewable energy company, took parts of its people, 90% or thereabouts, and moved them into Aneo. Then HitecVision, an investment firm, came with lots and lots of money in order for us to reach our strategy, which is to become an important Nordic player in renewable energy
by 2030. Today we are the second largest wind producer in Norway. We have one of the leading environments for energy trading in renewable energy, we own parts of the hydro production operated by TrønderEnergi, and we have ownership in biogas as well. In addition to being a large-scale producer of energy, we're also in the downstream business, working a lot on energy efficiency. And we have five startups that we are putting quite a lot of energy into making work. The first one is Aneo Mobility, the first one we made, in 2019 I think, and this is EV charging for condominiums and businesses. So parking lots: you have many parking places and you have EVs that need to charge; we build this infrastructure and do the charging for you. For retail stores we are doing energy efficiency: we provide energy services so that you use less energy in your retail store. On the building side, we make greener construction sites by putting batteries on site and optimizing the charging of your utilities, excavators and diggers or whatever you have. Then we have an industry startup that provides sustainable energy services for the process industry, and a real estate one that builds solar energy for commercial buildings. So that was a bit about us. Why does Aneo have this strategic focus on AI? It is not a surprise, really, because the complexity of the energy market is increasing. In Norway we're integrating with Europe, so we can sell energy out of Norway and get energy into Norway, and of course part of the energy that we get into Norway is renewable energy: wind farms producing maybe in Denmark, or solar farms in Spain. And we also have huge growth here in Norway, not only in Europe. The renewable energy that is growing is mostly solar and wind, and these are variable sources. You cannot really plan when they produce: they produce when the weather tells you to produce, when it's windy, when it's sunny. And of course we have shorter reaction times, because with renewable energy we need to react to changes in wind and sun; things change according to the weather, and that change happens fast. In Norway we've been very lucky and have had hydro energy for a very long time now, 100 years, and most of our energy is produced by hydro. That is very nice, because with hydro you can just say "I'm producing this much energy", like a nuclear plant. But wind and solar are not like that. Therefore we need automation, and if you are going to automate, you should do it smart. That is why we have this AI department that I'm leading. So I want to tell you about three different projects we are working on. First of all, we are trading energy: all the wind power that is produced by our power plants is traded in the market by our AI bots, algorithmic trading bots. In order to do that, we need to have an idea about how much energy we are going to produce from wind, and this means that we need to forecast the energy production for all our power plants. This is not very easy, and you might understand why: the weather forecasts are quite uncertain. They become more and more certain the closer we get to the point in time that we're forecasting, but the further away it is, the harder it is to forecast, and especially for wind.
In general, the amount of wind that will blow through your neighborhood over a day, the pure volume of wind, is fairly easy to prognose. But exactly when it comes and when it changes, that is really, really hard to predict. These are uncertainties in the weather forecasts, of course. But we can improve our forecasts by using machine learning algorithms, and we do that for all the wind power plants that we own ourselves. We also sell this as a service to third parties, so that they can buy our forecasts. And when we have these forecasts, we need to trade in the market, and there are different markets. The hard thing about renewable power, of course, is that it has to be used at the same time as it's produced. This means that you need to balance the amount of production with the amount of consumption in real time, and there has to be an equal amount of both; if not, you might ruin the grid, and all the appliances connected to the grid, through large changes in the frequency of the grid. So you need to balance this at all times, and that is why it's so hard to have variable energy sources such as wind and solar in your grid: it changes so fast. For solar, there might be a basically cloudless sky except for one cloud that happens to be in front of the sun over your power plant, and suddenly the production drops by a huge amount. It's extremely important to know that in advance, and therefore predictions become very important, and better predictions become even more important. And the thing with the energy market is that energy is not traded in real time; it's traded the day ahead. Then, during the day, up until the production hour, you have promised in contracts how much energy you are going to produce, and of course you don't know whether that's true or not until the production hour is over. Which is really unfortunate, because we might think, since we predicted it, that we are going to produce a certain amount of energy, and then we don't. But as I said, the closer you get to the production hour, the more certain you are and the better your predictions will be. So we have these wind forecasts; we also have solar forecasts, and we have consumption forecasts for all the utilities, the EV charging that we're doing, and other assets for which we are the balance responsible party, trading energy for third-party assets. And we do this in all the different markets. First we sell in the day-ahead market, but then during the day we get more information, and then we can trade in something called the intraday market, where for every hour, every day, you can update your prognosis and buy and sell. We update our expectations for the production of all our different assets, and then we trade automatically in the intraday market with algo traders that handle all our wind energy. So all wind energy is traded, both in the day-ahead and the intraday market, by these algo traders.
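As a rough sketch of the forecast-improvement step described here, the Python below trains a regression model to map weather-forecast features to realised production. The features, power curve and noise are synthetic and purely illustrative; Aneo's actual models are not public.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)
n = 5000

# Hypothetical numerical-weather-prediction (NWP) features per hour.
wind_speed = rng.uniform(0, 25, n)      # forecast wind speed, m/s
wind_dir = rng.uniform(0, 360, n)       # forecast wind direction, degrees
lead_time = rng.integers(1, 48, n)      # forecast horizon, hours ahead
X = np.column_stack([wind_speed, wind_dir, lead_time])

# Toy "realised" park output: cubic power curve, noisier at long horizons.
y = np.clip(0.01 * wind_speed**3, 0, 80) + rng.normal(0, 0.1 * lead_time, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns the park-specific mapping from forecast to production,
# implicitly correcting systematic NWP bias; the predictions feed trading.
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out hours:", round(model.score(X_test, y_test), 3))
```

In production one would retrain and re-predict as fresher forecasts arrive, which is what makes intraday position updates possible.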
We also trade hydro in these markets, using some smart optimization algorithms, and these are also updated during the day because, even though it's much easier to plan hydro, there might be some imbalances there as well, for different reasons. We trade these imbalances too, and if we see that there are some good trades, we can change our plans as well. So this is an important part of what we do, and wind forecasting was one of the first things we started with, in 2018, when the department was started in practice. We've done that for five years now, in production, every day, all day. So this is one of the things we have real experience with. Another thing that we are doing in the department, which we have deployed for our wind parks, is predictive maintenance. We have systems now that monitor the condition of all the wind turbines that we own, operate and maintain. We monitor them in real time, and then we warn about problems that might happen in the future. This system that we've developed has been deployed for nearly two years now, and it is called Argus. We have made some case studies to show the value of this system, and I'm going to show one here for you. This is a case from a year ago or so, where our system warned in February that we might have some trouble with a gearbox. This alarm was triggered, but we wanted to test it: okay, can we trust this alarm? And then we saw over time, over 21 weeks, that the alarm signal became stronger and stronger, and we knew that this turbine was going to break fairly soon. Then, suddenly, an alarm from our OEM's system was triggered and they stopped the wind turbine for repair, and we had around ten days of downtime before the wind turbine was repaired. Of course, if we had taken our alarm in February into account, we could have started planning the repair of this turbine, because we knew that it would be the gearbox failing. We could have had all the replacement parts in place; we could have planned the repair for whenever it fitted us best to change the parts. But of course, we have a service agreement right now, so it was our OEM that did this, and then we got this downtime. If we had followed our alarm, we could have planned it, and when the turbine failed we could have just climbed up the ladder and made the changes immediately. Another thing is that we could have decided on the time of day. We could have said: okay, we know that we'll have a problem in maybe one week, but we see that this week there is little wind, so we should make the changes today. Or, on the day we have decided to repair, we could say: around noon there will be high wind, high production, and the price is high, so we should probably not repair then; we delay a couple of hours until prices are reduced, and we produce while prices are high. Prices are high when the system needs more energy, so that is when we should provide energy, of course. So this is a use case showing the value: we had 21 weeks of warning before this problem happened, and we could maybe have reduced the downtime from ten days to two, maybe even less.
So we could have reduced the downtime by eight days, which is a huge amount: 80% of the downtime could have been removed. I just wanted to tell you a bit about how the system works. We have a machine learning agent for each turbine that is monitoring the turbine, for a specific part, in this case the gearbox. We train a neural network to know, given the wind strength and the conditions right now, how much the turbine should produce over the next second, or the next ten seconds, some time period. So we train this model to know exactly, based on the inputs, the wind strength and so on, how much energy we expect to produce, and then we can look at the difference between our expectations and what actually happens, and see if that difference increases. We measure this difference; it is the white graph you see, varying quite a lot. Then we take a mean or average of it over some time, because we don't want that huge variance. And then we fit a curve to this; that is the lilac line, the purple line that you see there. We fit this line using something called swarm optimization. At the end there are no measurements, but the line continues anyhow, because we fitted it to all the parameters ahead of that point. And when this line crosses the red line at the top, that is our expectation for when the turbine will fail and we need to repair. When we know this, we can plan our maintenance. So this is really cool. It is deployed out there for all our parks, and it is not only used for real-time monitoring; it is also used for error analysis and to report the condition of the wind park every year to the owners of the wind park. We are not the only owner of these wind parks, but we can provide the owners with a detailed overview of the condition of all the wind turbines.
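The monitoring loop just described can be sketched in a few lines: compare expected with actual production, smooth the residual, fit a degradation trend and extrapolate it to the alarm threshold. This sketch substitutes scipy's least-squares curve fit for the swarm optimisation Aneo mentions, and every number in it is made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(seed=2)

# Weekly health indicator: time-averaged gap between the power the
# turbine "should" have produced (per the learned model) and what it did.
weeks = np.arange(30)
residual = 0.02 * np.exp(0.15 * weeks) + rng.normal(0.0, 0.05, weeks.size)

# Fit a growing-degradation trend to the smoothed residuals.
def trend(t, a, b):
    return a * np.exp(b * t)

params, _ = curve_fit(trend, weeks, residual, p0=(0.01, 0.1), maxfev=10_000)

# Extrapolate beyond the last measurement until the trend crosses the
# alarm threshold (the "red line"): the expected failure time.
threshold = 2.0
future = np.arange(30, 120)
crossing = future[trend(future, *params) > threshold]
if crossing.size:
    print(f"Trend crosses threshold around week {crossing[0]}; plan maintenance.")
```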
I also wanted to talk a bit about the local flexibility market that we are operating now. There are some areas in the city where there is too little energy, so you might need to build out the grid to get more energy into the area. However, this is really expensive. So instead we try to have local production and to trade flexibility in the area: you don't need to heat your water right now, you can delay it for an hour, and if you do that, we pay you money. This is the first project in Europe where we have done local trades like this within an area, between different buildings, between different assets. We have charging of EVs, we have solar, and we have heating systems that can change their behavior because the energy need tells us we need to do this. We do this in order to reduce the demand in the area, so that the grid operator, Tensio in this case, doesn't have to build out the area. This is a really, really complex system. Not only do we have to forecast all the assets that you see here; we also have to forecast the expected consumption of the non-flexible assets. Then we have to optimize these assets according to what we think, and provide the flexibility window, the potential flexibility we have in this area, to the market, and see that we can make these changes in order to move these high loads during some hours. We also send this to the grid operator, the network distributor, the net owner, so that they can say: we need to buy flexibility from you now, and reduce the total load in the area. This project has run for five years now and has just finished; as I said, it is the first of its kind in Europe, really cool stuff, partnered with the EU of course, and Tensio, along with 30 others around Europe. There were several cities that wanted to do this; we were the only one who did it. I just want to show this: in order to do all these things, you need an AI department, and that is what we have. We are 19 employees in total; eight of us have PhDs, and four of us are PhD students. But we are not a research department, we are a deploy-AI-solutions department. Of course, we have a very close collaboration with NTNU, the university, and with the Norwegian Open AI Lab and the NorwAI research center, so that we are helping to educate students, and we have students working with us in addition to those of us who are employed here. We are basically sharing a location with the university, and this close collaboration is why we can do these things that we are really proud of. So that was my overview of what we do with AI in Aneo. Thank you very much. -Odd Erik, it's always encouraging and very exciting to hear about you, with all your energy and all your ideas, and I see, since the last time we spoke about these things, how much further you have come in multiple areas. This predictive maintenance is something I have seen a lot of value in. It's been talked about for a long time when it comes to pumps in the oil and gas industry, and enabling it in a good way will be very important for maintenance and for saving money, whether it's windmills or hydropower or whatever it is. Being able to predict this in a good way, like you just showed, will be a key for the future. I know there are a few questions; unfortunately we are a little bit out of time, but it was fantastic to see you, and I know that we will cross paths again very soon. So thank you very much, and I will share the questions with you afterwards. Take care. -It was a pleasure being here. Thank you. -Moving on, the next session will be more of a discussion. I am very happy to have a colleague, and a friend from Microsoft, in this next session, which we have called "The AI Innovation Office: Driving Innovation through Partnership". And Espen, partnership is a key thing. You see now how AI is defining, very rapidly, how we will work in the future, and I think it's very important that we work together, that we work closely in these partnerships, in order to succeed. Especially now, looking at this intersection between AI, the industry and energy is very exciting, and I know that you are very keen on that. As I mentioned just now, we talk about predictive maintenance.
We talk, like Luigi was talking about earlier today, about optimizing production, understanding the production of oil and gas better, enhancing the operations and the efficiency, and reshaping how we produce, how we distribute and, not to forget, how we consume energy. AI has a big part to play there. And partnership is a very key thing in this; we have collaborations with partners across the globe in this area, and we have, especially over the last year, built a very close relationship with Microsoft, lately also towards AI and AI innovation, which will be the main discussion here today. So as we embark on this exploration of AI's role in the energy sector, I'm really glad to have Swaroop Joshi from Microsoft with us today, together with Espen here. Since you're in the studio, Espen, why don't you say a few words about yourself? -Yes, hello. My name is Espen Knutsen. I work in maybe the smallest unit in Cegal, the CTO office, as an advisor within digitalization, AI and innovation. What I'm currently working on is really bringing the company together around our AI initiatives, and seeing how we can help our clients succeed with everything that's going on in the industry. I'm very excited to be here, also between two ferns here today, where we're going to have a good discussion with Microsoft, I hope. -And are you on the line on the other side of the world? -I am. Thank you so much for having me today. Indeed. Absolutely. You nailed that one. Thank you, Espen and Arne, for having me on the session here today. It's fantastic to have a chance to speak about something really exciting: the partnership between Microsoft and Cegal. We've had a really strong last little while, and I think there's plenty more to come, so I'm looking forward to the talk. Just a quick introduction about myself: Swaroop Joshi, based in Melbourne, Australia. I'm an energy industry advisor with Microsoft, so I work with many of our strategic clients, mainly in the Asia region but a little bit around the world as well, and I certainly partner with Cegal and the good folks on this call in doing that -work. So that's my intro. -So, artificial intelligence has, as we have talked about a few times here, Espen, really revolutionized the energy sector. We heard, as I said earlier, the talks about smart grids and windmills, about optimizing energy distribution to reduce power outages. We also see it, as I mentioned, on the production side, when it comes to optimized well positioning; we see AI and unstructured data used together; and renewable sources are becoming more and more important. But as I said, this is changing really fast. From one week to another you see huge changes in the industry and in how things are working, so it's important to have a place to test these hypotheses, and I know that's one of your key things: we need to test things. So let's have a look at how it works, what your thoughts are around this, what's important to be able to do -these things, and how it can be done. -Yeah, thank you for that question. If you reflect back to maybe the 2010s, we see that there's been a big focus on digitalizing work processes, and we saw very often that that led to very big initiatives in the oil and gas industry.
But what it lacked was really a place where you could experiment, and where you could have the IT side of the organization helping you solve complex problems. That is what we are looking to help our clients with, so they can move even faster in this space. Back in 2016 we started the data science journey for Cegal, and already then we realized that the complexity of all the Python libraries and everything that needed to be set up, with all the data sources we needed to access and all the people that have to be involved, was something we really needed to cater for. So we launched something we call the innovation space to help with that. And pointing back to Luigi's keynote this morning, I think this is exactly one of the answers that the new professionals in the marketplace are looking for when they have to test out new technology. There was an old saying from Microsoft in the 90s: seeing is believing. The quicker you can see it, the quicker you can agree on things, and then you can move ahead. So it's about providing this -agility in the organization. -Is the slide coming, by the way? -The slide is coming, yeah. -Okay, we'll wait a little bit. So, if we move ahead a little: what can we do in such an innovation space? We can sit down with the client, we can do co-creation, and we can move faster together. And our realization is really that we are seeing great enhancements with AI as well lately. From maybe a year ago, the changes we've seen in the marketplace and in the technology, it's moving so quickly. If you start to implement something now, you will have technical debt in your solution very quickly. This is why it's key to have an environment like this innovation space, where you can start to experiment with the latest and greatest stuff in the marketplace, and Microsoft is really at the forefront of these things. Can you take the next slide, please? You have probably all heard about Copilot, and Copilot is going to revolutionize the way we work with data and with applications. A good example: at the core of a Copilot is something Microsoft calls the Semantic Kernel. The point I'm trying to convey here is that these things are typically open-sourced; they are put there for everyone's use. And because of that, we need a place to test them and work with them. The Semantic Kernel allows you to build custom solutions, and in a place like the innovation space we can test and validate whether it does a good job for the organization.
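For readers curious what building a custom solution on Semantic Kernel looks like, here is a minimal sketch against the pre-1.0 semantic-kernel Python package that was current around the time of this webinar. The API has evolved in later releases, and the endpoint, deployment name and prompt below are placeholders, not a Cegal or Microsoft reference implementation.

```python
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

kernel = sk.Kernel()

# Placeholder Azure OpenAI credentials; substitute your own resource.
kernel.add_chat_service(
    "chat",
    AzureChatCompletion(
        deployment_name="my-gpt-deployment",
        endpoint="https://my-resource.openai.azure.com/",
        api_key="YOUR_API_KEY",
    ),
)

# A prompt-based "semantic function" for an illustrative domain task.
summarize = kernel.create_semantic_function(
    "Summarize the key operational risks in this report:\n{{$input}}",
    max_tokens=300,
    temperature=0.0,
)

print(summarize("<daily report text goes here>"))
```

The point Espen makes stands regardless of the exact API: because the kernel is open source, an innovation space can wire it to a client's own data and validate whether it does a useful job before anything is productised.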
-Mm hmm. Thank you very much, Espen, that was a very interesting intro to these discussions. Swaroop, you are in one of the largest companies in the world, and I believe you're talking to quite a few people around the world. How do you see the energy industry organizing its response to these advances in AI? How are we handling it as companies, and what do you see as the key elements for success? -Yeah, great question. I think when I, and we as Microsoft, think about innovation and AI in working with clients, understanding the client mindset and organizational stance is really critical; we recognize that and we work with it. So we see some categories emerging in how organizations are embracing the new wave of AI, many of the things that Espen talked about. Three categories start to come about. Number one, I'd say, is the broad-and-deep-leaning organizations, which in a sense are going all in. They see this as an opportunity to differentiate, and they want to move at pace. They are fantastic to work with, it gets us involved with some really cutting-edge capabilities, and you see several successes there. The second category I'd talk about is very targeted adoption: organizations that are taking a very studied and nuanced approach, you could say, finding specific capabilities in their company that they want to improve, and selecting the technology that's going to have an impact there. Again, an interesting approach that you see. And then finally there's the third category, which I'd call the wait-and-see, and that's perfectly fine; it's a good fit for some organizations as well. So, different approaches, and horses for courses in how we work with them. -And I think horses for courses is a very good way to put it. There's lots of great innovation, but it needs to have a purpose, and you need to have an understanding of what you would like to achieve with it, especially when it comes to commercial innovation. Espen, you are definitely one who always wants to try out new things, and you are very innovative in how you think about the different companies we talk and work with. This innovation with clients, that interaction, is a key thing. It's important to understand how you work with the clients and, as you said, what they would like to achieve. But can you reflect a little on how we can provide that value to the clients, and what's important to clients when you do these projects? -I think being close to the business is really the most important thing here. We see that there's sometimes a tendency to create POCs where you get kind of an order for what you want to do, then you go away and try to fix it, and then you come back and it doesn't really fit the idea. So what we've done over many years is to go closer to the client, start to learn the business processes, even stay at or just be at the client's office, and then start to get that trust, and also the flexibility we need to work faster together. With the agile craze through the 2010s, we also see the businesses adopting that way of working together. So I think that allows us to get quicker results, to fail fast, and to really become a partner rather than just a supplier. So, Swaroop, I would like to ask you a question: how do organizations innovate around AI? Do you have some examples and insights you want to share? -Yeah, absolutely, let me dive into that a bit, Espen. Thank you. So again, there are multiple ways for energy organizations to innovate with AI. Firstly, I'd call out the plenty of generally available offerings, what's out there in the market. Microsoft ourselves, earlier today at the Ignite conference, made some exciting announcements, and the vision capability was one I was particularly excited by. Anyway, the generally available offerings from companies like Microsoft, or from a multitude of ISV providers that focus on the energy industry, that's one approach. We also see organizations building solutions themselves on a platform, frankly.
And that's fantastic too: tailored needs, tailored approaches. Then finally, we offer an alternative for some of our most innovative customers, which is the opportunity to innovate with Microsoft and Cegal, increasingly in our partnership, and we'll be focusing on that third aspect in our conversation from here. -Just to ask you: yesterday we saw Satya Nadella at Microsoft Ignite, as you referred to, and he talked about three waves of innovation around AI, where the last wave was around industrial usage of AI. Do you have some reflections on that? Because I think that is what we're here to talk about today: how can we take the consumer- or academia-based solutions and put them into -practice? -Yeah. We'll talk about it in the context of the office, but just a quick reflection on the comment about the industrial wave, having worked in the energy and broader industrial sector for many, many years now: at times you can't help thinking that these companies are slight laggards, perhaps, in comparison to other industries. But at the same time, the advances in AI now make that opportunity so immense, and so exciting. So we look at approaches like the one I'll talk through now to help our most innovative clients through that journey. Could we have the next slide, please? Perfect, thank you. -Yeah, it's on now. -So, in that context, Microsoft and our great partner Cegal have developed this model that you can see on the page here, which we call the AI Innovation Office. It's one that enables our customers to realize value pretty rapidly, leveraging both Cegal's domain expertise and the AI capability from Microsoft. I'll say a couple of things about this model, and then Espen will add some points too. The most important thing, I'd say, is what you see in the center of the diagram: the critical collaboration between our two organizations and the client that's in the picture for that particular innovation office, the office in the middle. It leads the thinking on the why, where and how of the AI capabilities to be developed and deployed. It will work with, and across, multiple buyers in that client to identify, shape and scope out opportunities where AI can make a significant impact. And we think that process is really important. I'm seeing it firsthand, directly, in many, many clients: lots of people can talk semi-vaguely about opportunities in the business, lots of people can talk about the tech and how it's evolving, but bringing that together in alignment for value creation, that's hard work. It takes effort, and that's where this collaboration pays dividends. Of course, the office itself is intended to identify and pick up complex problems; it's not there to tackle issues with readily available solutions, we wouldn't need the office in that case. Then the left-hand side of the diagram is really the engine room. What you have there is the domain and solution expertise of Cegal.
That means years and years of experience in specialist areas, and also the AI expertise of Microsoft on the lower left-hand side, where we've got our energy industry CTO office collaborating with Microsoft Research, and then a broader innovation ecosystem behind that. So it's these multiple elements together that deliver this package of the AI Innovation Office, and it's one that I'm really proud to be a part of and that I'm seeing pick up a lot of traction across our clients. Espen, over to you. So, Swarup, seen from the client's point of view, what does this bring to their business? Yeah, fantastic question. We can sometimes get ourselves in knots talking about what we bring, but let's bring it back to the client perspective, and that proposition is really clear and distinct. Clients can further differentiate themselves by developing capabilities to solve problems that were previously unsolved by technology, and problems that are going to drive tangible value. Definitely not science experiments; we wouldn't do this if it wasn't for that. So that's really point number one. In doing so, they get access to the latest and greatest thinking and collaborate with the brightest domain and AI minds across both of our organizations: the energy team from our side, the research group, as well as the various streams from Cegal. And finally, a structured program like this is a fantastic way for clients to drive a step change in their own skills and capabilities. Getting some business outcomes is one thing; you might drive value in an area of maintenance or forecasting, fantastic. But at the end of a period of time you can say: I now have an internal AI team that's skilled up, and now we can run this on our own. That's a fantastic outcome too. And I think that is also my understanding of where we need to go. We see that technology moves so incredibly quickly that our clients maybe can't keep up. So with the innovation office, we can together help upskill the clients. We also saw early on, utilizing this technology both internally and with our clients, that we need an operational mindset behind everything that goes on with the AI. So it's really key to think about this for the long run, and it was good that the previous speaker talked about his AI unit; we need to help our clients establish that for the long run. I was just going to check in on how we're doing on time. Yeah, I see we have about five minutes left to discuss a little further. Can we talk about how to succeed with these initiatives, Espen? What's key to that success? I think we touched upon a few very important aspects: domain know-how, which is what we are, and the technology that Microsoft brings that allows us to do these things. But what we see is that the quality of the data we are working with is really key, because if you just toss a lot of data at an AI, you don't get a good result. That's why Cegal has relevant products and services that can help, like doing a proper assessment of your data estate, seeing the quality of your data, getting integration capabilities in place to wire these things together, and also the data science offerings that we bring to the market.
With those we can get access to the hard-to-get data in a very programmatic way that is really fit for purpose for AI. So with the combination of these things, mixed with the Microsoft capabilities in an AI Innovation Office, we see great opportunities to help clients move forward at a faster pace. Very interesting. I see time is flying here, and there's one question I'd like to raise towards the end. We talked about organizations being in different categories; what is the right fit of organization for a model like the AI Innovation Office we have now discussed? Yep, let's cover that one, that's a very good question. Let me highlight a few points here. The first one builds on what Espen was just covering, and I'll use a different word: foundations. The data is a critical part of that, so some baseline level of digital and data foundation, we find, must be in place to pick up something like this. Along with that foundation comes an innovation mindset, and a genuine appetite amongst both leadership and the folks that are going to be doing the work to try and push the boundaries in solving those previously unsolved problems. And alongside that comes a bit of risk appetite: the willingness to take some risks in the innovation effort, because by the definition of innovation, not everything is going to succeed. You might try ten or so proofs of concept, and it might be only a handful of those that progress. So the appetite to take on that amount of risk has to be present. And with that said, that probably gives a good high-level overview of where we see a good fit. Fantastic. I see time is always flying in these very exciting and interesting discussions. I know there are a few questions; we will distribute them afterwards to both of you. But I want to thank you for taking the time this afternoon and guiding us through your views in this area, and also to Espen for doing the same. We will be back in four or five minutes with the next presentation. And again, thank you very much, both of you. And we're back again. It's great to have all of you here. The next presentation is by Dr. Turbo Freud. Are you there? Good to see you. It's really good to have you here. We are sharing offices in Oslo, but I'm over in Stavanger today, so I'm glad to see that you found a place to sit and present. What we see more and more in our business, back to the introduction, is that ethics and governance in AI are becoming more and more important. We see regulations from the EU; there are multiple things that need to be dived into in order to make sure that you handle the ethics and governance the way you're supposed to. I know this is something that you have spent a lot of time on together with your data science team in Cegal, and it will be very interesting to hear your thoughts around this. So whenever you're ready, please feel free to start. Yeah, thanks, everybody, for being here, and thank you for the forum where I can present during this tech day. Yes, I will be presenting a quick overview of ethics and governance in AI.
It's such a big topic that I decided to focus on a few key concepts from my point of view. Before we jump in, I think it's really important that we define some terms. First, AI: at its core, AI is programmed algorithms, which are sets of step-by-step procedures or rules that help solve a problem or make a decision. On the other side, ethics in AI refers to the moral philosophy and the set of guidelines that regulate how we create, use and manage artificial intelligence technology. If we go to a very simple definition of AI: you get a data input, you apply an algorithm, and you get an output. This is not evil, not good, not safe or dangerous; it's just a tool, the way a weapon can be used either to defend or to attack. When we link these two notions, I want to split the discussion into two parts: one more related to the data, and one more related to the AI itself. On the first key point, I would like to emphasize data security. It might sound obvious, but I think it's always good to hammer it in. When it comes to data security, we can first think of anonymization: all the private information needs to have been removed, so we cannot identify where the data is from. We need to think as well about where the data sits: is its storage location secure, and is the transmission between two storage locations secured as well? We need to know who has access, and when, and we need to log it. We can think as well of data minimization, meaning that as a data practitioner you should have access to only the amount of data you need to perform your work, not more, so that in case of a leak or a breach the damage is minimal. And I think one of the most important points is transparency and consent: individuals should know when and where their data has been accessed and used, and they should have the opportunity to view, modify, correct or even delete that data. A good example when it comes to data privacy: back in the day there was a feature on Zoom called the LinkedIn Sales Navigator. When you were in a call, you could see the LinkedIn profile of the other person, but without that person knowing. And if you remember, when you are on LinkedIn you normally know when someone has viewed your profile. That feature has since been removed, but the most important lesson is that consent is key.
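To make the minimization and pseudonymization ideas concrete, here is a minimal Python sketch. The table and column names are invented for illustration, and note that hashing an identifier is pseudonymization rather than full anonymization, which typically needs stronger guarantees:

```python
# A minimal sketch of data minimization and pseudonymization with pandas.
# The column names ("name", "email", "reading") are hypothetical.
import hashlib
import pandas as pd

def minimize(df: pd.DataFrame, needed_columns: list) -> pd.DataFrame:
    """Keep only the columns a practitioner actually needs (data minimization)."""
    return df[needed_columns].copy()

def pseudonymize(df: pd.DataFrame, id_column: str) -> pd.DataFrame:
    """Replace a direct identifier with a one-way hash so individuals cannot
    be trivially re-identified from the working dataset. This is
    pseudonymization, not full anonymization."""
    out = df.copy()
    out[id_column] = out[id_column].astype(str).map(
        lambda v: hashlib.sha256(v.encode()).hexdigest()[:12]
    )
    return out

raw = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "email": ["a@example.com", "b@example.com"],  # dropped by minimization
    "reading": [3.2, 4.8],
})
work = pseudonymize(minimize(raw, ["name", "reading"]), "name")
print(work)
```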
The other big point when it comes to data is that all our datasets are biased: there is a real gap between the dataset and reality, and when we train our models on such data, it can lead to unfortunate situations like racial discrimination. Maybe you remember this video from 2009: a computer had a face-tracking system in the webcam, and it worked very well with white faces, but when a black person was in the field of view of the camera, the system didn't work anymore. This is very important to keep in mind: when there is bias in the dataset, the algorithm will just exacerbate that bias and show it plainly. Unfortunately, that bias can also lead to gender discrimination. For instance, if you use the tool called Leonardo.Ai, which is an image generator, and you just give it the keyword "a CEO", the system will generate pictures of a white male of around fifty years old. And if you type "a cleaner", you will usually get a female, often with an ethnic minority appearance. A more unfortunate example is Amazon. In 2014 they developed a tool to grade CVs for software development jobs, and they realized by 2015 that whenever the word "women's" appeared, such as "women's chess club captain", the CV was downgraded. That was because of the dataset they used for training: the CVs from their previous ten years, which were mostly from men, because this domain is mostly dominated by men. So that is something to keep in mind.
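As a minimal illustration of the kind of dataset check this implies, one can compare outcome rates across a sensitive attribute before training. The tiny table below is fabricated purely for illustration:

```python
# A minimal sketch of a dataset bias check: compare historical outcome rates
# across a protected attribute before training. The data is fabricated.
import pandas as pd

applications = pd.DataFrame({
    "gender": ["male", "male", "female", "female", "female", "male"],
    "hired":  [1, 1, 0, 1, 0, 1],
})

rates = applications.groupby("gender")["hired"].mean()
print(rates)

# A large gap between groups is exactly the kind of historical skew the
# Amazon example describes; a model trained on it would learn and amplify it.
ratio = rates.min() / rates.max()
print(f"Selection-rate ratio: {ratio:.2f} (values far below 1.0 warrant review)")
```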
Now, if we jump to the ethical aspects of the AI itself, we can list five of them, though we could make a long list and we have already covered a few. I want to emphasize accountability and transparency. Nowadays we use a lot of deep learning, and these are black boxes, meaning that when we give an input and get an output, we have no idea how the machine or the algorithm actually came up with that output. And it's very important that we know the reason. For instance, in some future microgrids you might want systems that make autonomous decisions, and you might buy such a system from an IT company or a startup. Most of the time the end users in the grid company are not experts, and when the system in place makes a decision, they might have no idea how that decision was made. So it's very important that we understand how it works, so we know on what basis of data and analysis the system has made a decision, especially when we are deciding on spending public money or facing high electricity prices. Another aspect: by knowing how it works, we can control it, and we can sometimes override it if we don't agree with the decision the AI algorithm presents. After the data and the algorithm itself, I want to bring in a third party: the end user. When we develop a tool, the end user is the one who will actually use it, and as mentioned, most of the time these users are not experts and they might use the tool in the wrong way. That happened, for instance, when a man got his knee injured by a metal food and drink trolley and sued the company. The lawyer brought six court cases to build the argument, but the issue was that these court cases did not exist: the lawyer had used ChatGPT to help him, and ChatGPT had simply hallucinated them. The lawyer and the firm were fined $5,000, and it's not known whether they will appeal that decision. An even riskier issue, when it comes to intellectual property, is what happened at Samsung, in two cases. In the first, an employee, after a confidential meeting, took some notes and uploaded them to ChatGPT to get a summary. In the second, a technical person had a bug in code for a proprietary chip and uploaded that code to get help debugging. The issue is that ChatGPT uses the input from the chat to retrain the model, so that data from Samsung is now inside the model and might leak. That is a very serious issue. This is why the end user is another aspect that is very important. We can also think about ethics in terms of who is leading the dance. Nowadays the biggest actors are the big tech companies, the GAFA, as well as OpenAI, who are riding the big wave now. We have Facebook, now Meta, who own a lot of our data, and Microsoft and Google. Why them? Because they have a lot of money, so they can hire the best researchers and the most skilled people, and they can afford to train generative AI: training ChatGPT, for instance, is estimated to have cost more than 100 million dollars. Academia doesn't have that kind of money, but they are coming back; they are trying to put money on the table to be in the lead. And that is the software side; there is also dominance in hardware, and that is where Nvidia is dominating the domain with their GPU chips. When it comes to AI, it also comes down to data, and access to that data. Recently, Twitter and Reddit closed their open APIs, so now you need to pay to access that data. We know there has been a lot of research done based on tweets, and now you have to pay for it. And there were apps built on the Reddit API that can no longer afford to run. So I think open source is very important here. The question then is: how do we govern such technology? Should we, on one side, be very strict, like what Italy did at the end of March? They banned ChatGPT because they were worried it was using private data from Italians for training without being properly regulated; ChatGPT is now back in business in Italy. Or should we be more like Norway, which is more relaxed, not because they don't care, but because they don't want to impair innovation? Norway now actually has a ministry of digitalization, and on the 8th of November they launched a Norwegian guide to preventing discrimination in AI. So these are two quite different approaches. When it comes to regulation, at the end of 2021 UNESCO released a framework called the Recommendation on the Ethics of Artificial Intelligence, and all 193 member states adopted it; in a way, it takes a pro-innovation approach to regulation. And Europe, which is one of the leaders in terms of regulation, has the AI Act, which will be in place at the earliest in 2024, and the aim is really to regulate AI. The way it will work is through what they call a risk-based approach, meaning that depending on where your algorithm will be used, it will belong to one of four categories: minimal risk, limited risk, high risk, and unacceptable risk. Based on these categories, different obligations will apply. For instance, if AI is used for social scoring, this is prohibited: you cannot use it for that in Europe. Then you have the high-risk category, which covers things like recruitment and medical devices. These algorithms will have to comply with a lot of requirements: there has to be transparency, meaning we need to know how they work and what data they were trained on, and the models will have to be registered in an EU database. I invite everybody to read it; it is very interesting.
If we now look at how this applies to the energy sector: there was a paper from last year where the authors tried to map the different activities of the energy sector against the different risk levels on one axis and the maturity of these activities on the other. If we zoom in on the top right corner, we can see that some activities will not be allowed within the framework, while activities such as predictive maintenance, drone-based power line maintenance or electricity trading will have to abide by all the transparency rules: we need to know what data they use, and they have to be open. So I think this has a very big impact, and everyone involved in this sector should read that paper; it is very interesting. Another paper tried to assess how well the Act actually covers the activities in the energy sector. They found that, yes, it covers some of the societal risks, but there are still some limitations. For instance, take the decline of human autonomy: if we have autonomous systems making decisions, how can we override a decision if we do not agree, or in a time of emergency, or if there is a cyber attack? That was not very well covered by the Act. The same goes for platform dominance, for instance Google with ads or the search engine: how will that be regulated? Because when you are dominant, when you are big, you can offer a similar service at a cheaper price; and because electricity pricing is very complex, there could also be price manipulation, and that is not well covered by the Act. So again, I invite everybody involved to read that paper; it leads to a lot of reflection on how we do business. To conclude, I want to emphasize that anticipation is the key. And how do you anticipate? By assessing how our datasets are biased, because we know there is a gap between the data and reality. We need to know our AI models, and we need to monitor them when they are put in production. We need to involve the impacted population, because when you are inside a project, you are so focused that you lack the overview; having a third-party, outside review will help. And we need to test, test and test, meaning that you don't go straight from development to production, so that you limit the damage and the social impact. And as a final point: we need to teach and train the end user. Thank you very much for a very interesting presentation. I think this point about involving and training the end users, like myself, is a very important part of this; I see it with my kids, and I see it with myself. Do you have any guidance on how this could be done in a good way? What do you mean, to teach people? To teach people to be proactive about what you actually get out of all these opportunities. When we receive our data, like we have discussed today in the different scientific projects, how can we be critical of the data? Because it's so easy to believe it.
Yeah, there are benchmarks nowadays; academia is trying to come up with a lot of benchmarks for your model to assess some of these criteria. But I think, as mentioned, you need to involve the end users and the impacted population directly, because the AI is there to serve someone and a business case. At the end of the line are the engineers; they have the domain knowledge of the issue, and the AI is there to help them. So if we involve them in the process, it's like achieving a good market fit: you need to know if your system actually answers the question, and you need to test. So you need good metrics, metrics that really measure your issue or your business case. Exactly, exactly. And I heard a lecture about this, comparing it a little with the internet in the mid to late 90s when it came along. We need to understand how we can use this technology in a good way, and I guess we need to train ourselves, the same way we trained ourselves to get the fantastic opportunities of the internet, on this AI development we're experiencing. Yes, I agree. Something we need to be aware of as well is that we shouldn't be rigid. For instance, you mentioned the internet, but when SMS was invented, nobody thought it would become anything; they thought people would just talk on the phone. Now we see that everybody is texting; even now we are just writing. So we need to be flexible: what we predict now might not turn out to be the truth. Have a red thread, but don't be stubborn; be flexible, because you might have to pivot in some way. Exactly. Well, thank you very much for your presentation; it is a very interesting area to get a better understanding of. We will now have a short break while setting up the next session, which will start in about five minutes. So fill your coffee cups, prepare your keyboards for some questions, and we'll be ready in a few minutes for the next presenters. Welcome back to this next session. This is really close to my heart: adapting research into commercial software. We have had an engagement, a collaboration, with NGI, the Norwegian Geotechnical Institute, over the last year or so, looking at how we can take their algorithms and their research and deploy them into commercial software that third-party companies working in the carbon capture and storage domain can use. With me today I have Jung Chan Choi from NGI and Tudor from Cegal, and you will guide us through how you have worked together on this particular project. Jung Chan, you have been working a lot on geomechanics, both on the static but mostly on the dynamic side of things, and Tudor, you have been the enabler of this, if you like, through the development into commercial software. So whenever you're ready, Jung Chan, please start your presentation. Okay, thanks for the kind introduction. Good morning, everyone. My name is Jung Chan Choi, as mentioned, and I'm working at NGI. Today I would like to introduce a collaboration between NGI and Cegal.
As I mentioned, the goal of our collaboration is to find an efficient way to deliver the knowledge developed in our main research projects to commercial software. This presentation has been prepared together with Tudor from Cegal, and we will share the presentation between us. First I would like to introduce NGI's research on fault geomechanics and geomechanical risk assessment, and the associated tools and workflow. Then I will briefly address the challenges in knowledge transfer at the end of my part, and Tudor will take over and show you how these challenges in communication and knowledge transfer have been solved through Cegal's software, particularly Prism. In this collaboration we have focused on fault geomechanics, and the reason is the complex and inhomogeneous environment in the reservoir and its surroundings. As you know, injecting CO2 into the reservoir is not like injecting fluid into a container. The reservoirs where we need to inject CO2 have some degree of fractures and faults, and the scale of the fractures can range from a few centimeters to a few kilometers; at the scale of kilometers and tens of meters we call them faults. These faults and fractures can be known or unknown. The important point is that they are among the weakest elements: if there is a small change in the stress condition around them caused by injecting CO2, those fractures and faults can easily be reactivated, and that can open a leakage pathway. That's why it is very crucial to screen for such critically oriented faults in the early phase of CO2 storage development. For this reason, NGI has put a large effort into developing a streamlined workflow and tools that can be used for robust fault stability assessment. The workflow integrates the geological setting and the inputs from the different disciplines, particularly in the early phase of field development, where we need to do a preliminary screening analysis. One of the challenges in this early-phase analysis is the lack of information. It's probably the same for the other disciplines, but typically it is very difficult to get good data coverage in the early phase of field development, and the data quality is often quite poor, so the inputs carry a lot of uncertainty. For this preliminary analysis, how to properly propagate the contribution of such uncertain inputs to the output is one of the keys. So NGI has developed an in-house tool that can analyze the stresses acting on the fractures and faults, and assess the stability of the faults in both deterministic and stochastic manners. This slide shows one example of how this process can be used to optimize the resource planning.
The tool is a Python library, so people can easily import it, assign the input parameters, including their uncertainty ranges, to an instance or object, and the tool then generates the distribution of the stability value, as shown in the histogram in the middle of this slide. It also computes the probability of failure and the P10, P50 and P90 values. In addition, the tool can provide input sensitivity, meaning the contribution of the uncertainty in each input parameter to the uncertainty in the stability value. In the right panel of this slide you can see a bar chart; in this particular example, the uncertainty of the stability value is mostly governed by the uncertainty in the parameter related to the horizontal stress condition. This result indicates that, in order to reduce the uncertainty in the prediction, reducing the uncertainty in that parameter is the best way, better than reducing the uncertainty in the other parameters. So the operator can put more effort into, for instance, in-situ stress tests to reduce the uncertainty in the horizontal stress, rather than putting more effort into lab tests or fault interpretation. Based on the sensitivity values, the model can be updated to optimize the resource planning. This is one example of how the tool can be used in the early phase of field development.
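NGI's tool itself is not public, but the stochastic screening idea it describes can be sketched in a few lines of Python: draw the uncertain inputs, propagate them through a stability function, and report P10/P50/P90, probability of failure, and rank-correlation sensitivities. Everything below, the stability proxy and the parameter ranges, is hypothetical and only shows the mechanics of uncertainty propagation, not NGI's actual model:

```python
# An illustrative Monte Carlo sketch of stochastic fault-stability screening.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 100_000

# Uncertain inputs (uniform ranges stand in for whatever a study supports).
sh_min = rng.uniform(28.0, 36.0, n)     # min horizontal stress [MPa]
pore_p = rng.uniform(18.0, 24.0, n)     # pore pressure after injection [MPa]
friction = rng.uniform(0.45, 0.70, n)   # fault friction coefficient
cohesion = rng.uniform(0.0, 2.0, n)     # fault cohesion [MPa]

# Toy stability margin: shear capacity minus mobilized shear on the fault.
# Positive = stable, negative = reactivation. Real tools resolve stresses
# onto each mapped fault plane; this scalar proxy is only for illustration.
effective_normal = sh_min - pore_p
capacity = cohesion + friction * effective_normal
mobilized = 0.6 * effective_normal + 2.0
margin = capacity - mobilized

p10, p50, p90 = np.percentile(margin, [10, 50, 90])
prob_failure = np.mean(margin < 0.0)
print(f"P10={p10:.2f}  P50={p50:.2f}  P90={p90:.2f} MPa, P(fail)={prob_failure:.1%}")

# Input sensitivity: which uncertain input drives the output uncertainty?
for name, x in [("sh_min", sh_min), ("pore_p", pore_p),
                ("friction", friction), ("cohesion", cohesion)]:
    rho, _ = spearmanr(x, margin)
    print(f"{name:9s} rank correlation with margin: {rho:+.2f}")
```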
Then, if we need a more detailed risk assessment, we can move to finite element modeling. For finite element modeling we need more information than for the preliminary analysis: we need to import the reservoir pressure from the reservoir simulators, more horizons from the geological model, and geophysical and petrophysical input; sometimes we also want to assess the effect of CO2 injection on seismic hazard. We also need to implement detailed geotechnical information to build such a detailed 3D geomechanical model. One of the challenges is therefore to extract the data from the other disciplines and integrate it into a form readable by the geomechanical model. For this we have developed in-house Python scripts together with Fortran user subroutines, but although we have this procedure, it is quite user-unfriendly, particularly when we try to import data from other disciplines: it needs a lot of manual work and adjustment. So this can be one of the hurdles when we try to deliver our technology to industry. To transfer our knowledge to the industry, we need a more seamless, integrated workflow that can easily be combined with Petrel and the reservoir simulators. In addition to the challenge in model building, we also met challenges in the deliverables and the communication with other disciplines, because in most cases the end result of our finite element modeling or risk assessment is a report, a text file such as an ASCII file, or output in some third-party software. In many cases, people in the other disciplines want to see the results in common platforms such as Petrel. If we can provide our information in Petrel or a similar software platform, it widens the applicability of our research products for real field cases; if we can't, it limits what we can deliver. So we saw great potential in Cegal's software, and that motivated us to collaborate with Cegal. Tudor will now take over this presentation and show you how these challenges have been solved through Cegal's software, particularly Prism. Yeah, thank you, Jung Chan. So clearly NGI wants to share their tools and make it easy for industry professionals to use them, as it's a lot of work for them to be running the analyses and interpreting the results on the users' behalf. At the same time, they want to develop vendor-neutral solutions and not tie themselves too deeply to any technology choice. This way NGI can provide services to as large a market as possible, and NGI achieves vendor neutrality by reading and writing open file formats. Now, this comes at the expense of the user experience of their tools, as they become harder to use for users of other applications. Exporting the right data in the correct format from proprietary applications is a time-consuming and error-prone process, and relating the insights from the analysis to existing data is not very convenient if it's presented outside of the context where all your other data is. Similarly, importing the results of the analysis back into these big applications can be as challenging as exporting. So we've been exploring the use of Cegal Prism for a lightweight integration with Petrel. Petrel is a proprietary software platform with significant market penetration, developed by SLB, and we think Prism can make it easier to share the tools with Petrel users while supporting NGI's overall goal of vendor neutrality. Prism is a system made by Cegal that enables hybrid cloud analysis of data sitting in Petrel. It has components that integrate with Petrel, a message broker that relays messages between services, its own auth service, and code runners that run Python workloads on users' data. It can be run in a local mode, where execution of the Python workloads is constrained to the user's machine, but it can also be run in a hybrid cloud mode, where jobs can be handled by cloud services. As Prism is a relatively complex distributed system, different users will be interacting with different parts of it. For the end user, it's a dialog pane presented as a Petrel plugin, where they can choose different workflows to transform and manipulate the data sitting in their Petrel projects. In the Petrel pane they get familiar controls where they can select Petrel objects, and in general they shouldn't see any of the implementation details of the Prism system; it should feel like any other plugin that they use. For developers, it's a small Python wrapper that enables them to easily pull and push data from and to Petrel, and a way of integrating their solution into Petrel as a native UI. And from the Python wrapper you can pretty much do anything you want. You have a Python library? Great, then you can call the functions from the wrapper.
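As an illustration of that library pattern, a Prism-style workflow might look roughly like the sketch below. The module and method names (prism_sdk, get_surface, put_surface) are invented, since the talk doesn't show the actual wrapper API:

```python
# A hypothetical sketch of a Prism-style Python workflow: pull a Petrel
# object, transform it in plain Python, push the result back.
import numpy as np
from scipy.ndimage import uniform_filter

import prism_sdk  # hypothetical wrapper that relays data to and from Petrel

def smooth_surface_workflow(ctx):
    # Pull: fetch the object the user selected in the plugin pane.
    surface = ctx.get_surface("input_surface")      # hypothetical getter
    # Transform: any Python code can run here, locally or on a cloud runner.
    smoothed = uniform_filter(surface, size=3)      # simple 3x3 smoothing
    # Push: write the result back so it appears as a native Petrel object.
    ctx.put_surface("smoothed_surface", smoothed)   # hypothetical setter
```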
If you have an executable binary, you can launch it as a subprocess and use some inter-process communication to interact with it. And if you have a web service, you can hit its API from your Python wrapper. Finally, for the IT department, it's an opportunity to enable users to do Python scripting on their Petrel data without distributing Python to the machines themselves; instead, the Python can optionally run remotely on a centrally managed platform. Systems supporting Prism need to be connected to a Python environment, and the simplest way of doing that is deploying it locally and having the whole system running on the user's machine. But getting Python running on enterprise workstations can be a real pain, so if you integrate with Prism deployed this way, you don't have to deal with that yourself. Running Python remotely has some real advantages, also for the people who make the workflows. Specifically, in the remote setting you can explore solutions requiring deeper integration with third-party applications, and these can be very difficult to deploy to end-user machines. Remote execution environments shift the focus from ensuring that your component works on the user's machine to designing specialized virtual machines or containerized solutions you can deploy to the cloud, and that can in a lot of cases be simpler. As an example, let's return to the workflow for fault stability assessment from the earlier slide. The preliminary screening analysis can all be done from Python, so it's easy to distribute the tool to end users as a standalone Python library. On the other hand, the geomechanical analysis part of the workflow has a dependency on Abaqus. Abaqus is a proprietary finite element modeling solution with a Python API, and since it has a Python API, it can readily be wrapped into a Prism workflow and executed from Petrel by end users. With Prism's support for both local and remote execution of workflows, we can evaluate solutions where Abaqus is run either locally or remotely. So to summarize: by integrating with Petrel, we reduce the amount of work required to move data between Petrel and applications. Writing data back into Petrel also improves the user experience, since industry decision makers can use familiar tools to analyze the results. We have prepared a video of how this looks in practice. It's one minute long, and I've added annotations in the lower right corner; some parts of the video are sped up to maintain brevity while highlighting all essential steps. In the video you will see me running the fault-stability part of RockStar, one of the preliminary assessment tools, as an integrated workflow from Petrel. I will also briefly show some of the source code, to show how simple it is to define the UI elements we present to the user. And finally, I will show how the results of the analysis are available from within Petrel. So here you see the project. It has a plugin view; I find my workflow, and I can set some parameters. Here is the source code: the three highlighted lines are where we define what the controls are for the user. The user doesn't have to know anything about the internals; they can just run it from within Petrel without ever leaving it, and the project is updated with the new data. From here you can visualize the results of the analysis as grid properties, or as fault properties, and if there is anything in particular you're interested in, you can use segment filters and zone filters to target your visualization, just as with any other grid property.
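Returning to the integration patterns from the start of this part, the executable-binary case can be sketched like this. The solver command line and file names are hypothetical, and a real Abaqus integration would use the site's licensed solver invocation rather than this stand-in:

```python
# A minimal sketch of the "executable binary" integration pattern, in the
# spirit of wrapping a finite element run inside a Python workflow.
import subprocess
from pathlib import Path

def run_fem_job(workdir: Path, input_deck: str) -> Path:
    """Launch the solver as a subprocess and block until it finishes."""
    result = subprocess.run(
        ["fem_solver", "--input", input_deck, "--cpus", "4"],  # hypothetical CLI
        cwd=workdir, capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"Solver failed:\n{result.stderr}")
    return workdir / "results.out"  # hypothetical results file

# In a Prism-style workflow this call could run locally or on a cloud code
# runner; the wrapper only needs to parse the results and push them to Petrel.
```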
So as you can see, we have a very natural integration between the developed Python library and Petrel. It's easy to use for the Petrel user, and it's easy and fast to make for the developer who does the integration. To conclude: through our strategic partnership we want to advance carbon capture and storage technologies, and we will do that by leveraging NGI's geomechanics expertise and Cegal's commercialization capabilities. One way we are working to achieve this is by making the tools easier to use by integrating them with Petrel through Prism. This brings state-of-the-art tools to the industry as fast as possible, and providing industry decision makers with the tools amplifies the impact of the researchers and fosters better communication between industry leaders and researchers. With that, I thank you for your attention, and we are open for questions. Thank you very much, Tudor; that was a very interesting presentation, as you all saw. Now we're moving from the high level down to the details, with the technical presentations moving forward. There are two things I'd like to ask about, but one in particular: this cooperation between different academic and other organizations sort of puts you on the edge of what's happening in the research. How do you see other partners coming into an environment like this one moving forward? Is this an easy step to take, to actually test hypotheses and work with commercial software together with academic solutions? Maybe first, Tudor: has this been a smooth interaction in the way it's worked? Well, it doesn't really make anything faster to do as such, but what it does simplify and make more efficient is bringing the proofs of concept into the decision-maker space as fast as possible. The alternative, if they wanted to deliver this themselves, would be to go the route of commercial software or plugins, and that's a big undertaking. So with Prism and this Python wrapper that integrates their solution into the commercial platforms, they will be able to bring the tool to where it makes a big difference, as fast as possible. Thank you. And yes, I apologize, I see suddenly time is flying again. At the end I just want to thank you very much for this. As I said, bridging these gaps is a key thing for me, and I see this is really building that extra momentum to test fast, as you said, out in the industry. With that, I want to thank both of you for your presentations, and we will be back in five minutes with the next one. Thank you very much. Welcome back. It is my pleasure to introduce the next speaker, Ina. Ina has been working for Lundin, now Aker BP, and has focused a lot on using both AI and ML for seismic imaging, one of the key areas that AI has been very focused on through this whole period.
She's been working with these scripts and providing the information to the end users, making sure that not only the experts but everyone within the Lundin or Aker BP organization is enabled with the results. Ina has a PhD in geophysics on automated seismic interpretation and has been working, as I mentioned, for Lundin and Aker BP to optimize the value of the seismic data. I've been lucky enough to watch a few of the things you've done, and I'm very much looking forward to this presentation. So when you're ready, please feel free to start. Yes, I think I'm ready. Thank you for the introduction. As said, my name is Ina; my title now is Data Scientist, but my background is really in geophysics. In this talk I want to focus on how we have used machine learning in seismic interpretation and seismic processing, and specifically on how we have prepared generalized pre-trained models for seismic processing. What I will focus on is the way to go from proof of concept to integrated, AI-driven seismic workflows, and I'll share some of the lessons we have learned along the way. And then, if I have some time towards the end, I will briefly mention some new stuff we're working on now, just as a little fun detour at the end. Okay. Our in-house processing and imaging team focuses on routine and project-specific processing of our seismic data, and this is really done together with our geologists and interpreters. Our goal is always to achieve the best possible seismic images, so that we can get as much information as possible out of our seismic data. Seismic data is really expensive, and obviously we need to extract as much information as we can from it. In the last few years we have seen several breakthroughs in AI-driven workflows that can be applied to seismic data, for both interpretation and processing purposes. But there has been quite a long way from the proof-of-concept stage to actually having AI-driven workflows that can replace the existing ones. Some things that have helped: we now have really powerful workstations and GPUs available; there is a lot of easy-to-use open-source code and libraries such as TensorFlow; and there are a lot of publications out there, maybe even more from other domains than seismic, but there is a lot we can borrow or steal from, for example, the image processing domain, medical imaging, and even language models. And of course there's the notion of maturity: we have to integrate geoscience and data science, and that has been somewhat of a journey. We are not a software company, we don't make software, and we're not a service company, so we don't provide AI services. But we still want to take advantage of state-of-the-art technology. So the question is: how can we do that and still be at the forefront? We know machine learning is, in general, good at imitating workflows, and we can therefore train machine learning models to learn geophysical workflows. One quite simple example is to train a machine learning model to, for example, detect or remove unwanted signal from seismic data. On the left side of my slide here you see an example of how we can do that. We first need to prepare some sort of training data.
Then we need to train our model, and finally apply the model to our seismic data. And it's really the first step, the preparation of training data, which is particularly tricky; it's one of the most crucial elements in seismic machine learning tasks. So I want to talk a little about how we can prepare our training data. There are really two main ways of doing it. One is what we have called the copycat method, which is to manually prepare labels. In this approach you need to apply the conventional geophysical algorithm first, so that you have your training data before and after some sort of processing. Obviously, when you prepare training data this way, you cannot expect better results than your original algorithm, hence the name. But you can still save a lot of time, and it's still super valuable, because you can train models that can be applied to huge datasets. The other way of preparing training data is what we have called a model-driven approach, where we prepare synthetic training data. This is quite a lot of work, because it's difficult to create synthetic seismic data that looks like real seismic data. But it gives us the possibility to train on essentially perfect training data: if we want to remove noise, for example, we can start with data that we know is completely noise free, add synthetic noise, and we have a perfect training pair. That is actually the example I have on my next slide. It's a very simplified slide, and I want to note that even though all my examples here are 2D figures, the approach is applied on 3D data with 3D synthetics. In this example I show how you can prepare synthetic seismic training data so that you end up with synthetics with and without noise. The next step is then to train the machine learning model to remove the unwanted noise. And because we train on synthetic data, we can represent many different seismic characteristics, so that our final model is really dataset-independent and can be applied to, let's say, any seismic dataset. There is an example on the right side of the slide, where you see the input and output, before and after we have applied the trained model to remove noise. A very simple example, of course; I have some other examples on my next slide. Here, going from the left, you see an example where we have attenuated some multiples. This is done with a generalized pre-trained model that has been trained only on synthetic data, applied here to real seismic data. So the first example is removal of multiples, unwanted energy. In the next one we are removing criss-crossing, dipping noise, and in the third example we are removing the typical migration artifacts that we see in a lot of our seismic volumes, often referred to as smiles. Having these pre-trained models allows us to apply them to our datasets right away: we don't need any parameter setting or tuning of the models or anything like that. So it allows us to process the seismic data a lot faster than if we had to do it with conventional seismic processing algorithms. But we also want to make these models available to our interpreters, the people actually looking at the seismic data.
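A minimal TensorFlow sketch of the model-driven approach might look like the following. The data here is random noise standing in for real 3D synthetics, and the network is far smaller than anything used in production:

```python
# A minimal sketch: train a small convolutional network on synthetic
# clean/noisy pairs, then apply it to unseen data. Shapes and architecture
# are illustrative only; the real models described in the talk are 3D.
import numpy as np
import tensorflow as tf

# Synthetic "clean" sections plus additive noise give perfect training pairs.
clean = np.random.randn(256, 64, 64, 1).astype("float32")
noisy = clean + 0.3 * np.random.randn(*clean.shape).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same"),  # predicted clean section
])
model.compile(optimizer="adam", loss="mse")
model.fit(noisy, clean, epochs=5, batch_size=32, validation_split=0.1)

# Because training covered many synthetic characteristics, the trained model
# can be applied to a new dataset with no per-dataset parameter tuning.
new_noisy_section = np.random.randn(1, 64, 64, 1).astype("float32")
denoised = model.predict(new_noisy_section)
```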
Of course, we could have just applied the models to their data for them, but that's not as fun, and we would probably also lose a lot of the value. So what we did, back in Lundin and now in Aker BP, is that we used Cegal's Python Tool Pro, which is part of the Prism system that Tudor talked about in the previous presentation. We took our in-house Python code and made a small in-house GUI where a user could select the seismic data they were working on and select the pre-trained model they wanted to apply. The GUI, through Python Tool Pro, connected directly to Petrel, so the user only ever saw Petrel, and the input and output seismic lived only in the seismic software platform. One very important benefit of this is that we avoid any data export and import, which we usually would have to do to run processing on seismic data. So with the GUI and Python Tool Pro, we allowed our interpreters to get super fast results directly in their interpretation platform; you see a screenshot of Petrel down in the right corner. I'm just going to flip back and forth to show an example of what before and after application of one of our models could look like. In just one click, without really having to understand any data processing or any machine learning, the interpreters could get an on-the-fly result in their seismic workstation, our software platform. Now, these are not company rules, just some of my bullet points, but as I also mentioned in the introduction: we really want to be able to use state-of-the-art technology; however, we do not want to wait until the technology is incorporated in our everyday software, because that takes too long, and we don't want to develop new software, because we're not a software company. We are able to do some in-house coding; however, we know from experience that such code will be used by only a very few handful of people. The one-click solution together with Python Tool Pro is one example that has been a success, where we have been able to make these workflows available to the interpreters. However, we cannot really create these GUIs every time we write up some code, and we don't really have the capacity to maintain them; for example, you can see that the design of the GUIs is outdated by a year or so. So that's my bridge over to the Python Workflow Runner. I was super excited to try and test this Workflow Runner early, because it could potentially solve some of our issues. Tudor talked about the Workflow Runner in the previous presentation, so I'm not going to present what lies behind it; I'm just going to show super quickly how I used it. What I was able to do, in really just a few short hours, was to get some of my existing Python code up as workflows in Petrel. The Workflow Runner allows us to create Petrel workflows by only adding a few lines of code to our scripts. Here I'm showing the one-click application, now as a Petrel workflow. So this means that just by changing a few lines of code, we're now able to have the one-click directly in Petrel as a workflow, and we don't even need the GUI anymore. We don't need to update the design or maintain our GUI code anymore, which is super nice. And also, we know our Petrel users: they like to stay in Petrel; they don't want to open any external UIs or other software.
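The Workflow Runner API itself isn't shown in the talk, but the "few lines of code" idea can be sketched roughly as below. The decorator and parameter classes are invented for illustration and will differ from Cegal's actual API:

```python
# A hypothetical sketch: annotate an existing Python function so it can be
# exposed as a Petrel workflow. The import, decorator, and parameter classes
# (cegal_workflow_runner, CubeInput, CubeOutput) are invented names.
from cegal_workflow_runner import workflow, CubeInput, CubeOutput  # hypothetical

def apply_pretrained_model(data):
    # Placeholder for the in-house model inference described in the talk.
    return data

@workflow(name="One-click denoise")
def denoise_workflow(seismic_in: CubeInput, seismic_out: CubeOutput) -> None:
    data = seismic_in.to_numpy()           # existing in-house code unchanged
    result = apply_pretrained_model(data)  # pre-trained denoising model
    seismic_out.from_numpy(result)         # result lands back in Petrel
```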
And again, same as with Python Tool Pro and our previous GUIs, we're still handling everything in Petrel, and we have our input and output data only in Petrel. I just want to summarize some of our lessons learned before I move on to something a bit new. Something we have learned is that it's super fun to try to take advantage of state-of-the-art AI workflows, and proofs of concept are super fun to make, but there is a significant effort required to create tools that we can actually use. We have learned that the new tools we create have to be super easy to use, like one-click solutions, or they have to provide significantly better results than an existing method, or solve something that existing methods couldn't solve. AI-driven processing tools let our in-house processors focus on the more special and advanced cases, so we are able to outsource a lot of the more standard jobs to the machine. We really need our in-house coders, the people with the knowledge, to be integrated with the interpreters and processors, the domain experts; if not, we're going to make tools that no one needs. And finally, software that has Python APIs or Python plugins, for example Python Tool Pro in connection with Petrel, or now the Python Workflow Runner: these are tools that really let us integrate the technology and our in-house code with our existing software platforms, super fast, and this is what really facilitates actual use of our workflows. As I said in the beginning, I wanted to briefly touch upon some new stuff that we're working on right now, just a couple of minutes of fun stuff. What we have looked into is how we can take advantage of everything happening in language processing nowadays, and how we can use it in seismic processing. This really falls under the umbrella of taking advantage of state-of-the-art technology, because recently there has been a significant and rapid development in natural language processing, much thanks to the transformer architectures. Everyone knows ChatGPT, or OpenAI's GPT; a lot of people probably also know Google's BERT transformers, which is what we have focused on here. The BERT language model consists of two phases: a pre-training phase and a fine-tuning phase; this is in language processing, at least so far. In the pre-training phase, the BERT model learns to predict missing words in a sentence, and in the fine-tuning phase, the BERT model learns to solve some specific task with labeled data. Last year, some researchers at KAUST proposed an adapted use of BERT, showing how it could be used on seismic data in a framework they called StorSeismic. In this framework, the pre-training learns to predict missing seismic traces in a seismic gather or a seismic line, and then in the fine-tuning you can train on labeled data and solve, for example, some of the cases I showed earlier: denoising, removal of some unwanted energy, or detecting specific features. The real advantage here is that the pre-training phase allows the model to really familiarize itself with real seismic data, and in the fine-tuning you only have to make a smaller, for example synthetic, dataset.
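The pre-training task can be sketched in a few lines of numpy: hide random traces in a gather and ask the model to reconstruct them. Only the data preparation is shown here; the transformer itself is omitted, and the array sizes are arbitrary:

```python
# A minimal sketch of the StorSeismic-style pre-training task: mask random
# traces in a gather (trace = "word", gather = "sentence") so a model can
# be trained to reconstruct them.
import numpy as np

rng = np.random.default_rng(0)

def mask_traces(gather: np.ndarray, mask_fraction: float = 0.15):
    """Zero out a random subset of traces; return (masked gather, indices)."""
    n_traces = gather.shape[0]
    n_mask = max(1, int(mask_fraction * n_traces))
    idx = rng.choice(n_traces, size=n_mask, replace=False)
    masked = gather.copy()
    masked[idx] = 0.0
    return masked, idx

gather = rng.standard_normal((64, 512))   # 64 traces x 512 time samples
masked, idx = mask_traces(gather)
# Pre-training target: the model predicts gather[idx] from masked input;
# fine-tuning then swaps the target for a labeled task such as denoising.
```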
So we tried this approach on our own data, just as the KAUST researchers proposed in StorSeismic. We pre-train our model on real field data: we mask some traces in a seismic gather, letting a seismic trace represent a word and the seismic gather represent a sentence, and train the pre-trained model on that. In the next step, the fine-tuning, we introduce synthetic seismic data: we fine-tune on a labeled dataset where we have perfect training pairs with and without noise. And finally, the model can be applied to real seismic data again and solve the specific task, but still with the benefit of really knowing what real seismic characteristics look like. So that was a little detour, just introducing some new stuff at the end, and that was my last slide. So now I guess it's over to you, and thank you for letting me present here. Well, it's always exciting with new stuff; that's really good to see. Our next speaker will actually talk about large language models, so it's very interesting to see what you're doing with language models on seismic. We have time for one question, and I guess an interesting one concerns the use of your imaging techniques on horizon slices, especially towards sand injectites. Have you tried this imaging on geological features; have you tested that in any of your work? You mean for detecting features? Yes, horizon slices, or horizontal features if you like, in the seismic. So, like pretty much everyone else, we have our own fault model that can detect faults. We have not really focused on any other geobodies, like injectites, in our work, because with the one-click we focused mainly on the processing workflows. However, we have had some external collaborators, other software providers, that have focused on geobodies like injectites. So we have had some projects on it, but we have not developed anything in-house. But I guess the StorSeismic approach, or the language models, can probably be of great use here, because we can tune them much more towards specific datasets, which has been a problem with the synthetics, because geobodies look so different from dataset to dataset. Yeah. And I guess a final question, more towards white papers or enabling this for others: do you have any articles that share some of the processing techniques and pre-trained models you have made, that are available for students or others listening to this presentation? We have not published code, unfortunately. We do have a paper on generating the synthetic seismic data and training the model and all of that; I think that was in Geophysics in 2021, and it's the exact same approach as we have used on the rest of our examples. And then there should be lots of abstracts in the EAGE database. But we don't have an article specifically on the one-click, and unfortunately it's not on GitHub, which we should have had. Okay, thank you very much, Ina. I see time is flying; we have about five minutes before the next presentation.
But again, I want to thank you very much for your time, and we look forward to seeing how this develops moving forward. So we'll have a quick five-minute break now, and then we will be ready for the next presentation. There you go. Welcome back to this session, where I'm very happy to have Thomas Grant with me today, here with only a fern between us. Thomas has a very interesting background. He is very much into AI, looking into artificial intelligence and working on that, and at the same time he has a geological background, which is a fantastic combination. What you're going to talk about here today, Thomas, is how large language models can really change our industry and how they can help. It was really interesting in the last presentation from Ina to see how they were starting to use language models on seismic, so I'm very excited to hear what you're presenting. So when you're -ready, feel free to start. -Okay. Thank you very much for the introduction. I'll be talking about how AI, so artificial intelligence and large language models, or LLMs, can change the energy industry. I'll start off by giving a definition, I'll go through some of the background, and then I'll give you some examples of the things we can do with this kind of technology. So what are large language models? They are neural network models that are pre-trained on massive amounts of text data. Now, language models have been around for years, but over the last year or so they have suddenly become significantly bigger and more powerful. The current, leading models have tens, if not hundreds, of billions of parameters and are trained over enormous datasets. Before, the models we were talking about had ten, maybe a few hundred, million parameters, so they've gone up by orders of magnitude in size. Also, the sheer amount of data that these models are trained on has got much bigger. The GPT models, for example, are trained on all of Wikipedia, thousands and thousands of books, and also a huge amount of information scraped from the internet. These models use something called the self-attention mechanism. This is an algorithm, or neural network architecture, that helps the model efficiently learn complex patterns and semantics from human language. It's much more powerful and much more effective than previous model architectures. This means we can build models that are highly skilled at taking in any text input and returning a human-level text response. With some of the earlier models, you could get them to do things like write a screenplay for a sequel to your favorite movie franchise. Usually it would come up with some funny or clunky answers: certainly entertaining, but not useful. Whereas with the most recent models, we can actually do pretty useful things because of their high level of text generation. So these kinds of models are ones you may be familiar with, such as OpenAI's GPT models, Meta's Llama, or Google's Bard. There are tens, hundreds, if not thousands of these different language models out there, trained on slightly different datasets or with slightly different skill sets. Generally, we call these very big models foundation models, because they have a very broad range of skills and we can use them for many different tasks. We can also, if we want, fine-tune them to be highly skilled at a very specific task. I won't be talking too much about fine-tuning.
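As a rough illustration of the self-attention mechanism described a moment ago, here is a minimal single-head sketch in plain numpy; the weight matrices and dimensions are toy assumptions, not any production model's internals:

```python
import numpy as np

def self_attention(x: np.ndarray, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x: (seq_len, d_model) token embeddings; wq/wk/wv: (d_model, d_k) weights.
    Every position attends to every other position, which is what lets the
    model pick up long-range patterns in text.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v                                # weighted mix of values

d_model, d_k, seq_len = 16, 8, 5
rng = np.random.default_rng(0)
out = self_attention(rng.normal(size=(seq_len, d_model)),
                     *(rng.normal(size=(d_model, d_k)) for _ in range(3)))
```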
I'll be talking about how we can use these language models, and how we can build around them to solve specific challenges. There are two things I hope to convince you of today through my examples. The first is that I believe language models are going to change how we interact with our data in the future, and it's important to stress this word, interact. It will change how we access our data and how we work with our data. This might mean that we can save time doing the tasks that we already do, or it might mean that we can do entirely new tasks that we weren't able to do before, for example because of a lack of coding knowledge. The second is to view these AI-based tools as assistants, not decision makers. To give you an example: early in the year I wrote a blog about AI, and I was struggling to get my writing going, so I just chatted to ChatGPT to get some ideas about the type of things I could include in that blog post. I didn't use much of what it suggested, but it did get me going on my writing, so it assisted me in that task. But I was the decision maker about what was included in that blog post. So it's important to view them as imperfect but very useful assistants. Now, in the energy industry we work with a huge range of different types of data. Our data could be very structured, or it could have no structure whatsoever. We could be receiving huge volumes of sensor data or time-series data every single second. We might be working with different modalities, so we've got numerical data, image data, or audio data as well as text data. And all the while, these different data types could be sitting in different applications or different data sources. What we're looking for is an AI-powered system that will help us interact with these different data types and data sources to do certain things. This might be to extract relevant information: perhaps we have a lot of reports, text information, and we want to get information out of them and do something with it. Perhaps we want to transform unstructured data into something more structured and useful. And maybe we want to summarize a large volume of data into something that's easier to digest and read quickly, and then ask further questions about that data if we want to. In order to build these kinds of systems, we need some sort of framework to build around these language models. The framework I'll be talking about today is LangChain. This is an open-source Python library that gives us all the tools we need to build these kinds of systems. There are other frameworks to be aware of, such as the one from OpenAI: they have an API, and in the last few days they released a kind of no-code platform for building AI-based applications. We also have LlamaIndex and Microsoft's Semantic Kernel. Now, the ideas and concepts in all of these frameworks are largely the same, but some of the terminology is a little bit different. I'll be sticking with LangChain just to be consistent, so let me explain a little bit about what you can do with it. It has things like document loaders and utilities. This makes it very easy to get data out of the files or systems we want to get data from, such as a PDF file: we can easily break that up into usable chunks of information that can be used by our models.
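As a hedged sketch of that document-loading step, something like the following works in classic versions of LangChain; the import paths have moved between releases, and the file name and chunk sizes are hypothetical:

```python
# Minimal PDF load-and-chunk sketch. PyPDFLoader needs the pypdf package;
# "well_report.pdf" is a hypothetical file name.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = PyPDFLoader("well_report.pdf")      # yields one Document per page
pages = loader.load()

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,     # characters per chunk
    chunk_overlap=100,   # overlap so facts aren't cut in half
)
chunks = splitter.split_documents(pages)      # usable pieces for a model
```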
We have access to a large number of different language models; as I said, there are many out there. For me as a developer, there are certain language models that are trained on massive amounts of code, and they're really good at producing executable code, so that might be of more interest to me than something more general like a GPT model. But you can pick and choose the models you want. We also have control over our prompts. Now, you may have heard the word prompt before, so I'll go into a bit of detail about what it includes. It includes the user question, what you want the model to do. It can also include a chat history, the previous questions and answers you've had with that model, and a context. A context can be a set of instructions that you give the model, say, "be concise", or "if you don't know the answer, don't make something up, just say that you don't know". This kind of text input as a context can really constrain the model's behavior. You can even get it to answer you like a pirate if you really want to. We can also do things like chains, so we don't have to be constrained to only one model working at one time: we can have multiple models, for example working in parallel, to solve more complicated tasks. We also have inbuilt memory handling, because when we extract text, say from a PDF document, our models can't consume that directly. We often need to convert it into a numerical representation of the text, and we use something called an embedding model to do that. Once we've created that numerical representation, we need to store it, and typically we store it in something called a vector database. I won't go into the details too much here, but it's well worth reading up on this and other database types. Now, the real part that makes everything work and takes this to the next level is agents, so I'm going to spend a bit of time explaining what agents are, what they can do, and how important they are, and I'll give some examples later on. Agents are a self-governing mechanism for a language model. We know that language models are pretty good at creating a detailed, bullet-point-type plan of something; like that task where I was trying to start a blog post, it gave me a list of things I could consider putting in. So we know they're good at writing lists. We use the language model to generate a sequence of actions to solve the task you've asked it to do. Now, it can not only plan, it can also act: it can actually do things by having access to tools. The GPT-3.5 model has world knowledge up to about late 2021, so if you ask it about recent events, it just won't know. But an agent can use a tool, such as an external API like Google Search, to find relevant recent information and provide an accurate answer. So this is an interaction between a language model and an external tool, to not only plan but also take action on the user's request. We can also do things like use a Python or SQL executor: not only write the code, but execute that code and return a response. I'll show you this in action later on. So the use of agents solves a lot of the deficiencies of some of these foundation models, like the lack of world knowledge.
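The embedding-and-vector-database step described above might look roughly like this in classic LangChain, assuming an OpenAI API key is configured; the toy documents and the query stand in for real PDF chunks:

```python
from langchain.embeddings import OpenAIEmbeddings   # classic import path
from langchain.vectorstores import FAISS            # needs faiss-cpu installed
from langchain.schema import Document

# Toy documents; in practice these would be the PDF chunks from the loader.
chunks = [
    Document(page_content="The well encountered sandstone with oil shows."),
    Document(page_content="Gas was detected in the upper formation."),
]
embeddings = OpenAIEmbeddings()                     # text -> numeric vectors
vector_store = FAISS.from_documents(chunks, embeddings)

# Retrieve the chunks most similar to a question (query is illustrative).
docs = vector_store.similarity_search("hydrocarbon shows in the well", k=2)
```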
Now I'm going to talk about the power of the prompt. We know the prompt includes the query, a context that controls the model's behavior, and perhaps previous chats. And this can be really, really powerful if you know how to use it. Here's an example. We have just one model; we don't change the model or swap it out, we have just one language model available. But we can get it to do entirely different tasks through our prompt. If our prompt says "classify the sentiment in the following", essentially a classification scheme for whether reviews on a website are positive or negative, we only need to use the word "classify" and it's going to understand that that is the task it needs to do. Using the same model, we can do question answering: "answer the following question", or "answer the question based on the following information". So we've moved away from a situation years ago where we needed different models for different tasks, to having one model that can generalize over many different jobs. We can take this even further. Here's a really nice study from the Stanford AI group. What they wanted to do was a natural language processing task, so they created associations between words that were completely unnatural and would never have existed in the GPT model's training data: calling a sport, like golf, an "animal", calling an animal, like a horse, a "plant/vegetable", and so on. We only give it a few examples of these unnatural associations, and then when we give it a few more things we'd like it to assign to the correct category, such as monkey or panda, it says "plant/vegetable", or cucumber, and it says "sport". So it has not only understood that the task is classification, it has also accurately learned from these examples what it needs to do. This is a truly amazing capability of some of these foundation models, that they're able to learn new tasks and do new things. Now, I can use this to do something quite practical that you may want to do in an energy industry or oil and gas environment. Quite often companies have a naming convention for their wellbore names. So what I'm going to do is just, in the context, explain what I want: converting wellbore names to a specific format, and if the wellbore name ends in a letter, converting the letter to match the description. I then give it input-output pairs, just three examples of well names for it to learn from, and then I provide a new well name with a letter at the end. You'll notice that the examples I gave it initially have no letter at the end, so it hasn't seen any real examples of this. It was able to learn from the context and those examples, and give me a correctly formatted well name at the end. Now, I've taken this to a kind of ridiculous situation; we would probably never have a well-naming convention like this. But it means that you could examine your documents and always return the well names in the correct format, and you would never have to write any code or build a system for this. You can just write it in natural language, in the context, and with just a handful of examples we can do this kind of thing. Often we call this in-context learning, or zero-shot or few-shot learning.
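A sketch of what such a few-shot prompt could look like in code, using the OpenAI Python client; the naming convention, the example pairs, and the expected output are all hypothetical, not the real demo:

```python
# Assumes the openai package (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

prompt = """Convert wellbore names to the format NO_<block>/<number>.
If the wellbore name ends in a letter, append the letter after a dash.

Input: 15/9-19    -> Output: NO_15/9-19
Input: 34/10-21   -> Output: NO_34/10-21
Input: 6608/10-2  -> Output: NO_6608/10-2

Input: 31/2-5 R   -> Output:"""

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)   # hoped-for answer: NO_31/2-5-R
```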
What about if we want to work with company data? Foundation models will not have been trained on the specific, important information that we need to get right when we query them. One of the most popular methods for working with this is called retrieval augmented generation. We have a user question about a domain knowledge base, and we're going to use, essentially, a search algorithm to retrieve relevant information from that domain knowledge base. We inject those found documents, together with the query, into the context, and then the large language model generates its response from that context, from the relevant information we've found and the user query, and gives a source-informed answer. This improves the quality of the answer, and it improves trust, because we can always go back to those source documents: we know where we got that information from, it hasn't hallucinated, and we have an audit trail. So this is a good way of building trust in the model's answers. I can give you an example of this in action. Here I have a simple context: only use the following pieces of information to answer the question; if you don't know the answer, just say you don't know, don't make anything up; use three sentences maximum and keep the answer as concise as possible. Now, tell me about the lithology and the hydrocarbon types found in the well. I've supplied it with a PDF well report; I've extracted the text and embedded it as a numerical representation in the vector store it has access to. It uses a search algorithm to find relevant pieces of information from that PDF file, and it knows where in the PDF that information was retrieved from. It comes back and tells me about the lithologies, about the formations in the well, and a bit about the hydrocarbon types found in that well. A really nice, simple summary: I got the information I needed, and I could always ask further questions about that data. Now, what about numerical data? We're using language models; surely they can only handle language. But we can actually handle numerical data through the use of an agent. Our agent is going to understand the task we ask it to do, understand that it needs to write some code to do that, write the code, then use a tool it has access to in order to execute that Python code and return an answer to us. So I'm using LangChain here with a Python agent, and I've wrapped this up in a very simple user interface using Streamlit. I'm going to grab a CSV file of production data that I got from the NPD website. I can have a quick look at this data in a table here, and then I can use a language model to start asking questions and interrogating this data. I can ask how many rows there are: it's going to write a small piece of Python, execute that Python, and return a response to me. I can ask which three fields had the highest gas production in 2020: it'll write some Python code to calculate that, get an answer, and write it back to me in a worded way. This means that you could do basic data analytics tasks without knowing any programming whatsoever. It's like having a data analyst assistant at your fingertips. We can do something more complicated. Here's some well data that I took from the FORCE machine learning project, multiple wells, with properties such as porosity, density, and V-shale. I want to do something more complex: calculate the effective porosity for each well, and then find the well with the highest effective porosity. So it's going to write some Python, calculate the effective porosity, then find which well has the highest effective porosity and write that back to me. It's a super nice way of finding which well might be of interest to me.
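A minimal sketch along the lines of that CSV demo; create_pandas_dataframe_agent has lived in different LangChain packages across versions, and "production.csv" with its columns is a hypothetical stand-in for the NPD data:

```python
import pandas as pd
from langchain.chat_models import ChatOpenAI
from langchain_experimental.agents import create_pandas_dataframe_agent

df = pd.read_csv("production.csv")    # hypothetical field production data
agent = create_pandas_dataframe_agent(
    ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    df,
    verbose=True,   # show the generated pandas code as it runs
    # newer langchain_experimental releases also require
    # allow_dangerous_code=True, since the agent executes Python
)
agent.run("Which three fields had the highest gas production in 2020?")
```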
What about getting structured data from text? How can we transform a lot of text information? Here, what I'm going to do is take a web page, an NPD page, and write a JSON schema: this is the information I want, the well name, the country, the field, the date, et cetera. And we use something called an extraction chain. That extraction chain is going to use multiple instances of the model working in parallel to extract that information and fill in the JSON schema, and so I get structured data from that unstructured information. Now, it did a pretty good job. One thing it got wrong was the country: it said the UK instead of Norway, and that's because Norway was not actually mentioned in the text I gave it. So, as I said earlier, imperfect but very useful assistants, much faster than I would have been able to do it myself, and I could always correct it afterwards.
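The extraction step might be sketched like this with LangChain's classic create_extraction_chain, which is deprecated in newer releases, so treat this as version-dependent; the schema fields mirror the ones named above, and page_text is a hypothetical stand-in for the scraped page content:

```python
from langchain.chains import create_extraction_chain
from langchain.chat_models import ChatOpenAI

schema = {
    "properties": {
        "well_name": {"type": "string"},
        "country": {"type": "string"},
        "field": {"type": "string"},
        "completion_date": {"type": "string"},
    },
    "required": ["well_name"],
}

# Hypothetical scraped text; in practice this comes from a web-page loader.
page_text = "Well 25/4-2 was completed in 1981 on the Heimdal field..."

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = create_extraction_chain(schema, llm)
records = chain.run(page_text)   # list of dicts matching the schema
```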
So how can we combine all these concepts into entirely new ways of working with our data? Imagine you have a user interface much like you would have with ChatGPT. Behind this sit multiple agents based on language models. They have access to APIs that can interact with data, and they can write code to do that. We have access to a retrieval augmented system that can find information relevant to your question. And we can extract and transform data, turning it from unstructured into structured information that's easy to consume. All of this sits on top of a varied range of data and applications covering different data types and data sources. This means we could imagine a workflow like "tell me about well X": agents are going to look at our production data sitting in a SQL database, write a SQL query, retrieve the results, and aggregate the production data to yearly instead of monthly or daily; go to our reports sitting in a local folder as PDF files, summarize the information in those reports that is relevant for our well, and return that to us; and go to our own company reports, say our Word documents or PowerPoint files, find any relevant information, and extract it in a structured manner that's easy to work with. So it opens up an extremely efficient way of working with a huge number of different types of data covering different sources. I really hope I have convinced you that large language models are going to change how we interact with our data in the future. We can solve many different problems using one single language model; for all of those examples, I used just the GPT-3.5 model. What makes it all work is the infrastructure you build around these models, and the design of that is truly key to making this successful. So I hope I convinced you of my theories. -I'd love to hear some questions. -Thank you very much, Thomas. This is a very exciting and new domain for all of us, and I guess your views on the consistency of data are more important than ever. For instance, in this last example you showed, with all the wells and extracting unstructured PDF well-report data: obviously a clean house is the key to a successful result here. I have my thoughts on that, but on this data management and preparing for something like this, do you have any thoughts on its importance and how to do it? Of course, we always come back to things like architecture, data quality, and how we prepare for this. -We can also imagine situations where we use this kind of technology to help us in those tasks of data cleaning, like the example I showed with the well-name conversion. We wouldn't have to write complex regex formulas or work with string formatting and all that complexity; we could get the models to return the correct well-naming format, treating the matching as a simple language-processing task. So I think we can work from both sides. We always want to aim for good, clean, ground-truth data, and I think that's very important to have, but we can also use this technology to help us work with our data in that way too. -Exactly. It's certainly an area where we can combine this numerical data with unstructured data, and at the end of the day, within any of the energy businesses we have been talking about and presenting today, it's about getting better results and predicting better. I always compare with exploration wells: if we can make an exploration well in oil and gas only 5% more accurate, that's a huge saving for the exploration business. And if we can do this in a good way, and I know we will, it again comes down to having control of the data and making sure it's consistent, so you don't get surprised when the data is wrong. -And a good thing as well is that you can always go back to the source with these kinds of systems. So if you're unsure, or if you think something doesn't look quite right, you can always go back to the original data. It's not just pulling things out of nowhere; it has a system around it to make sure we have good data quality. -Again, thank you very much. It's great to have someone in the studio again, and it was great that you had time to present here. We will now take a short break before we start the final presentation today, which I really recommend all of you listen in to, on Python scripting and how that is done with production data. So I'll see you in a few minutes. And again, Thomas, thank you very much for your time and your -presentation. -Thank you. Hi everyone, and welcome back to this final presentation of today's Tech Day. With me now I have Dorsey Bodmer, working with the Center of Excellence in Adnoc, and Dorsey will talk about practicalities, examples, and use cases around using Python scripts in petroleum engineering. As a background, Dorsey has been working with reservoir engineering and applied mathematics for a long time, has been located in multiple locations around the world, and is now working at the Center of Excellence in Adnoc. He has also been participating in the development of AI and ML in reservoir management, where optimization of waterfloods has been one of the key areas. So welcome, Dorsey. It's good to have you with us, and I'm really looking forward to seeing how you are working with these particular issues around Python in Adnoc. So please feel -free to start when you're ready. -Okay. Thank you. Hello everyone. As the moderator has already introduced, I am going to talk about Python scripts in petroleum engineering and share some practical examples, mostly from reservoir engineering. Sorry, can you see my screen now? -Not yet. No. Can you share the screen? -Yes, I'm sharing the screen. It's about to happen. That's mine, so, the other one, and then we are ready to go in a second.
-Yeah, optimizing production is a key thing in Adnoc, like everywhere else now, and we see that the recovery processes are becoming more and more important to understand properly. And I believe now we should be ready to start the presentation, so please go ahead. -Okay. So my presentation is about Python scripts in petroleum engineering, with practical examples, and I will tell you a little bit about this. What is actually the petroleum engineering domain? It includes several disciplines: production engineering, reservoir engineering, and drilling engineering. But in our case I am focusing mostly on reservoir engineering activities. Reservoir engineers use various methods and tools to estimate the reserves and optimize field development and management of the reservoirs and the fields. We also have scripts in the production engineering domain, like virtual metering and similar tools, but in this presentation I will talk about three use cases that we are using. They are probably used widely in the world, but I will discuss them in detail here. The first use case is spatial-temporal reservoir pressure interpolation, the second is the capacitance resistance model, and the last one is production-injection optimization using Python scripts. So, spatial-temporal pressure interpolation, use case number one. The challenge is that the full-field model usually has quite a long update cycle, and history matching is also quite time-consuming and cumbersome. In order to get faster results for the short-term management plan, we need an instrument which can provide reservoir pressure interpolation in time and space in a more data-driven manner. Also, a water-flooded reservoir demands very frequent reservoir pressure estimation, because we want to avoid pressure sinks, etcetera, and we need some kind of alternative to simulation-based interpolation, built on the surveillance data. The proposed solution we have is a data-driven machine learning model which is matched with the measured reservoir pressure and, based on the production and injection data, can fill the gap in time and space to give an estimation of reservoir pressure at any point and any time. In order to do this, we created a solution called the interpolation workflow, and it's based purely on a data-driven approach: we don't use any geological information whatsoever, just the measured pressure, the allocated production and injection data, and PVT data to convert the production and injection into reservoir conditions. The solution is based on Python scripts, actually several Python scripts deployed in our Spotfire dashboard, and as a result we get the interpolated reservoir pressure maps, estimated pressure at any time and space, and an uncertainty evaluation of the pressure. The methodology is quite simple. We have the measured pressure and we have cumulative production and injection data. Based on the neighbors, we collect the cumulative production data and prepare the data for a machine learning algorithm, which does the estimation and fills the gap in time. Then, based on the radius of investigation, it interpolates the pressure data through space. So based on this we can get 2D pressure maps for every time step and get an estimation for any well at any time.
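As a toy sketch of the two ingredients, a regression in time and an inverse-distance interpolation in space within a radius of investigation, not the actual Adnoc workflow, with all features, coordinates, and numbers synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Step 1 (time): learn measured pressure as a function of cumulative
# production/injection features from neighbouring wells (all synthetic here).
X = np.random.rand(200, 4)           # toy features: cum. prod/inj of neighbours
y = 300 - 40 * X[:, 0] + 25 * X[:, 1] + np.random.normal(0, 2, 200)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Step 2 (space): inverse-distance weighting within a radius of investigation.
def idw(xy_wells, p_wells, xy_grid, radius=2000.0, power=2.0):
    d = np.linalg.norm(xy_grid[:, None, :] - xy_wells[None, :, :], axis=-1)
    w = np.where(d < radius, 1.0 / np.maximum(d, 1e-6) ** power, 0.0)
    return (w @ p_wells) / np.maximum(w.sum(axis=1), 1e-12)

# Usage: predict pressure at each well for one time step, then map it.
xy_wells = np.random.rand(12, 2) * 5000
p_wells = model.predict(np.random.rand(12, 4))
grid = np.stack(np.meshgrid(np.linspace(0, 5000, 50),
                            np.linspace(0, 5000, 50)), -1).reshape(-1, 2)
p_map = idw(xy_wells, p_wells, grid)   # 2D pressure map, flattened
```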
So this is a kind of improvement over simple interpolation, because as well as pressure maps it also gives a flavor of the uncertainty: what the difference is between the measured pressure and the predicted pressure. This uncertainty can give us a way to improve the reservoir pressure management. For example, if we have high uncertainty in one area, we need to do some pressure tests in that area, so this can guide the reservoir pressure management plan in the future. In summary for this use case, I show an example of how the measured pressure is matched with the predicted pressure, an example of how it was interpolated, and examples of predicted and measured pressure data. This Python script can run very quickly and give us results in a fast manner for all reservoirs in our area. I will go on to the next use case, which is called the capacitance resistance model. The capacitance resistance model, or CRM, is a data-driven approach to calculate the interwell connectivity between injector and producer. Based on this, we can estimate the connectivity factors between injector and producer and provide optimization for the waterflooding management. The challenges are that full-field modeling, as I already said, requires a lot of effort in history matching and updating, and is often outdated, while injection allocation factors are crucial for proper waterflooding management. The proposed solution is Python-based scripts which create the capacitance resistance model and, based on it, provide the interwell connectivity between injector and producer for the whole history of production and injection. The CRM in theory was developed about 15 years ago, and if you Google it you can find a lot of the papers from its original developers. The main idea is that the capacitance resistance model mimics electrical circuits: as the input signal we use injection, and as the output signal we use production. One key factor for CRM, which defines its applicability, is that the injection data needs to be abundant: we need a lot of injection and production history, and the injection rates should also vary within a well. If the injection rates are all kind of flat, then we cannot find the correlation, and CRM will not be useful for us. And if we have bottomhole flowing pressure, then the injection and production data plus the bottomhole flowing pressure give us a sense of how the injector actually influences the producer. The optimization algorithm, written simply in regression mode, uses the SciPy optimization package, and it gives us an estimation of the connectivity factors from injector to producer, which are highlighted in red in this formula. Going further, the workflow is quite simple. We have production and injection data; we have PVT data to convert the data into reservoir conditions; we have pressure data, which is flowing bottomhole pressure or wellhead pressure, or if you want you can use a reservoir pressure estimation to have a trend of the reservoir pressure and possible pressure sinks; and we also have locations, where for a horizontal well we use the location of the middle of the horizontal section. All this input goes into the capacitance resistance model, which is also a Spotfire dashboard with a bunch of Python scripts.
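To illustrate the regression idea, here is a toy bounded least-squares fit of connectivity factors with SciPy; real CRM formulations also include time constants and bottomhole-pressure terms, which this sketch deliberately leaves out, and all rates are synthetic:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy CRM-style regression: producer rate as a weighted sum of injector
# rates, q(t) ~ sum_i f_i * I_i(t), with connectivities bounded in [0, 1].
rng = np.random.default_rng(1)
inj = rng.uniform(50, 150, size=(365, 3))        # 3 injectors, daily rates
true_f = np.array([0.6, 0.3, 0.05])              # "true" connectivities
prod = inj @ true_f + rng.normal(0, 2, 365)      # one producer, with noise

res = lsq_linear(inj, prod, bounds=(0.0, 1.0))   # bounded least squares
print("estimated connectivity factors:", res.x)
```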
And the result of this CRM work is injection allocation factors matched with liquid rates. After that, these matched interwell connectivities go into the production-injection optimization model, which is used for the waterflooding optimization; it's recommended to use them this way. In summary for CRM, I would say that it's quite good for pattern waterflooding; the production and injection data should be abundant and varying; and the comparison with full-physics simulation shows good convergence, while CRM is fast to update. But if you have a different view of the geology, then the simulation could differ from CRM, so you need to be careful when you apply a decision based on CRM or based on simulation: they should converge. Also, it was applied for some fields as a pilot, and the resulting interwell connectivities were used for the production-injection optimization. That is the summary for CRM, and the last use case I would like to present is production-injection optimization. Production-injection optimization is a short-term optimization based on a data-driven approach. It's not replacing the full-physics full-field reservoir model, but complements it, in the sense that we get results in a much shorter time. As with CRM and the reservoir pressure interpolation, the full-physics reservoir models are not very efficient for the short term, and a water-flooded reservoir demands continuous decisions to optimize production and injection. Optimizing production and injection means that the injected water volume and the produced volume should be balanced: we need to inject as much as we can produce, or produce as much as we can inject, to avoid over-injecting or a pressure sink in the reservoir. This optimization focuses on balancing, per well and per pattern, the voidage replacement ratio, or VRR, which is better kept in the range of 1 to 1.2 in order to avoid pressure sinks and do the pressure maintenance effectively. The proposed solution is a fast optimization based on a data-driven approach, using a regression algorithm based on the SciPy library, and the results are used in a short-term, one-to-three-month prediction mode. This is the optimization model workflow itself. We have the injection allocation factors from CRM, which I mentioned earlier; we have potential rates; and we have targets, which come from the simulation, for the field, for a sector, or for a reservoir. We also have constraints: you can provide facility constraints, or a reservoir constraint, a reservoir target or rate, a field target or rate. It can work with different reservoirs as well, and simultaneously with several fields if you have one facility unit for those fields, and you can add any constraints you need; currently we have about 10 or 20 constraints for each model. This runs as a production and injection optimization, in order to get balance per producing pattern. In the example, you can see that previously the VRR was distributed from 0.3 to 3, which is quite an odd distribution. After optimization, you see these red bars: this is the optimized VRR per well, and you can see it's more evenly distributed and closer to one. The target per well is 1 or 1.1 in our case.
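A toy sketch of such a balancing problem with SciPy's constrained minimizer; the rates, the facility limit, the VRR target, and the choice of SLSQP are illustrative assumptions, not the deployed model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy balancing problem: choose injection rates so each pattern's voidage
# replacement ratio (VRR = injected / produced, at reservoir conditions)
# approaches a target of ~1.1, subject to a facility water limit.
prod = np.array([900.0, 1200.0, 700.0])    # produced reservoir volumes/pattern
target_vrr, facility_limit = 1.1, 3000.0   # hypothetical values

def objective(inj):
    return np.sum((inj / prod - target_vrr) ** 2)   # penalize VRR misbalance

cons = ({"type": "ineq", "fun": lambda inj: facility_limit - inj.sum()},)
bounds = [(0.0, 1500.0)] * 3                        # per-pattern capacity
res = minimize(objective, x0=prod.copy(), bounds=bounds,
               constraints=cons, method="SLSQP")
print("optimized injection per pattern:", res.x)
```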
So this is the objective function which we're using for this production-injection optimization: we calculate the profit, and we calculate the offtake and injection, which is the parameter for balancing the VRR, with the given constraints and given targets. In summary, I would say that this type of optimization model has been deployed in several fields. It uses a method with constraints and box bounds, one of the methods available in the SciPy optimization package, and it's also deployed on our Spotfire server. A good thing is that the user has a web interface and can work with it remotely. The user can also upload the input data himself by using files: he uploads them to the server, the server runs these Python scripts, and it gives a result in no time, just a few seconds, for example, to run 112 wells with 5 or 6 reservoirs simultaneously. So it provides optimization of the reservoir operation, which leads to increased production rates and an increased recovery factor for the oil field. Okay, that's it from my side. I think I have told you what I wanted to. Thank you very much for the invitation. -Yeah, well, thank you very much for your presentation as well. I really like that summary slide, and the interaction with the end users, that you provide scripts and functionality to the end user. Is this something you typically do from the Center of Excellence, where you are sitting, providing workflows and ideas out to the different business units in Adnoc? -Yes, actually, this workflow we are providing to the operating companies from -our center. -That's very interesting. I've been lucky enough, as I mentioned earlier today, to visit and see how you work there, and it's very fascinating to see that Center of Excellence and how it's deploying out to the business unit sites and doing reviews. So thank you very much for your presentation. And we're almost at the end of today's session. If I can go back to my PowerPoint presentation for one second, I will share my final slide, the wrap-up slide of this presentation. And here we go. This has been a very, very good session. The plan was to first have the high-level sessions in the introduction part of today's presentations, before we went into more technical details. We see that this is an area that will, as I said, change exponentially over the next months and years, and we see, based on the feedback we get and the number of participants that have been joining us through this entire session, that we will definitely continue this. We plan to have a new event, either a webinar or maybe a physical meeting, where we can build this one step further moving into 2024. And this will only be as much of a success as the contributors to the sessions, so based on what we have done today, and I will definitely follow up on it as well: if any of you have ideas, or anything you'd like to discuss and present in a forum like this one that will be covered globally, we will be more than happy to add you to these kinds of presentations. You see there's an email there that you can use to join in. I want to thank each and every one of you that has been with us this day, and especially the presenters: a very warm thank you to all of them. I'm looking forward to meeting all of you again very soon.
So thank you very much.