Content Team: Professor Urbas, you are currently coordinating KEEN, the AI incubator project. What exactly is behind it?
Prof. Leon Urbas: Excitement is gathering around the topic of artificial intelligence, which makes it easy to lose sight of what is actually possible. In a sector as complex as the process industry, only a few people can judge objectively which AI developments will be worthwhile and how they can be implemented. With KEEN, we are looking to create AI tools that enable companies, for example, to increase plant productivity or optimize their processes more quickly.
Like a plug-and-play AI solution installed by the plant operator to reduce employees’ workload?
Unfortunately, we are a long way from that. Rather, we are developing numerous specialized solution approaches that we can generalize over time and then link to form integrated plant processes. To put this in context, think about autonomous vehicles, which are becoming more prevalent in the automotive industry. They are controlled by AI, and although the software is expensive to develop, the same software can be used in any vehicle. No investment is required for duplication, so the economy of scale is immense. The situation is different in the process industry. Many plants' processes are similar, but every plant is different, because plant design, process execution, and the manufactured products are all closely linked. Today's AI algorithms are therefore one-off developments, and expensive. We want to change that.
In which areas can AI support companies in the process industry?
We are working on several topics. The most prominent are the modeling of processes, product properties, and plants; engineering; and the realization of self-optimizing plants. Let's imagine you are designing a new steam cracker. You must first be able to analyze and evaluate all of the technical risks of the operation. After all, you don't want to throw away half a billion euros on a safety concept that cannot be approved. With KEEN, we will show how risk analysis processes can be improved and adapted quickly using current pattern recognition and self-learning AI methods.
Another example is during the next plant turnaround. Should you reduce plant output to hold out safely until the next planned shutdown? Or would you rather remain in production during a good period and accept unplanned downtime? These are difficult questions to answer due to the large number of factors and variables. Operators will try to draw conclusions from experience. But can the previous situation really be compared with the current one? Do you know all of the factors? AI incorporates similarities, anomalies, and simulations to make recommendations.
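The experience-based reasoning described here can be illustrated with a nearest-neighbour sketch: compare the current plant state with past situations and recommend the action that worked best in the most similar ones. Everything below is invented for illustration, including the state variables, the historical records, and the action labels; it is not the KEEN project's actual method.

```python
import math

# Hypothetical historical snapshots of a plant state (temperature, pressure,
# flow rate) together with the decision that proved reasonable in each case.
history = [
    ((350.0, 12.1, 80.0), "hold output"),
    ((362.0, 12.8, 76.0), "reduce output"),
    ((348.0, 11.9, 82.0), "hold output"),
    ((370.0, 13.5, 70.0), "reduce output"),
]

def distance(a, b):
    """Euclidean distance between two plant-state vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend(current, records, k=3):
    """Vote among the actions taken in the k most similar past situations."""
    nearest = sorted(records, key=lambda entry: distance(current, entry[0]))[:k]
    votes = {}
    for _, action in nearest:
        votes[action] = votes.get(action, 0) + 1
    return max(votes, key=votes.get)

print(recommend((365.0, 13.0, 73.0), history))  # -> reduce output
```

The operator's skeptical questions map directly onto this sketch's weaknesses: the recommendation is only as good as the similarity measure and the coverage of the historical records, which is why the interview stresses data provenance and simulation alongside pure pattern matching.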
If AI is telling me to lower power to 90%, should I simply do what is asked of me?
Absolutely not! But you raise an important point. Suppose the algorithm provides three options for action but you have no background knowledge or context – then you aren't making an informed decision, but a random one. In the process industry we need explainable AI: algorithms must provide transparency on how they arrived at a recommendation. Purely from a legal point of view, this is vital. Let's say you build a plant that optimizes itself independently. In order to receive approval, you must be able to prove how the optimization will take place.
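One way to make a recommendation transparent, in the spirit of the explainability requirement described here, is to use a model whose output can be decomposed into per-input contributions. The linear scoring model below is a minimal sketch of that idea; the feature names, weights, and threshold are all assumptions made up for this example, not anything from KEEN.

```python
# Invented weights for an illustrative "should we reduce output?" score.
WEIGHTS = {"temperature_excess": 0.6, "vibration_level": 0.3, "catalyst_age": 0.1}

def explain(features, weights, threshold=1.0):
    """Return a recommendation plus each input's contribution to the score.

    Because the score is a weighted sum, every recommendation can be traced
    back to the inputs that drove it - the transparency the interview asks for.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    action = "reduce output" if score > threshold else "hold output"
    return action, contributions

state = {"temperature_excess": 2.0, "vibration_level": 0.5, "catalyst_age": 3.0}
action, why = explain(state, WEIGHTS)
print(action)  # -> reduce output
for name, c in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {c:+.2f}")  # temperature_excess dominates the decision
```

Real plants would of course need far richer models, but the principle carries over: a recommendation the operator can interrogate ("which inputs pushed the score over the threshold?") supports the human-AI dialogue described below far better than an opaque verdict.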
We are noticing a trend. It’s unlikely that people will simply choose one of the proposed options for action. Plant operators want to make good decisions for which they can take responsibility. That’s why there must be dialogue between humans and AI. In normal conversation, people ask for explanation if something hasn’t been fully understood. That is just one of the reasons artificial intelligence will not replace human intelligence. In the best-case scenario, it will serve as cognitive reinforcement – I am convinced of that.
What obstacles are you currently facing?
Firstly, you can only achieve the necessary transparency if you know the origin and properties of the data on which your analysis is based. We have room to improve in this area. Although companies retain data from earlier processes, they are hardly ever linked to the properties of measuring instruments.
Secondly, sensors for data acquisition are expensive, and I don't mean the physical component itself. Change management, the engineering around it, and documentation in a digital twin can come at a high cost. All of this requires a lot of effort, and if it isn't done correctly, all of the gathered data could be rendered useless.
Finally, individual applications are driven by performance. The results of algorithms must be available in real time. If they are not, a problem could be overlooked, possibly leading to a safety incident or malfunction. Therefore, we have to build scalable systems.
Will artificial intelligence pay off for the process industry?
Absolutely. And the Federal Ministry for Economic Affairs in Germany (BMWi), which supported us in the competition, agrees. We are convinced that we can build on our good economic performance. The process industry is the third-largest economic sector in Germany. By continuing to research and engage with AI, we can better understand data transfer processes that have not yet been reflected upon. In terms of global competition, it will be crucial to turn our inventions into innovative applications quickly and sustainably. The members of the KEEN initiative are convinced that open innovation processes are best suited for this – even though the change process won't be easy for the process industry.