Chemical Code: a blog about simulation and programming

Process Simulation in the Cloud: Opportunity or a Castle in the Air?

Introduction

Recently the DECHEMA e.V. published the position paper Process Simulation – Fit for the future?. This document was written by the ProcessNet working committee Process Simulation, Process Synthesis and Knowledge Processing (part of the larger PAAT community), of which I am a member. This group represents a cross-section of German chemical industry and academia in the field of computer-aided process engineering.

The paper is targeted at software vendors, academic program designers and decision makers in EPC and owner-operator companies in the chemical and pharmaceutical industry.

Although my contribution to the whitepaper was rather small, I am very happy with the result and would like to use this article to share my personal thoughts on some of the addressed topics.

The position paper is driven by the following question:

Process simulation is already one of the most important tools in process development, operation and optimisation in the chemical, biotechnology and pharmaceutical industries. But are the existing tools sufficient to meet the demands of the digitalised industry?

If you know my writings, you will remember that this topic is very dear to my heart. There is a lot I could write about this position paper and I will publish smaller articles as soon as I can find time to work on them. Today, I will be talking about an important topic: cloud-native simulation.

Cloud-native process simulation

One of the topics that, in my opinion, was not formulated explicitly enough is the need for cloud-native software. While several vendors already claim to sell cloud-ready software, we are still far away from truly cloud-native process simulation software. Just being able to run the Windows fat client on a virtual desktop is not cloud computing in my book.

For me, cloud-native software offers the following features:

A Use-Case Idea

For use-cases, I currently place the most hope in steady-state soft-sensors, as the technology is mature enough to “survive” in the wild and give results reliably. These plant-wide soft-sensors would be used for calculating mass and energy balances as well as equipment internals such as temperature and pressure profiles in columns or reactors. Data reconciliation may be used to improve the plant-to-model match.
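To illustrate the data reconciliation step, here is a minimal sketch of classical weighted least-squares reconciliation for linear balance constraints. The flowsheet (a splitter with F1 = F2 + F3), the measured values and the standard deviations are all hypothetical; the closed-form solution shown is the standard one for minimising the weighted measurement adjustments subject to A x = 0.

```python
import numpy as np

def reconcile(x_meas, A, sigma):
    """Weighted least-squares data reconciliation.

    Minimises (x - x_meas)^T W (x - x_meas) subject to A @ x = 0,
    with the weight matrix W = diag(1 / sigma**2). For linear balance
    constraints this has the closed-form solution used below.
    """
    S = np.diag(sigma ** 2)                      # inverse weights, W^-1
    r = A @ x_meas                               # balance residual of raw data
    K = S @ A.T @ np.linalg.inv(A @ S @ A.T)     # correction gain
    return x_meas - K @ r

# Hypothetical splitter: F1 = F2 + F3, so the balance row is [1, -1, -1]
A = np.array([[1.0, -1.0, -1.0]])
x_meas = np.array([100.5, 60.2, 40.8])           # measured flows, kg/h
sigma = np.array([1.0, 1.0, 1.0])                # measurement std. deviations

x_rec = reconcile(x_meas, A, sigma)
print(x_rec)                                     # balance now closes: A @ x_rec ≈ 0
```

With equal measurement uncertainties, the 0.5 kg/h imbalance is simply spread evenly over the three streams; unequal sigmas would push more of the correction onto the less trusted instruments.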

Of course, steady-state detection would be needed for such soft-sensors, but generally accepted methods exist to scan a time series for stationarity and allow only those data points to be sent to the simulation server.
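A very simple stationarity check of this kind can be sketched as follows: fit a linear trend to a moving window and declare the window steady if both the slope and the residual scatter are below tolerances. The tolerance values and signals here are made up for illustration; production systems typically use more robust statistics (e.g. the Cao–Rhinehart R-statistic).

```python
import numpy as np

def is_steady(window, slope_tol=0.05, std_tol=0.5):
    """Flag a time-series window as stationary if its linear trend and
    its residual scatter are both below (hypothetical) tolerances."""
    t = np.arange(len(window))
    coeffs = np.polyfit(t, window, 1)            # [slope, intercept]
    residuals = window - np.polyval(coeffs, t)
    return abs(coeffs[0]) < slope_tol and np.std(residuals) < std_tol

rng = np.random.default_rng(0)
steady = 80.0 + rng.normal(0.0, 0.2, 60)         # flat, noisy signal
ramp = 80.0 + 0.5 * np.arange(60)                # transient ramp-up

print(is_steady(steady))                         # True
print(is_steady(ramp))                           # False
```

Only windows that pass such a filter would be forwarded to the simulation server, keeping the steady-state model from being fed transient data.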

The results of these models could be used to help plant decision makers improve operating conditions (manually, I am not talking about RTO here), or to provide virtual sensor inputs for predictive maintenance models. So far the business opportunities are not very clear to me, which makes it even more important that users can experiment without huge upfront costs or commitment.

Granted, some companies already offer model deployment in one way or another, but proprietary technology, data formats and restrictive licensing schemes are, in my opinion, the biggest hurdles to using those features on a wider scale.

Summary

So, what are your thoughts on the position paper, and by extension, my thoughts?

Feel free to contact me on my LinkedIn profile to share your thoughts with me. I am always open to a chat about process simulation.