PiControl Solutions

Multivariable Stepless Closed-Loop Technology for Model Predictive Control (MPC)


Steve Howes & Ivan Mohler, Houston, Texas-USA, [email protected], [email protected]

ABSTRACT

Model Predictive Controllers (MPC) are widely used in the chemical, petrochemical, paper, power plant and oil refining industries.  The shapes of the MPC models are the heart and soul of the MPC system.  With time, due to aging equipment, hardware changes, changes in process, operating and economic conditions, and process nonlinearities, dynamic models can change significantly, causing the MPC control quality to deteriorate.  Unfortunately, when MPC models change, or were wrong to begin with, it is difficult to fix them and identify correct models, since several MVs (manipulated variables) are associated with various CVs (controlled variables).  Therefore, MPCs are often turned off, with subsequent loss of benefits and profits.

Step-tests and conventional methods of dynamic model identification involve making small step-tests on the slave PID setpoints or valves.  Most MPC models are based on making small steps of about 1-3% of the current MV values.  If the step changes are too large, they may upset the process, and the step cannot be held for long, which results in an uncertain process gain in the model.

Many MPC system identification methods do not work well with closed-loop data containing no step tests at all, or with slave PIDs or MVs in cascade mode.  They also do not work well with unmeasured disturbances and noise.

When the MPC is running, it uses the models that were identified using the small steps to calculate the trajectory for various MVs in closed-loop mode.  With MPC active, the move sizes are much bigger.  Also, several MVs are changed simultaneously in order to keep the CVs at their targets, which is also different from the model identification phase, where typically only one MV is moved at a time to avoid complications due to correlations.  These differences lead to model prediction errors and are often the root cause of poor control in many MPC systems.

In this chapter we show MPC system identification technology which can identify multivariable MPC models using data with slave PIDs in cascade mode or completely closed-loop data from MPC systems which are running and making moves to slave PID controllers.  This technology works without the need for intrusive and time-consuming plant step-tests.  It works well amidst fast random noise, medium frequency drifts and slow unmeasured disturbances.  The MPC technology detects and isolates the pattern of unmeasured disturbances while identifying true process models and it allows fixing all known model parameters and reducing system identification uncertainty.

1. Introduction

Model Predictive Control was born in the early 1980s, thanks to Dr. Charles Cutler, who developed the DMC (dynamic matrix control) algorithm and formed the company DMCC (dynamic matrix control corporation).  The birth of DMC was a result of Cutler’s process control experience with Shell Oil Corporation, his ingenious creativity and his entrepreneurship.  Prior to DMC, multivariable controllers (MVCs) with many MVs (manipulated variables) and CVs (controlled variables) were designed using numerous tags inside a DCS (distributed control system).  These DCS tags comprised what is called TAC (traditional advanced control), ARC (advanced regulatory control) or DCS-based advanced control.  The DMC approach was a lot cleaner and more concise than TAC.  Many oil refineries and large-scale olefins plants involved a large number of interacting variables, and developing an MVC for them was much easier and better than TAC.  The invention of DMC may be regarded as one of the most important events in the history of industrial process control.  DMC and other competitor MVCs have saved billions of dollars worldwide in many chemical plants through throughput maximization, utilities minimization, reducing waste products and byproducts, reducing the number of unplanned shutdowns and securing many other tangible, intangible and monetary benefits.

2. The new field of MPC

When DMC came around during the early 1980s, there was no other competing or similar tool.  DMC had total monopoly for almost a decade.  DMC acceptance and usage grew exponentially from the mid-1980s onwards.  With tremendous success reported at plants using DMC, other process control companies saw the huge potential opportunity in the MVC arena.  Subsequently, many other MVCs were developed by various DCS and process control companies.  During the 1990s, Honeywell released their MVC called RMPCT (robust model predictive control technology).  Foxboro released Connoisseur.  Emerson released Predict-Pro.  Dot Products (now defunct) released STAR controller.  Treiber Controls (now defunct) released Treiber Controls Software.  The field suddenly became super-crowded with 5-10 MVC vendors.  This type of MVC then got a new name – MPC, which stands for Model Predictive Control.    

3. Clarifying different technical names in the MPC field

One of the most confusing things in the MPC arena for a new engineer or manager is the large number of names and acronyms.  Let us clarify these acronyms and their detailed meaning.  MVC stands for multivariable controller and includes all software applications that provide MIMO (multiple-input and multiple-output) control capability.  How the MIMO control is provided is not important for being considered a part of MVC.  A DCS-based application comprising 100 DCS tags, consisting of multiple PID control loops, high and low selector constraint overrides, feedforward blocks and other calculation and custom tags, can be called an MVC.  A DMC controller is also a MIMO controller and also falls under MVC.  DMC does not use DCS tags but uses its own matrix-based calculations or some proprietary calculations, providing simultaneous control of several CVs by manipulating several MVs.  Similarly, other products similar to DMC can also be classified under MVC.  So, to summarize, any MIMO system controller can be considered an MVC.  The other commercial products listed above – RMPCT, Predict-Pro etc. – are all MIMO controllers and can be considered MVCs.

What is an MPC?  All products listed above – DMC, RMPCT etc. – are MVCs, but they also fall under the MPC bucket.  MPC is a subset of MVC, just as DCS-based APC is a subset of MVC.  The uniqueness of MPC is that it does not rely on conventional PID tags, feedforward tags or custom logic; instead, it has its own algorithm for providing closed-loop control action.  MPC looks at the current CV and MV values, also looks at the past history of MV moves made and the CV predictions, and then, based on some error minimization algorithm, calculates and displays the new trajectory of the MV moves.

Any MVC that does not use PIDs or feedforwards but possesses self-contained model prediction calculations and closed-loop control capability is an MPC.

The differences between MPC and TAC are shown in Figure 1.

Figure 1. Differences between TAC and MPC

4. Why MPC is so important and popular

All commercial MPCs provide powerful capabilities missing in the TAC (traditional advanced control) approach.  These capabilities have contributed to the amazing and astronomical increase in the number of MPC applications worldwide since the mid-1980s.  The words “amazing” and “astronomical” are not by any means an exaggeration because, with a plethora of software and hardware products in the process control market today, and with budget restrictions and skeptical managers, it is not easy to sell new products.  But the success and benefits of the early DMCs were so resounding that process control managers almost felt left behind, or like failures, if they did not install at least a few DMCs in their control rooms.  The success and benefits, especially the monetary benefits, were so convincing and widely published at conferences, process control trade shows, magazines and webinars that the exponential increase in MPC deployment was no surprise.

5. Why many years of TAC did not produce MPC success

TAC knowledge has been well known since the 1950s.  When digital computers penetrated the industry after the 1970s, it became possible to implement TAC inside an old industrial computer like those available from DEC (Digital Equipment Corporation), Data General, IBM and Perkin Elmer.  Most of these computer companies were at one time big but have since disappeared or become defunct.  Why could TAC inside a DCS or an industrial computer not gain the same fast popularity as DMC and the other MPCs?  Here are the interesting reasons:

A) Academic colleges focus on the Laplace domain, the frequency domain, stability criteria and a lot of heavy math, but do not cover time-domain methods for examining cause-and-effect data (e.g., from an Excel file) to calculate dynamic models in the form of transfer functions.  As a result, new control engineers entering the control room are ill equipped with the skills and tools for quickly and correctly identifying transfer functions from process data.

B) Many chemical processes today are highly efficient compared to their older plant designs made prior to the 1970s.  New plants have high levels of material balance recycling and heat integration.  A heater on one distillation column’s feed serves as the cooler on another distillation column somewhere else.  The Pinch Technology from Bodo Linnhoff that became popular during the 1980s resulted in many very thermodynamically efficient plant designs, but at the expense of increased difficulty in operating them.  In modern plants, when a single MV is perturbed, there is a cascade effect of complex ripples and repercussions.  These effects make the identification of transfer functions difficult, as the shapes of the transfer functions are often not first order.  MPC products come with built-in model identification software that can identify the dynamic models between all MV and CV pairs.  This capability is missing in the office and control room environment if TAC is used.  Using TAC, control engineers have struggled with tools like MATLAB, which are no doubt powerful but difficult to use.  A study revealed that most colleges still use MATLAB from MathWorks, though some have switched to PITOPS-SIMCET from PiControl Solutions.  The same study showed that very few control engineers in chemical plant control rooms use MATLAB.  Consequently, without MPC software, there is no easy way for plant personnel to identify transfer functions between MV and CV pairs.

C) The above is only one of the challenges in a TAC application.  There are more: even if the control room personnel could identify the transfer functions, they need to have the skills to correctly design the TAC application with the correct types of DCS tags and get it to work correctly.  Many concepts like PV tracking, setpoint or output tracking, constraint override selectors versus simple signal selectors, and unselected versus selected DCS tags are not covered in academic colleges and not clearly explained in DCS manuals, which still show equations in the more difficult Laplace domain rather than the simple time domain.  Plant personnel are unable to design TAC schemes correctly or with confidence.  Many such half-hearted attempts at TAC have resulted in equipment shutdowns or in tripping the whole plant itself.  Hence, TAC applications have seen a decrease in popularity and MPC applications have seen an increase in popularity.  MPC provides model identification and an integrated method for multivariable closed-loop control.  The MPC implementation is much more canned and packaged and, with a little help, easier to implement than TAC, especially when the number of MVs exceeds 10 and the number of CVs exceeds 15.

6. Dynamic model characterization

An important component of any MPC is the identification of dynamic models between all MV-CV pairs.  Dynamic models are characterized using two formats: transfer function models and step response coefficient models.  DMC uses step response coefficients, whereas most other vendors use the transfer function model format.  The transfer function format typically allows zero, first and second order, along with some “beta” factors to allow fitting dynamic responses with inverse response, an initial fast rise or an initial overshoot.  The transfer function models typically have no more than five parameters – process dead time (time delay), process gain, one or two time constants and one or two beta factors.  Step response models define the shape of the model curve using individual X, Y coordinates, where X is the time after a unit MV step change and Y is the CV value.  See Figure 2 for an illustration.

Figure 2. Step response coefficient model

Notice there are 120 step response coefficients that define the shape of this dynamic model, which has a dead time of 10 minutes, a process gain of 0.2 and a first order time constant of 20 minutes.  The first ten coefficients are all zero, as this is the dead time.  After the dead time elapses, the step response coefficients start increasing (this is the dynamic shape of the model) and then eventually settle around 0.2, which is the process gain.  With step response coefficients, there is complete flexibility in defining the shapes of dynamic models; anything can be easily depicted, including the ugly model shapes shown in Figure 4.
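The relationship between the transfer function parameters and the step response coefficients in Figure 2 can be sketched in a few lines of Python.  This is a hypothetical illustration of the conversion for a first-order-plus-dead-time model, not vendor code:

```python
import math

def fopdt_step_coeffs(gain, tau, dead_time, n, dt=1.0):
    """Step response coefficients of a first-order-plus-dead-time model.

    Coefficient i is the CV change i*dt time units after a unit MV step:
    zero during the dead time, then gain * (1 - exp(-(t - dead_time)/tau)).
    """
    coeffs = []
    for i in range(1, n + 1):
        t = i * dt
        if t <= dead_time:
            coeffs.append(0.0)
        else:
            coeffs.append(gain * (1.0 - math.exp(-(t - dead_time) / tau)))
    return coeffs

# The model in Figure 2: dead time 10 min, gain 0.2, time constant 20 min
coeffs = fopdt_step_coeffs(gain=0.2, tau=20.0, dead_time=10.0, n=120)
print(coeffs[:10])   # the first ten coefficients are zero (dead time)
print(coeffs[-1])    # the last coefficient has settled near the gain of 0.2
```

Any model shape, including the "ugly" ones of Figure 4, could be stored the same way: as a list of 120 numbers, one per time step.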

7. Simplification of MPC models

Characterization of dynamic models with complex shapes and strong inverse responses is difficult to do using TAC but can be done more easily in MPC.  Most chemical processes can be characterized by the transfer function shapes shown in Figure 3:

Figure 3. Common transfer function formats in most chemical processes

One of the selling points used by DMCC (dynamic matrix control corporation) to convince plant management to go with MPC, specifically DMC, was that DMC can handle even very high order “ugly” transfer functions as shown in Figure 4.  It is true that transfer functions like those in Figure 4 can be easily represented in MPC, especially in DMC, which uses step response coefficient models instead of transfer functions.  But, based on 40 years of industrial MPC deployment and dynamic model examination, such transfer functions are extremely rare and hence not really worth considering.  It is almost always better to simplify the model by disregarding the complex and inverse shape and using a simple first order or second order transfer function model.  The justification for this simplification is simple: the acceptable rate of change of most MVs in a plant is restricted by the chance of upsetting the plant with extremely jerky moves in the MVs.  Consider the complex model on the right side of Figure 4 and suppose we need to increase the target of that CV.  Even if the model as shown in Figure 4 were correct and fit the real process perfectly, to use such a model for reaching the new CV target smoothly, the MV trajectory would have to be jagged and jerky, might upset the downstream units, and would almost always be a bad decision.  MPC vendors whose MPCs use step response coefficients instead of transfer functions still peddle this as an important argument for why the customer should use their MPC, but the reality is that it is better to use lower order models, mostly first and second order transfer functions (for settling type processes) and zero order (for ramp type/integrating type processes).  The benefit of fitting complex shapes is not really important in practice, as there are always restrictions on how fast the MV can be moved.

Figure 4. Complex and "ugly" high-order transfer functions

8. MPC overview

In this section, let us review how MPC works.  There are three main steps in the MPC calculations:

Step #1: Measurement and prediction

Step #2: Feedback correction

Step #3: Time shifting

Consider a flow (MV) to pressure (CV) dynamic model.  Though MPCs are used on MIMO systems, for ease of explanation we will use this SISO (single-input single-output) case.  The initial flow is 5 and the initial pressure is 10 (the engineering units are not important here).  We want to increase the pressure to 30.  The dynamic model has very little dead time, so assume zero dead time.  The process gain is 10.  Based on the shape of the dynamic model, we can write the step response coefficients as 2, 6, 8 and 10.

This means that upon a step change in the flow, after 1 second the pressure increases by 2, after 2 seconds it increases by 6, after 3 seconds it increases by 8, and then it finally settles with a maximum increase of 10.  A unit change in flow changes the pressure by 10 after 4 seconds.  We are using the second as the time unit; the same explanation is valid for fast processes settling in milliseconds or very slow processes settling in hours.

See Figure 5 – based on the data specified above, if the flow is changed by 2, then based on the shape of the dynamic model (2, 6, 8 and 10) we can predict the future trajectory to be (14, 22, 26 and 30).

Figure 5. Pressure prediction trajectory

However, after one second has passed (one time unit in this example), the measured value of the pressure is not 14 but happens to be 12 instead.  We predicted 14 a second ago, but after the second elapsed the measured value turned out to be 12, so we have a prediction error of 2.  We therefore need to apply a correction, as shown in Figure 6.

Figure 6. Apply correction to account for prediction error

So, instead of the prediction horizon trajectory being 14 (after 1 second), 22 (after 2 seconds), 26 (after 3 seconds) and 30 (after 4 seconds), the trajectory is now adjusted to 12 (after 1 second), 20 (after 2 seconds), 24 (after 3 seconds) and 28 (after 4 seconds).  Finally, the third step is the shifting of the time axis by one second (one time unit), as one second has elapsed; the trajectory now looks as shown in Figure 7.
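The three steps above, using the numbers from this example, can be sketched in a few lines of Python.  This is a simplified illustration of the bookkeeping only (the end-of-horizon padding convention is an assumption), not an actual MPC implementation:

```python
# Step response coefficients of the flow-to-pressure model (per unit MV step)
coeffs = [2.0, 6.0, 8.0, 10.0]

# Step #1: Measurement and prediction - the flow moves by 2 units
initial_pressure = 10.0
flow_change = 2.0
prediction = [initial_pressure + flow_change * c for c in coeffs]
# prediction: 14 after 1 s, 22 after 2 s, 26 after 3 s, 30 after 4 s

# Step #2: Feedback correction - one second later the measured pressure is
# 12, not the predicted 14, so shift the trajectory by the prediction error
measured = 12.0
error = measured - prediction[0]            # -2.0
corrected = [p + error for p in prediction]  # 12, 20, 24, 28

# Step #3: Time shifting - drop the point that has elapsed and pad the end
# with the settled value, moving the whole horizon forward by one second
shifted = corrected[1:] + [corrected[-1]]
print(prediction, corrected, shifted)
```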

Figure 7. Shift in Time Axis

The above illustration was based on a step change in the flow.  Step changes of this nature are called open-loop steps.  If an MPC application were active, then the MPC's built-in optimizer would try to reduce the error between the measured CV values and the target CV values, and based on the deviation between the measured CV and the target, a move plan would be generated for the MVs to make the CV reach and stay at the CV target (desired) limit.

9. New MPC advances and learnings

MPC has been in use since the early 1980s; industry has seen 40 years of MPC applications and usage.  Nothing in the world stays constant; everything keeps evolving and growing, and so does MPC.  This section describes some important developments and learnings in the MPC field.

Violation of the Linear Superposition Rule

Process control theory teaches that the outputs from multiple models can be linearly added to predict a CV.  See Figure 8.

Figure 8. The process control rule of linear superposition
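The superposition rule of Figure 8 says that the predicted CV change is simply the sum of the contributions from each MV's model.  A minimal sketch, using hypothetical steady-state gains only (real MPCs apply the same addition to full dynamic responses):

```python
# Hypothetical steady-state gains of two MV-to-CV models
gain_mv1 = 0.5
gain_mv2 = -1.2

def predicted_cv_change(d_mv1, d_mv2):
    """Linear superposition: each model's contribution to the CV is added."""
    return gain_mv1 * d_mv1 + gain_mv2 * d_mv2

# MV1 moves up by 4 and MV2 up by 1: contributions 2.0 and -1.2 add to 0.8
print(predicted_cv_change(4.0, 1.0))
```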

During the model identification phase of an MPC implementation, step tests on the MVs, on the order of 1-3% of their prevailing values, are typically done independently, one MV at a time.  Moving several MVs at the same time can lead to correlated moves and uncertainty in the model prediction, so during model identification by step testing, typically only one MV is moved at a time.  If the MV value is 100, then a bump to 101, 102, 99 or 98 is typical.  See Figure 9 for a typical move plan on the MVs for dynamic model identification.  If the bump is much bigger, then the process can be upset, products could go off-spec and the step cannot be held for long.  A big move will soon need to be accompanied by a big move in the opposite direction in order to keep the process variables in safe and desired ranges.  Consequently, with big MV moves, you cannot see the CV settle, so large steps like 100 to 110 on an MV are not done.

Models are identified with the move sizes shown in Figure 9.  Notice MV1 has been moved from 100 to 102, then to 98 and then finally to 97.  All these moves are small, will not upset the plant and allow holding for enough time to see all the CVs settle to their new steady state values.

After the MPC is active, in response to a change in CV targets or a major change in a disturbance (feedforward) variable, and based on all the model gains and shapes, the MPC may decide on a trajectory as shown in Figure 10.  Here MV1 moves from 100 to 145 and MV2 from 200 to 233.  These move sizes are much bigger than the move sizes during model identification.  Secondly, during model identification only one MV was moved at a time, but when the MPC is on, several MVs are moved simultaneously.  Research shows that the shapes of models identified using small MV steps, with one MV at a time, can differ significantly (over 20% in some cases) from the behavior seen when MV moves are much larger and simultaneous.  Significant model prediction errors can result, leading to poor control quality and subsequently reduced MPC benefits and possibly unhappy control room operator feedback.

Figure 9. Typical small step tests conducted during MPC project for model identification
Figure 10. Typical MPC response in closed-loop mode in response to a CV change or disturbance

COLUMBO software from PiControl reads the existing MPC models and, based on closed-loop data with the MPC on, refits the models.  COLUMBO is able to overcome model errors due to violation of the rule of superposition.  It is able to spot the differences between models based on small steps with one MV at a time and models based on the MPC running with much larger simultaneous moves.  Many MPCs perform poorly because of using models based on small step tests.  The COLUMBO approach is a novel, major advance in MPC maintenance technology and can help to improve the MPC control quality, the expected benefits from an MPC project, and operator acceptance and feedback.

Removal of Correlated and Redundant CVs

Many MPCs do not work well due to the inclusion of several correlated CVs.  Let us discuss this problem using a simple distillation column example with one feed flow MV, a reflux flow MV and a reboiler steam MV.  An MPC is maximizing the feed to push against column flooding and the delta pressure across the column.  The MPC is minimizing both reflux and reboiler steam to push against the high limits of the overhead and bottoms impurities.  An ambitious but inexperienced control engineer typically adds several tray temperatures on the column body as CVs in addition to the overhead and bottoms purities.  The problem is that the temperatures and the purities are highly correlated.  The temperatures will have faster models than the purities.  Observation of the temperatures and product purities will reveal that it is impossible to go off-spec on the product purities if the column temperature profile is well controlled at a tight target.  Instead of complicating the MPC design with a crowded control matrix, it is often better to include only the most appropriate temperatures, representative of the column composition profile.  The temperature targets can then be automatically adjusted based on a model, outside the MPC, between the temperatures and the product purities.
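One practical way to spot redundant CVs of this kind is to check the correlation between candidate CVs in historical data before building the control matrix.  A sketch using NumPy, with hypothetical variable names and synthetic data standing in for a plant historian:

```python
import numpy as np

def find_correlated_cvs(data, names, threshold=0.95):
    """Flag CV pairs whose absolute correlation exceeds the threshold.

    data: 2-D array, one column per CV, one row per historical sample.
    Highly correlated pairs are candidates for removal from the CV list.
    """
    corr = np.corrcoef(data, rowvar=False)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                pairs.append((names[i], names[j], corr[i, j]))
    return pairs

# Synthetic example: a tray temperature and the overhead purity move almost
# in lockstep, while the column delta pressure is unrelated
rng = np.random.default_rng(0)
temp = 150.0 + rng.normal(0.0, 1.0, 500)
purity = 99.0 - 0.1 * (temp - 150.0) + rng.normal(0.0, 0.01, 500)
delta_p = rng.normal(0.0, 1.0, 500)
data = np.column_stack([temp, purity, delta_p])
print(find_correlated_cvs(data, ["tray_temp", "ovhd_purity", "delta_p"]))
```

Only one of each flagged pair needs to stay in the MPC; the other can be handled by a relationship outside the controller, as described above.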

Simplification of Strong Multidirectional MVs

In process control courses at colleges and in process control textbooks, the concept of decouplers is always taught after a lesson on feedforward control.  A decoupler application is shown in Figure 11.

Figure 11. Decoupler Application

The decoupler is a nice application to teach in a process control classroom, but most attempts to implement such a scheme in real plants on real processes are challenging and often fail.  The decoupler can work well only if the dynamic models are perfect.  But in real life, models are never perfect; there are nonlinearities, sticking control valves, unmeasured disturbances and many unknowns.  If implemented as a DCS-based APC application (TAC), model errors and changes in the models as a function of other parameters cause the decoupler tags to fight with each other, producing oscillations and poor control quality.

With the advent of MPC during the early 1980s, there was a renewed spark and hope that at last with MPC (since MPC considers all models in an integrated manner inside the controller matrix), MPC control for such decoupler applications would be far superior.

However, research and industrial MPC experience show that MPC control is also fragile when it comes to strong multiple decouplers, and any attempt to implement a strong multiple-way interaction can also be accompanied by interaction, fighting and oscillations, severely damaging the MPC control quality.

The trick here is to weaken or zero out the second path and rely on the most important or most dominant model; make the other path weak or just delete it completely.  Then, in the MPC controller tuning, allow one MV to move fast and control the most important CV or CVs tightly, and allow the second MV (part of the strong interaction) to move slowly.  This way, the interaction and fighting can be reduced.
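The idea of keeping only the dominant path can be sketched on a 2x2 steady-state gain matrix: compare each cross-coupling gain to the primary gain on the same CV and delete the weaker one.  The gains and the threshold below are hypothetical; this is an illustration of the rule of thumb, not a vendor procedure:

```python
def weaken_cross_couplings(gains, ratio_threshold=0.5):
    """Zero out the weaker cross-coupling gain in a 2x2 gain matrix.

    gains is [[g11, g12], [g21, g22]]: g11 and g22 are the primary MV-CV
    paths, g12 and g21 the cross couplings.  If one cross gain is much
    weaker relative to its primary path, delete it from the matrix.
    """
    g11, g12 = gains[0]
    g21, g22 = gains[1]
    # strength of each cross path relative to the primary path on that CV
    rel12 = abs(g12) / abs(g11) if g11 else float("inf")
    rel21 = abs(g21) / abs(g22) if g22 else float("inf")
    simplified = [[g11, g12], [g21, g22]]
    if rel12 <= rel21 and rel12 < ratio_threshold:
        simplified[0][1] = 0.0  # drop the weaker path: CV1 from MV2
    elif rel21 < ratio_threshold:
        simplified[1][0] = 0.0  # drop the weaker path: CV2 from MV1
    return simplified

# The CV1-from-MV2 path is weak relative to its primary gain, so delete it
print(weaken_cross_couplings([[2.0, 0.3], [1.5, 1.8]]))
```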

Auto Self-Decoupling using Integrated Absolute Error

Another interesting and powerful method to improve control action in an MPC with direct strong multi-way interaction is to use the new Auto Self-Decoupling rule.  Let us study this rule using a 2X2 decoupler control scheme.  Figure 12 shows a PID tuned using the IAE (integrated absolute error) criterion to achieve a new setpoint of 60 from 50.

Figure 12. PID tuning using Integrated Absolute Error criteria

When tuning PIDs in a real plant, the IAE criterion tends to be too aggressive, as it is based purely on minimizing the error between the PV and the setpoint.  The PID’s output can see a huge sudden rise (called proportional kick).  If the IAE-based tuning is adjusted so that the controller aggressiveness is around half of the full-blown aggressive IAE-based tuning, then the tuning is often good on the real plant.  Mimicking this approach, if the MPC MV trajectory can be made to mimic this half-IAE PID response, then in many cases the fighting decoupler causing the oscillations can be deleted entirely, and the primary MV-CV models alone, without the decouplers, work better.  This concept is called auto self-decoupling.  It is a great way to improve MPC control performance by removing multi-way interactions that always increase the chances of fighting, resulting in oscillations and poor control.
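The IAE criterion used in Figure 12 is just the accumulated absolute deviation between the PV and the setpoint over the response.  A minimal sketch computing IAE for a simulated setpoint step from 50 to 60, with a hypothetical first-order closed-loop response standing in for the tuned PID loop:

```python
import math

def integrated_absolute_error(setpoint, pv_trajectory, dt):
    """IAE = integral of |SP - PV| dt, approximated by a rectangle sum."""
    return sum(abs(setpoint - pv) * dt for pv in pv_trajectory)

# Setpoint step from 50 to 60; PV approaches 60 as a first-order response
# with a 5-time-unit closed-loop time constant (an assumed dynamic)
sp, tau, dt = 60.0, 5.0, 0.1
pv = [60.0 - 10.0 * math.exp(-k * dt / tau) for k in range(1000)]
iae = integrated_absolute_error(sp, pv, dt)
print(iae)  # close to the analytical value 10 * tau = 50
```

A more aggressive tuning shrinks this number; the point in the text is that driving it all the way to its minimum usually costs too much MV movement.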

Simplifying Complex Models – Delete Beta Factors

The real and precise model may have a shape as shown in Figure 13 – the “True Model”.  The model displays a strong “Beta” factor, with a fast significant rise followed by a slower decay to the new steady state with a lower process gain.  Using such models, with a strong initial rise and a high dynamic gain, may complicate the closed-loop performance of an MPC system.  It could lead to CV oscillations or ripples in the MV response.  It may needlessly move the MV up and down trying to predict and control the CV better, but this will always be at the expense of undesirable MV movement.  Practically, even if you are confident that you have an excellent and accurate model, the “True Model” in Figure 13, it is better to downgrade and simplify it to the “Preferred Model”.  This simplification will lead to a more stable and robust MPC without compromising MPC control quality or MPC benefits.

Figure 13. Reducing high beta models to first order models

PID Tuning Configuration, Monitoring and Optimization

An MPC may have (say) 10 MVs that write setpoints to ten slave PID control loops.  But the whole plant may have 400 PID control loops.  Though only ten PID control loops are directly connected to the MPC, the other 390, by way of being neighboring interacting loops, can indirectly affect and interact with the MPC slave PIDs via heat integration, mass balance recycle, or other direct or indirect effects, and can thus impact how the MPC works.  Experience and research show that MPC is often regarded as utopian magic, as if, because of the MPC, PID control loop configuration and tuning were no longer important.  This can be a big mistake.  Modern plants are very interactive by nature due to heat balance and mass balance integration.  A seemingly unimportant PID control loop in an apparently peripheral section of the plant can produce ripples, and these can impact other PIDs and eventually produce ripples in the MPC.

Online PID control monitoring tools like APROMON (Advanced Process Monitoring) from PiControl Solutions and PITOPS (Process Identification Controller Tuning Optimizer Simulator) are simple, fast and practical.  They list all PID control loops having high error, oscillations, control valve problems or sensor problems, display over 25 control and statistical criteria, and help to properly maintain and optimize the base PID control loops, covering not just the PID loops connected to the MPC but all PID control loops in the entire plant.

Closely related to PID control loop tuning and monitoring are the important relationships and dynamics between certain variables.  Certain variables need to be in the DCS, as part of DCS-based APC.  There are various reasons why a few variables are better off in the DCS layer rather than in an MPC.  A good example: if most MPC variables have a very long time to steady state (settling time) but a few variables have relatively much shorter settling times, then the faster variables might be better off in the DCS layer.  Experience and process knowledge need to be used in deciding which variables belong in the MPC and which belong in the DCS layer.  The performance of many MPCs is poor because all variables were included in the MPC in the belief that MPC would outperform PID control action.  An optimal blend of DCS PID control loops and MPC variables, based on process dynamics, characteristics and knowledge, will help in making the right selection.

Simple and Compact Operator User Interface

For MPC success, the needs of the control room operator need to be taken seriously into consideration.  The operator is the one responsible for running the plant.  If he does not understand the MPC user interface well or is overwhelmed by a large number of variables, fields and trends, this could result in a low MPC usage factor.  The operator’s job is difficult, with numerous things happening in real time.  The MPC must be designed to be robust and reliable, and must stay ON easily with minimal need to look at the MPC user interface.  A good operator-MPC user interface will have just the minimum number of CV target limits the operator needs to change, e.g., the production rate target, which needs to be changed on a regular basis depending on market conditions, economics and tank levels.  The CV target limits for (say) a high temperature constraint or the maximum allowable position of a control valve never change and ought not to clutter the operator-MPC user interface.  In addition to the CV targets that need to be changed frequently, there needs to be the MPC On/Off switch and a message (or messages) showing the MPC status or some useful diagnostics.  If the MPC is turned Off automatically due to bad values or some software issue, then messages informing the operator that the MPC has been turned Off need to be generated immediately.

10. Summary

MPC has seen almost 40 years of usage.  Many useful learnings have been summarized in this chapter.  The most important components of the MPC are the dynamic models; nothing is more important than their quality, and good and accurate models are 85% of the basis and requirement for MPC success.  Tuning and optimizing the base-layer PID control loops is important and is a missed opportunity in many modern MPC applications.  Complicating the MPC control matrix with too many complex models, which are likely to be nonlinear, or using models with very low gains relative to other variables, can also cause MPC deterioration.  The information provided in this chapter, if applied to an MPC project, will help to improve the MPC performance and onstream factor and help maximize the monetary, tangible and intangible benefits in the plant.
