PETROCONTROL
Advanced Control and Optimization

What is advanced process control?

By Y. Zak Friedman, PhD, Principal Consultant
34 East 30th Street, New York, NY 10016 • 212-481-6195 • Fax: 212-447-8756 • Zak@petrocontrol.com

A stream of email that followed my January and February editorials [1, 2] has convinced me that an APC tutorial would be beneficial. First, a conceptual discussion: what does APC attempt to do, and how does it make money?

At the first level, APC aims to produce products at target qualities while keeping the unit within constraints. Handling disturbances such as crude switches, coker drum switches, FCC feed switches, and ethylene cracker furnace starts and stops is no small feat, and APC that can keep product qualities steady during these disturbances eliminates product downgrading and reduces the potential for incidents. Moreover, this basic task is a prerequisite that must be in place before we attempt further optimization. Optimization involves moving the unit up and down against constraints, and APC must keep the product qualities constant during this self-inflicted disturbance, or else optimization becomes counterproductive. One cannot overemphasize the warning that unit optimization must not be started before the APC can handle quality control in the presence of disturbances.

But product qualities, as well as certain important constraint variables, are not measured, and if we are to push the unit against real constraints we must calculate the unmeasured control variables inferentially. Unmeasured constraint variables are typically column tray loading, rate of catalyst coking and the like. In the past it was common to rely on on-stream analyzers for measurement of product qualities, but analyzers, in addition to being expensive, require maintenance. We used to have an unofficial standard of about two man-weeks per year per analyzer for maintenance, but most refineries are no longer willing to dedicate that amount of labor, and analyzer reliability has dropped to the point that it may be unsafe to use certain analyzers in closed loop.

On the strength of level 1, APC level 2 aims to maximize the usefulness of the unit in question, taking the unit to maximum throughput (or another key economic driver, but to simplify the discussion I will continue to refer to throughput), again while keeping the products at economical quality targets. Ignoring for a minute the dynamic difficulties of operating the unit, level 2 is easy to achieve: APC nudges the feed higher and higher until one of the unit constraints is met (see the sketch after this overview). Is that a big deal? After all, the operator can also maximize throughput to a pump limit or another constraint. Still, APC handles the dynamics of constraint pushing better than the average operator, and it can typically increase throughput by 1-2% compared with an average operator, and more if the constraints are dynamically difficult to control.

APC level 3 is trickier. There are usually enough degrees of freedom to move the unit in a direction that alleviates the active constraint. For example, consider the trade-off between reactor throughput and severity. We could reduce severity, lose some yield and increase throughput even more. In some cases the economics of making such a move are straightforward and do not change with time; then we can easily incorporate constraint-relieving logic into the application. In other cases the economics change from day to day, and the unit behavior is also not constant in time.
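To make the level 2 idea concrete, here is a minimal sketch of the constraint-pushing logic described above: nudge the feed target up while every constraint variable still has margin, and back off as soon as a limit is violated. The code is illustrative only; the tag names, limits and step size are invented for the example, and in a real MVPC the same effect comes from the controller's dynamic moves rather than a scripted loop.

```python
# Illustrative level-2 constraint pushing (hypothetical tags, limits and step size).
# Each control cycle the feed target is nudged up while every constraint CV has
# margin, and stepped back down as soon as any constraint limit is violated.

FEED_STEP = 0.5        # t/h change per cycle (illustrative)
FEED_MAX = 250.0       # absolute feed limit (illustrative)

CV_HIGH_LIMITS = {     # constraint CVs and their high limits (illustrative)
    "pass_outlet_temp": 505.0,   # degC
    "column_delta_p": 0.55,      # bar, a stand-in for tray loading
    "charge_pump_amps": 92.0,
}

def next_feed_target(feed_target, cv_readings):
    """Return the feed target for the next cycle given current constraint readings."""
    if any(cv_readings[name] > limit for name, limit in CV_HIGH_LIMITS.items()):
        return feed_target - FEED_STEP       # back away from the violated constraint
    if all(cv_readings[name] < 0.98 * limit for name, limit in CV_HIGH_LIMITS.items()) \
            and feed_target < FEED_MAX:
        return feed_target + FEED_STEP       # keep pushing toward a constraint
    return feed_target                       # hold at the constraint
```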
Thus APC's third level is only partially achievable, and the degree to which it is achieved depends not only on changes of economic direction but also on the strength of application design and implementation.

With the development of fast computing and rigorous unit models came the notion that rigorous models can precisely estimate the effect of, say, reducing reactor severity on the unit, determining whether severity should be decreased or increased. I would name this technology RETRO (real-time rigorous optimization) and stay away from commercial names. Initially this seemed an excellent idea, and those of us enamored with chemical engineering models, myself included, thought that while there are problems making this technology work, in the end it would be a reasonable way to optimize a unit. This may still prove correct in the remote future, but RETRO as it is used now has not been productive, and I have written papers and editorials [3, 4, 5] advising people to hold off on RETRO for now. There is no point repeating the discussion of the many problems, except to add that even if one is optimistic that the problems are not insurmountable, there is still the question of whether we want to spend 90% of the money and manpower to achieve the last 20% of the benefits. The rest of this editorial ignores RETRO, because the way it is presently implemented is not productive.

We now leave the philosophical concepts and go into the structure of a modern APC application, shown in figure 1. At the heart of this application is an MVPC (multivariable predictive controller), which reads all unit constraints and sets the manipulated variables. Two and three decades ago we used to make a distinction between constraints and operating targets. Operating targets were typically product qualities, measured by analyzers, and those were ideally to be kept on target. Constraints, on the other hand, were to be kept always below (or always above) their limits. APC applications worked to satisfy operating targets while maximizing throughput against constraints. The control logic was configured on a host computer as a mixture of control block structure plus custom code. When industry moved to standardize on MVPC controllers, the distinction between targets and constraints blurred, and they all became control variables with minimum and maximum limits. APC practitioners still tried to imitate the old approach by setting narrow ranges for variables with operating targets; however, MVPCs, especially large ones with many models, often became unstable with narrow ranges, and while the better applications do work with narrow ranges on target variables, the trend has been to widen the ranges.

The stability problems have to do with the MVPC's ability to predict the future behavior of control variables. First, MVPC dynamic models are obtained experimentally by step testing the unit in the presence of feed quality drifts, weather changes and other uncertainties, which often make it difficult to obtain good models. Secondly, MVPCs employ linear models to predict the behavior of nonlinear processes, and models obtained at certain operating conditions are liable to be wrong at other conditions. Thirdly, MVPCs do not support cascade structures, so the stabilizing influence of cascade configurations cannot be taken advantage of.
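The linear prediction machinery that these stability problems revolve around can be shown in a few lines. The sketch below is a generic step-response (DMC-style) prediction for one CV and one MV, not any vendor's algorithm; the coefficients stand in for a model identified from a plant step test, and the point is that the same fixed coefficients are applied at every operating condition, which is exactly where nonlinearity hurts.

```python
import numpy as np

# Generic step-response prediction for one CV / one MV (illustrative coefficients).
# step[k] is the CV response k intervals after a unit step in the MV, as would be
# identified from a plant step test.
step = np.array([0.0, 0.12, 0.35, 0.58, 0.74, 0.84, 0.90, 0.94, 0.96, 0.97, 0.98])

def predict_cv(cv_now, mv_moves, horizon):
    """Predict the CV trajectory over `horizon` intervals by superposition of the
    responses to a sequence of future MV moves (mv_moves[i] is the move at interval i)."""
    pred = np.full(horizon, cv_now, dtype=float)
    for i, du in enumerate(mv_moves):
        # each move contributes its own scaled, time-shifted copy of the step response
        for k in range(i, horizon):
            pred[k] += du * step[min(k - i, len(step) - 1)]
    return pred

# Example: an MV move of +2.0 now, partially taken back three intervals later.
print(predict_cv(cv_now=78.0, mv_moves=[2.0, 0.0, 0.0, -0.5], horizon=10))
```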
For example, a cascade of property inference to tray temperature to reboiler heat duty to flow can be accomplished only if the tray temperature setpoint is a manipulated variable of the MVPC, with the temperature-to-duty-to-flow cascade configured in the DCS. Many MVPC implementers skip the temperature and heat duty controllers because of the complexity and set the flow directly as a manipulated variable.

Academia should perhaps be called to task to explain why MVPC technology has changed so little in the past 30 years. Why is it not possible to include the temperature and heat duty of the example above as intermediate variables? After all, the exclusion of cascades from MVPC technology is not for any fundamental reason; it is simply a limitation of the MVPC structure in use today.

There seems to be a promising way to address the model nonlinearity problem: the use of rigorous or semi-rigorous process models to predict MVPC model gains scientifically. The improved accuracy would be of great help, because it would not only enhance stability but also permit more precise level 3 constraint balancing. Ideally we would compute those gains in real time as a function of operating conditions, update the MVPC model and thus effectively linearize the MVPC model around the current conditions. Each process gain of the MVPC dynamic model is a partial derivative of the rigorous model. There are no iterations involved, nor convergence problems, just the creation of a Jacobian matrix of partial derivatives. Older MVPC software did not permit changes on the fly, but current MVPCs separate gains from dynamics and can accept at least gain changes. Honeywell has done some interesting initial work on continuous updating of model gains by rigorous simulation [6], but the Honeywell modeling group was later sold to KBC, and to my knowledge this development has been discontinued. We would welcome a comment from Honeywell about this issue.

Today we must accept that the MVPC by itself works on wide ranges, and its main task is to keep the unit operating within an operating envelope. What gives the MVPC its added value is a small optimizer, SALP (small approximate linear program), that determines which of the constraints are to be pushed against. SALP is an integral part of every MVPC, and its main function is to calculate manipulated variable steady-state values that narrow the control ranges on variables with genuine operating targets. As opposed to the MVPC, which should act aggressively if limits are violated, SALP nudges the manipulated variables to their near-optimal positions, thus achieving the operating targets without losing stability, albeit slowly. This permits the application first to meet the requirements of APC level 1, and second to push the throughput up while satisfying the operating targets. SALP also attempts the APC level 3 constraint balancing: alleviating active constraints based on some rudimentary economic rules to make room for more feed, though that function is more problematic. SALP is driven by a steady-state model, which uses the process gains of the MVPC dynamic models plus prices set on MVs and CVs. On paper SALP could be constantly updated with the economics of the day, and then it would correctly optimize the unit, but that is not commonly done; changing the performance function of SALP daily is too labor-intensive.
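Both ideas in the passage above (MVPC gains taken as partial derivatives of a rigorous model, and SALP choosing steady-state MV targets from those gains plus prices) can be sketched together. Everything below is illustrative: the "rigorous" model is a toy two-MV, two-CV function, and the prices and limits are invented; a real application would use the vendor's LP and a proper unit model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in for a rigorous steady-state model: CVs = f(MVs).
# MVs: [feed rate, reactor severity]; CVs: [conversion, column loading].
def rigorous_model(u):
    feed, severity = u
    conversion = 100.0 * (1.0 - np.exp(-0.04 * severity)) - 0.05 * feed
    loading = 0.002 * feed**1.3 + 0.01 * severity
    return np.array([conversion, loading])

def gain_matrix(u, h=1e-4):
    """Jacobian of the steady-state model by finite differences:
    G[i, j] = d(CV_i) / d(MV_j) at the current operating point."""
    y0 = rigorous_model(u)
    G = np.zeros((y0.size, u.size))
    for j in range(u.size):
        up = u.copy()
        up[j] += h
        G[:, j] = (rigorous_model(up) - y0) / h
    return G

u0 = np.array([180.0, 70.0])     # current MV values
y0 = rigorous_model(u0)          # predicted steady-state CVs
G = gain_matrix(u0)              # linearized gains around the current point

# SALP-style LP: minimize the cost of MV moves (a negative price rewards a move,
# here more feed) subject to CV and MV limits around the current point.
prices = np.array([-1.0, 0.2])                       # illustrative economics
cv_lo, cv_hi = np.array([60.0, 0.0]), np.array([100.0, 2.5])
mv_lo, mv_hi = np.array([150.0, 50.0]), np.array([220.0, 90.0])

A_ub = np.vstack([G, -G])                            # cv_lo <= y0 + G @ du <= cv_hi
b_ub = np.concatenate([cv_hi - y0, y0 - cv_lo])
bounds = list(zip(mv_lo - u0, mv_hi - u0))           # MV limits expressed as move limits

res = linprog(prices, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("MV moves:", res.x, "-> new MV targets:", u0 + res.x)
```

With these invented numbers the LP trades severity for feed, which is the level 3 behavior discussed earlier, but only because the prices say so; the caution about pushing the unit in the wrong direction applies just as much to a sketch like this.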
Further, for economic optimization to work correctly the unit behavior models must also be accurate, and linear empirical models do not come close to the accuracy needed for detailed optimization. One might say that while there is no economic optimization, SALP sets priorities to balance and relieve certain constraints over others.

Does approximate constraint balancing make money? Reconsider our example of reducing reactor severity to alleviate throughput constraints and then pushing the throughput higher. If such a decision is valid all the time, or even seasonally, then approximate constraint balancing makes money. But if the validity of such a decision varies from day to day, then SALP should leave the severity decision to the operator. There are trade-offs in every unit that are more or less always valid, and thus simplified constraint balancing can make money. Having said that, the APC engineer must always be there to check whether the unit is being pushed in a reasonable direction. It is all too easy to lose money by pushing APC in the wrong direction. I keep referring to SALP as linear, and that is not entirely true. Most products have QP (quadratic programming) ability, but since we do not usually update the economics, there is no incentive to add the QP complexity.

While MVPC and SALP are standard tools that can be made to work with good engineering, inferential models of unmeasured control variables are not standard and hence more problematic. High-fidelity inferential models are essential for the success of APC because, as SALP attempts to push the unit against constraints, the operator needs to know that products are on spec, columns will not flood and catalyst will not deteriorate quickly. What is the point of pushing a reactor to high severity if that would cause premature catalyst deterioration? Good operators have inferential knowledge, not in a mathematical form but as pattern recognition; APC, however, requires a mathematical form. Our industry by and large has made the mistake of replacing operator knowledge with regression models for the inferences, and my February editorial [2] explains why that is not a good idea. I do not understand why industry has failed to address this important issue. After all, what is an inferential model? The patterns that operators try to maintain indicate that there are chemical engineering relations between measurements and product qualities. The patterns may be incomplete, meaning some key measurements are missing, and in those cases controlling the unit is quite difficult. As part of developing the inferential models one must identify those missing measurements to improve controllability.

I must pause here and issue a statement about my involvement in inferential models. People have accused me of being self-serving when speaking in public about the need for first-principles inferential models. That is not true, and I would not abuse my editorial position to say anything I do not believe in. I started dealing with the inferential control problem many years ago for three reasons: one personal, one of necessity and one commercial. The personal reason is love of chemical engineering models; I could and have used simulations and engineering models in a variety of applications not related to inferential modeling. The necessity reason showed up while working on APC applications, where I had to come up with inferential solutions to make them work.
Upon starting Petrocontrol in 1992 I discovered the commercial reason: there is a great need for good first-principles inferential models and very little competition. That is still our situation, and one might say that by pointing out this need, and even suggesting ways to achieve good inferential models, I am encouraging competition rather than suppressing it.

Well-designed modern APC applications employ inferential models even where reliable analyzers exist. Inferential indications typically lead analyzer readings by one hour, and that is a significant dynamic control advantage. If MVPCs could take a cascade structure it would have been ideal to set the analyzer as the primary controller and the inference as a secondary slave controller, but as that is not feasible, the accepted practice is to use the inference as the CV while the analyzer slowly updates an inferential bias via a Smith-predictor-like algorithm (a sketch of such a bias update appears after the references).

That is the end of our APC tutorial. To summarize, there are three main pieces in this puzzle: MVPC, SALP and inferential models. MVPC and SALP are packaged software, which may be imperfect but have the advantage of a standard approach. Inferential models are not packaged software, and that makes them the crucial component that can make or break a project.

Between the lines I have also tried to discuss the task of the APC engineer. Given the loose ends of APC technology, it is not easy to accomplish a successful APC application. APC engineers must be thoroughly knowledgeable in the unit's chemical engineering, operation and economics. They must stay with the application after commissioning, dedicating perhaps 30% of their time to each major application; any application without attention deteriorates rapidly. In that respect the fourth piece of this puzzle is the human element and the level of support given to working APC applications.

References
1. Friedman, Y. Z., "Has the APC industry completely collapsed?", editorial, Hydrocarbon Processing, January 2005.
2. Friedman, Y. Z., "More about inferential control models", editorial, Hydrocarbon Processing, February 2005.
3. Friedman, Y. Z., "What's wrong with closed loop optimization?", Hydrocarbon Processing, October 1995.
4. Friedman, Y. Z., "More about closed loop optimization", editorial, Hydrocarbon Processing, August 1998.
5. Friedman, Y. Z., "Closed loop optimization update", editorial, Hydrocarbon Processing, January 2000.
6. Nath, N., Alzein, Z., Pouwer, R. and Lesieur, M., "On-line dynamic optimization of an ethylene plant using Profit Optimizer", NPRA Computer Conference, November 1999.

Fig. 1. The structure of a modern APC application: a block diagram linking the SALP optimizer (desired MV steady-state values; MV and CV ranges; steady-state model gains; infrequent economic updates), the inferential model (inference inputs; unmeasured CVs) and the MVPC controller (operator-set MV and CV ranges; measured CVs) to the unit's DCS loops and indications (TC, FC, PC, TI, FI, LI, PI). © Petrocontrol
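Returning to the analyzer-update practice described above (the inference used as the CV, with the analyzer slowly correcting an inferential bias): the sketch below is an illustrative rendering of that idea, not any vendor's algorithm. Each new analyzer result is compared with the inference value that was current when the sample was taken, and the difference is filtered into a slowly moving bias; the dead-time alignment is what makes it Smith-predictor-like.

```python
from collections import deque

class BiasedInference:
    """Inference used as the CV; the analyzer slowly updates a bias (illustrative).

    analyzer_delay: control intervals between the sample being drawn and the
    analyzer result arriving.  filter_gain: how quickly the bias moves (a
    first-order filter on the prediction error).  Both values are placeholders.
    """

    def __init__(self, analyzer_delay=12, filter_gain=0.1):
        self.history = deque(maxlen=analyzer_delay + 1)   # past raw inferences
        self.bias = 0.0
        self.filter_gain = filter_gain

    def update_inference(self, raw_inference):
        """Call every control interval; returns the biased inference used as the CV."""
        self.history.append(raw_inference)
        return raw_inference + self.bias

    def update_analyzer(self, analyzer_value):
        """Call whenever a new analyzer result arrives; nudges the bias."""
        if not self.history:
            return self.bias
        inference_at_sample = self.history[0]              # aligned with the sample time
        error = analyzer_value - (inference_at_sample + self.bias)
        self.bias += self.filter_gain * error              # slow, filtered correction
        return self.bias
```

Because the analyzer only trims a slowly moving bias, a late or missing analyzer result does not upset the controller; the inference keeps carrying the fast control action, which is the dynamic advantage mentioned above.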