Download English ISI Article No. 4788
Article Title

Multi-model based real-time final product quality control strategy for batch processes
Article Code | Year of Publication | Pages
4788 | 2009 | 12 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Computers & Chemical Engineering, Volume 33, Issue 5, 21 May 2009, Pages 992–1003

Keywords
Quality control - Optimization - Model predictive control - Design of experiments - Batch operations

Abstract

A novel real-time final product quality control strategy for batch operations is presented. Quality control is achieved by periodically predicting the final product quality and adjusting process variables at pre-specified decision points. This data-driven methodology employs multiple models, one for each decision point, to capture the time-varying relationships. These models combine real-time batch information from process variables and initial conditions with information from prior batches. Design of experiments is performed to generate informative data that reveal the relationship between process conditions and the final product quality at various times. Control action is also taken at pre-specified decision points; at these times, the manipulated variable values are calculated by solving an optimal control problem similar to model predictive control. A key benefit of this strategy is that missing data imputation is obviated. The proposed modeling and quality control strategy is illustrated using a batch reaction case study.
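As a rough illustration of the multi-model idea described in the abstract, the Python sketch below fits one PLS model per decision point, each trained only on the measurements and initial conditions available up to that point, so no future data need to be imputed. The synthetic data, array shapes, and variable names are illustrative assumptions, not the authors' notation or case study.

# A minimal sketch of the multi-model idea: one quality model per decision
# point, trained only on data available up to that point (no imputation).
# Shapes and names are illustrative assumptions, not the paper's notation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_batches, n_samples, n_vars = 60, 100, 4             # historical batches, samples, variables
X = rng.normal(size=(n_batches, n_samples, n_vars))   # process variable trajectories
z0 = rng.normal(size=(n_batches, 2))                  # initial conditions
y = rng.normal(size=(n_batches, 1))                   # end-of-batch quality

decision_points = [25, 50, 75]                        # pre-specified sample indices
models = {}
for k in decision_points:
    # Unfold each batch up to sample k and append the initial conditions.
    Xk = np.hstack([X[:, :k, :].reshape(n_batches, -1), z0])
    models[k] = PLSRegression(n_components=3).fit(Xk, y)

# At run time, the model for decision point k predicts final quality from
# the partially completed batch; no future measurements are required.
x_new = rng.normal(size=(1, 25 * n_vars + 2))
print(models[25].predict(x_new))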

Introduction

The batch mode of production is common for manufacturing many value-added products such as pharmaceuticals and agro-chemicals. For a specific batch production, the possible synthesis routes have already been investigated by the chemists, and a recipe and operation mode selected. During each batch, one usually needs only to follow the pre-specified procedures and establish the prescribed process conditions; these are repeated batch after batch. Batch processes usually suffer from a lack of reproducibility from batch to batch due to changes in raw material purities, variations in initial conditions, and disturbances. These changes are inherent in the processes and may be difficult for operators to discern a priori, but they can have an adverse effect on the final product quality. Online monitoring of process variables for rapidly detecting abnormalities and taking remedial actions is therefore essential to ameliorate the effects of such changes and to produce on-spec products from each batch.

Statistical process control (SPC) has been used to ensure batch product quality. Numerous SPC approaches have been reported; of these, the use of principal component analysis (PCA) and partial least squares (PLS) for batch process monitoring has been investigated extensively. In such approaches, the behavior of the process is characterized using a statistical model derived through multi-way analysis of online measurements obtained when the process is in a state of statistical control. Subsequently, future unusual events are detected by projecting the process measurements against this "in-control" model (Doan & Srinivasan, 2007; Kourti, Nomikos, & MacGregor, 1995; Nomikos & MacGregor, 1994; Wise, Gallagher, Butler, White, & Barna, 1999; Wold & Sjostrom, 1998). Through such monitoring, an abnormal batch can be detected online, without waiting for the final quality to be measured at the end of the batch.

Even though online process monitoring can detect abnormality promptly, the separation between normal and abnormal batches in terms of product quality is often ambiguous. Quality variations even among normal batches can be quite significant, and some abnormal batches can be rectified by appropriate remedial actions during the batch. This motivates the development of within-batch recovery schemes for final product quality control.

A number of approaches have been developed to reduce the variation in product quality. One of the most popular approaches adjusts the operating conditions of a new batch based on data collected from previous batches, in an attempt to bring the new batch's final quality close to the desired target. In such batch-to-batch control strategies, if the end-of-batch quality measurements consistently show a statistically significant deviation from the nominal case, the operator would identify the cause and adjust the operating conditions for subsequent batches (Crowley, Harrison, & Doyle, 2001; Dong, McAvoy, & Zafiriou, 1996; Edgar et al., 2000; Flores-Cerrillo & MacGregor, 2003; Ott & Schilling, 1990). The common characteristic of these approaches is that the correction is made for a new batch as a whole; they are thus offline quality control strategies.
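As a simplified illustration of the projection-based monitoring mentioned earlier in this introduction (fitting an "in-control" model on normal batches and projecting new batches against it), the sketch below scores batches with a squared prediction error (SPE) statistic against an empirical limit. The synthetic data, batch-wise unfolding, and 95th-percentile limit are assumptions for illustration; the cited works use multi-way PCA/PLS with formal control limits, and online use would score partially completed trajectories.

# A minimal PCA-based monitoring sketch: fit an "in-control" model on
# unfolded data from normal batches and flag batches whose squared
# prediction error (SPE) exceeds an empirical limit. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
normal_batches = rng.normal(size=(50, 100 * 4))   # 50 batches, unfolded (time x variables)

pca = PCA(n_components=5).fit(normal_batches)

def spe(x):
    """Squared prediction error of each batch against the in-control model."""
    residual = x - pca.inverse_transform(pca.transform(x))
    return np.sum(residual ** 2, axis=1)

limit = np.percentile(spe(normal_batches), 95)    # empirical control limit
new_batch = rng.normal(size=(1, 100 * 4)) + 0.5   # batch with a mean shift
print(spe(new_batch) > limit)                     # True would flag the batch as abnormal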
In another approach, a reference trajectory is recommended for all batches; the online control objective in this scheme is to maintain the operating conditions as per the reference, even in the face of disturbances. The reference trajectory is determined a priori by optimizing an off-line process model, derived either from first principles or from mining historical data (Clarke-Pringle & MacGregor, 1998; Srinivasan, Bonvin, Visser, & Palanki, 2003; Srinivasan, Palanki, & Bonvin, 2003; Vander Wiel, Tucker, Faltin, & Doganaksoy, 1992). Even though this approach is effective in rejecting some disturbances, it may not always produce on-spec products at the end of the batch, even with perfect tracking control of the process variables (Russell, Kesavan, Lee, & Ogunnaike, 1998; Russell, Robertson, Lee, & Ogunnaike, 1998). This is because significant batch-to-batch variations can arise in raw material impurity profiles or process parameters (kinetics, heat transfer, etc.) whose effects are not included in the nominal model. In many cases, the raw material has large stochastic variations originating from prior processing steps, and the regulatory model cannot sufficiently capture their subtle effects on the final product quality. Hence, the implementation of an off-line calculated trajectory does not guarantee optimal batch performance.

It is possible to express batch quality control as an optimization problem in which uncertainty in parameters and disturbances is considered (Srinivasan et al., 2003). If an accurate process model is not available, a robust optimization strategy can be used to derive process inputs which, once implemented, would drive the final quality within specs; this approach usually produces a conservative solution. A measurement-based optimization scheme that tracks the necessary conditions of optimality can both cope with uncertainty and lead to less conservative optima. However, it relies on an appropriate parameterization of the input profiles to satisfy the necessary conditions of optimality. The central assumptions here are that the set of active constraints is known a priori and that this set does not change due to process uncertainties, i.e., the structure of the optimal solution of the true system is known a priori.

Mid-course correction (MCC) strategies are used during a batch's evolution in order to reduce variations in final quality. These strategies recognize that process conditions during the batch tend to dominate systematic batch-to-batch variations (Flores-Cerrillo & MacGregor, 2003; Kesavan, Lee, Saucedo, & Kishnagopalan, 2000; Russell, Kesavan, et al., 1998; Russell, Robertson, et al., 1998; Yabuki & MacGregor, 1997). In this approach, online measurements at some mid-course points are used to predict the final product quality. If the predicted quality deviates beyond a statistically defined "in-control" zone, a model is used to calculate the control move that would bring the batch back to statistical control. The success of such schemes depends mainly on the quality of the model. Usually, an inferential model for quality prediction is developed using historical data. For process recovery purposes, the training data needs to contain sufficient input variability and disturbance information to allow proper model identification, i.e., the model identification requires persistency of excitation. Although historical batch information can be used for model development, the control inputs in such data often do not have sufficient excitation, since they were selected based on optimality for a specific batch.
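The mid-course correction logic just described can be caricatured in a few lines: predict the final quality at a mid-course point with an inferential model and, only if the prediction leaves a statistically defined zone, compute a correcting input move. The linear model, gain, target, and zone width below are hypothetical stand-ins, not any specific published scheme.

# A schematic mid-course correction (MCC) step, under the simplifying
# assumption of a linear inferential model y_hat = b @ x + g * u, where u is
# the remaining adjustable input. All numbers are illustrative assumptions.
import numpy as np

b = np.array([0.8, -0.3, 0.5])     # model coefficients for mid-course measurements
g = 1.2                            # sensitivity of final quality to the control move
y_target, half_width = 10.0, 0.5   # quality target and "in-control" zone half-width

def mcc_move(x_mid, u_nominal=0.0):
    """Return an adjusted input only if the predicted quality leaves the in-control zone."""
    y_hat = b @ x_mid + g * u_nominal
    if abs(y_hat - y_target) <= half_width:
        return u_nominal                       # prediction is in control; leave the batch alone
    return u_nominal + (y_target - y_hat) / g  # move that brings the prediction back to target

print(mcc_move(np.array([4.0, 2.0, 9.0])))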
Another significant issue when using models for quality prediction and control in MCC approaches is that the online measurements that form the basis for quality prediction are incomplete, i.e., all the data necessary for predicting the end-of-batch quality become available only when the batch has finished. The conundrum arising from the absence of future data is usually solved in an ad hoc fashion, e.g., by using data imputation methods or by assuming a known correlation between the available measurements and future ones. Such imputation inevitably introduces additional uncertainty into the prediction and control task, especially at early stages of the batch when limited data are available (Nelson, MacGregor, & Taylor, 2006).

In this article, a data-driven real-time batch quality control strategy, similar to model predictive control, is developed. With a control setpoint on the end-of-batch quality, process inputs are manipulated online to steer the process so that the current batch evolves to the desired quality. The main difference from previous approaches is that control actions are taken at discrete, pre-specified time points. At each time point, a data-based model is used for quality prediction and control calculations. Different models are used at different points, and each model exploits all online data available up to that point. This multi-model strategy eliminates the need for data imputation and the consequent uncertainties.

The rest of the article is organized as follows. Next, an overview of the proposed framework is given. In Section 3, the data-driven modeling strategy is discussed; a latent variable method is proposed for relating process conditions to quality variables and for predicting final quality in real time. The real-time batch quality control framework is described in Section 4, where the problem is formulated as an optimization. In Section 5, the proposed strategy is illustrated using a simulated batch process.
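Before turning to the excerpted conclusions, the decision-point control calculation described above can be sketched as an MPC-like optimization: choose the remaining manipulated-variable values so that the decision-point model predicts the target quality, subject to bounds. The quadratic objective, the input-move penalty, and the linear stand-in model below are assumptions for illustration, not the formulation given in the paper's Section 4.

# A minimal sketch of the MPC-like calculation at one decision point: pick the
# remaining manipulated-variable values so the decision-point model predicts the
# target quality, subject to simple bounds. Objective and model are illustrative.
import numpy as np
from scipy.optimize import minimize

def control_move(predict, x_known, u0, u_bounds, y_target, move_weight=0.01):
    """Solve for future inputs u given the measurements x_known available so far.

    predict(x_known, u) -> predicted end-of-batch quality (scalar).
    """
    def objective(u):
        y_hat = predict(x_known, u)
        return (y_hat - y_target) ** 2 + move_weight * np.sum((u - u0) ** 2)
    res = minimize(objective, u0, bounds=u_bounds)
    return res.x

# Hypothetical decision-point model: linear in the known data and future inputs.
predict = lambda x, u: 0.5 * np.sum(x) + np.array([1.0, 0.8]) @ u
u_opt = control_move(predict, x_known=np.array([1.0, 2.0]),
                     u0=np.zeros(2), u_bounds=[(-1.0, 1.0)] * 2, y_target=3.0)
print(u_opt)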

Conclusions

A novel multi-model based real-time quality control strategy for batch/semi-batch processes has been proposed. The proposed multi-model structure steers the end-of-batch quality by prediction and control only at pre-selected decision points rather than at every sample. A different model is used for prediction and control at each decision point; this serves as a simple yet adequate means to predict final quality during process evolution, and it obviates the need to account explicitly for discontinuities, nonlinearities, and future actions. Real-time control of quality is obtained from an online control optimization that is similar to MPC. This optimization-based control scheme ensures that the calculated changes in process variable trajectories are feasible.

The proposed scheme offers several advantages. Foremost is its simplicity, which eliminates any explicit consideration of the future evolution and of differences in batch duration. Most online batch supervision approaches require imputation of the values of future measurements that will affect the final product quality; they also resort to an explicit method for synchronizing batches of different lengths, which comes with concomitant complexities. The proposed scheme avoids both. In contrast to other strategies, the proposed approach does not need all variables to be available at all decision points, since a different model is used at each point. Hence, only data (for example, lab measurements) that would be available at that juncture has to be incorporated, thus obviating the missing-data imputation step that would have been essential if a single model were used for the entire batch.

The proposed approach is generic and can be applied to any batch process. Even though a linear PLS model has been used here, the proposed multi-model strategy is not limited to this type of model; other model structures such as nonlinear PLS or artificial neural networks (Srinivasan, Wang, Ho, & Lim, 2005) can be employed as well, if necessary. Dynamic models that account for time lags in the predictors can also be used if they offer a better representation of the underlying dynamics during a phase (Doan et al., 2007; Srinivasan, Wang, Ho, & Lim, 2004). Further, the proposed scheme builds on the extensive literature on model predictive control, especially for controller design, tuning, and manipulated-variable computation. These benefits originate from the relaxation of the control frequency, from once every sample to once every decision point. The suitable choice of decision points therefore plays a key role; however, decision points can be specified not only in the form of samples (i.e., time-events) but also through state-events such as the values of indicator variables. We will illustrate these avenues offered by the proposed strategy in future communications. Our future work will also develop techniques for refining the model through a batch-to-batch parameter adaptation strategy.
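As a small illustration of the state-event idea mentioned in the conclusions, the sketch below fires decision points when an indicator variable (here, a hypothetical conversion signal) crosses pre-set thresholds, instead of at fixed sample indices. The thresholds and the first-order conversion profile are illustrative assumptions.

# Decision points triggered by state-events rather than fixed time indices:
# a decision point fires the first time an indicator crosses each threshold.
import numpy as np

def state_event_decision_points(indicator, thresholds):
    """Return the first sample index at which each threshold is crossed."""
    points = []
    for th in thresholds:
        crossed = np.nonzero(indicator >= th)[0]
        if crossed.size:
            points.append(int(crossed[0]))
    return points

conversion = 1.0 - np.exp(-0.05 * np.arange(100))   # illustrative first-order conversion profile
print(state_event_decision_points(conversion, thresholds=[0.3, 0.6, 0.9]))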