Download English ISI article no. 23878
English title
Finding a trade-off between observability and economics in the fault detection of chemical processes
Article code: 23878 | Publication year: 2011 | Length: 10 pages (English PDF)
Source

Publisher : Elsevier - Science Direct

Journal : Computers & Chemical Engineering, Volume 35, Issue 2, 9 February 2011, Pages 319–328

Keywords
Observability, Fault detection, Average run length, Economic significance
Article preview

Abstract

This paper presents a methodology to quantitatively gauge the potential economic loss due to unobserved faults when standard statistical monitoring charts are used. It is shown that in closed-loop operation, a shorter time to detection may result from retuning the controller, at the expense of higher product variability. Accordingly, an optimization approach is proposed for finding a trade-off between the economic losses resulting from lack of detection and the losses resulting from higher product variability. In order to account for faults with different frequency contents, the method is applied in the frequency domain. The proposed optimization-based methodology is then validated in the time domain.

Introduction

The need for efficient and profitable operation in the chemical industries requires the use of effective process monitoring strategies. Venkatasubramanian et al. (2003a, 2003b, 2003c) emphasized that the petrochemical industry loses over $20 billion per year due to inappropriate responses to abnormal process behavior. Thus, faults have a serious impact on process economics, product quality, safety, productivity and pollution levels. A fault may be defined as a deviation of at least one variable from an acceptable level (Isermann, 2006). The survey papers (e.g. Gertler, 1988; Himmelblau, 1978; Isermann, 1984; Willsky, 1976) provide a summary of early work in this area, and Venkatasubramanian et al. (2003a, 2003b, 2003c) provide a more recent account. Most of the available fault detection algorithms involve comparing the observed behavior of the process to the corresponding output of a reference model, which may be mechanistic, empirical or semi-empirical (Venkatasubramanian et al., 2003a, 2003b, 2003c). If the fault is observable, the fault detection scheme generates fault symptom patterns, which in turn are fed to the fault diagnosis scheme to determine the root cause of the observed abnormal behavior. A fault diagnostic system is thus composed of a detection algorithm followed by a diagnosis scheme. An observable fault is defined as one that can be detected, or observed, from the chosen set of measured variables in spite of the background noise. Lack of observability results in suboptimal operation due to the presence of an undetected fault. When data are collected from a process while a fault is occurring, the application of a given statistical model to these data, either univariate or multivariate, is supposed to indicate the presence of the fault.
If the statistical model fails to indicate the fault, this may signify that the specific fault cannot be observed with that particular model. The most common reasons for this lack of observability are: (a) the measured process variables exhibit low signal-to-noise ratios, and (b) the measured variables do not contain sufficient information regarding the fault, so that more representative variable(s) should be used for detection (Raghuraj et al., 1999; Kourti, 2002). The latter reason is especially important when the variables used for detection are tightly controlled to satisfy quality requirements, leaving little information for the fault detection scheme. In that case, detecting a fault may require increasing the variability, for example by detuning the controller, so that the fault can be observed. On the other hand, detuning the controller deteriorates closed-loop performance and may cause loss of profit due to higher product variability. Hence, there is a trade-off between fast fault detection on the one hand and good control on the other. Most available fault detection systems, in particular data-driven techniques, are implemented as a supplement to the existing control system. Despite the significant amount of research in fault detection, the interaction between control and fault diagnosis has not been extensively studied, particularly in the context of fault observability and fault distinguishability. Jacobson and Nett (1991) proposed a four-parameter controller setup as a generalization of two-degree-of-freedom controllers, and Tyler and Morari (1994) reformulated the four-degree-of-freedom controller into a general framework to which tools from optimal and robust control were applied. The main conclusion of their studies was that when uncertain plants are used in synthesizing a model-based controller, the control and diagnostic systems must be synthesized simultaneously.
The main drawbacks of these approaches are: (a) they did not use standard fault diagnostic algorithms (e.g. exponentially weighted moving average (EWMA), cumulative sum (CUSUM), principal component analysis (PCA), partial least squares (PLS), etc.), and (b) they did not address the economic impact of unobservable faults. The focus of this work is to investigate the simultaneous design of the controller and the fault diagnosis scheme to enhance fault observability while mitigating, through control, the impact of unobserved faults. This work addresses these topics as follows: 1. Tabular CUSUM- and T2-PCA-based algorithms are used for detection in the univariate and multivariate cases, respectively. Under a low signal-to-noise ratio, it is shown that these algorithms require a certain period of time to detect certain classes of faults. Accordingly, the observability of a fault is related to its duration or, alternatively, to its frequency. 2. The tuning parameters of the closed-loop controller are optimized to achieve an optimal trade-off between the economic losses that may result from high-frequency faults (relative to a statistical monitoring chart) and those from closed-loop variability. The paper is organized as follows. In Section 2, definitions and theoretical background are presented. The details of the algorithm and the models are given in Section 3. To illustrate the methodology, a simulation example based on an endothermic continuous stirred tank reactor is presented in Section 4. Analysis and discussion of the results are presented in Section 5, followed by conclusions.
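The two detection schemes named in item 1 above are standard charts. The following is a minimal, illustrative sketch (not the authors' implementation: the noise levels, fault sizes, CUSUM constants k and h, and the toy 4-sensor/2-factor PCA model are all invented) of a two-sided tabular CUSUM and a PCA-based Hotelling T² monitor:

```python
import numpy as np

def tabular_cusum(x, mu0, sigma, k=0.5, h=5.0):
    """Two-sided tabular CUSUM; slack k and decision limit h in sigma units.
    Returns the index of the first out-of-control sample, or None."""
    c_plus = c_minus = 0.0
    for i, xi in enumerate(x):
        z = (xi - mu0) / sigma
        c_plus = max(0.0, c_plus + z - k)
        c_minus = max(0.0, c_minus - z - k)
        if c_plus > h or c_minus > h:
            return i
    return None

def fit_t2_monitor(X_train, n_comp):
    """Fit PCA on normal-operation data; return a Hotelling T^2 scorer."""
    mu = X_train.mean(axis=0)
    _, s, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_comp].T                           # retained loadings
    lam = s[:n_comp] ** 2 / (len(X_train) - 1)  # score variances
    return lambda x: float(np.sum(((x - mu) @ P) ** 2 / lam))

rng = np.random.default_rng(0)

# Univariate case: a 2-sigma step fault buried in unit-variance noise.
y = rng.normal(0.0, 1.0, 400)
y[200:] += 2.0                      # fault appears at sample 200
alarm = tabular_cusum(y, mu0=0.0, sigma=1.0)

# Multivariate case: 4 correlated sensors driven by 2 latent factors.
W = rng.normal(size=(2, 4))
X_train = rng.normal(size=(500, 2)) @ W + 0.1 * rng.normal(size=(500, 4))
t2 = fit_t2_monitor(X_train, n_comp=2)
X_ok = rng.normal(size=(100, 2)) @ W + 0.1 * rng.normal(size=(100, 4))
X_fault = X_ok + 4.0 * W[0]         # sensor bias along a retained direction
t2_ok = np.mean([t2(x) for x in X_ok])
t2_fault = np.mean([t2(x) for x in X_fault])
print(alarm, t2_ok, t2_fault)
```

The gap between the fault onset at sample 200 and the CUSUM alarm is the out-of-control run length that the paper relates to observability. Note also that a bias orthogonal to the retained PCA subspace would leave T² essentially unchanged, which is one way a fault can be unobservable to a given chart.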

Conclusion

In the present work, a methodology has been developed to quantify the cost associated with faults that have different degrees of observability. The main objective was to consider the economic consequences associated with these faults. The proposed methodology minimizes, over the frequency domain, the cost associated with the quality characteristic variable(s), the operating cost and the cost associated with control changes, while adjusting the controller tuning parameters and the parameters of the fault detection algorithm. Observability of the fault has been gauged using the concept of the out-of-control average run length (ARL) and incorporated within the proposed framework. The method has been tested on an endothermic continuous stirred tank reactor (CSTR). Two faults have been considered: a low-frequency square wave in the inlet concentration and valve stiction. Tabular CUSUM and multivariate PCA were used for detection. Each of these methods was found to perform better for a particular type of fault. For example, CUSUM was found suitable for detecting faults that consist of changes in the mean, whereas multivariate PCA is more suitable for detecting changes in both the mean and the variance. The results have been validated in the time domain to test the suitability of using Parseval's theorem to quantify the chosen variabilities.
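The time-domain validation mentioned above rests on Parseval's theorem, which equates the energy of a signal computed in the time domain with the energy computed from its discrete Fourier transform. A quick numerical check of the identity (illustrative only; the signal is random noise, not data from the CSTR study):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=1024)        # deviation of a quality variable from target

# Time-domain "variability": sum of squared deviations.
time_energy = np.sum(y ** 2)

# Frequency-domain equivalent via the DFT (Parseval's theorem).
Y = np.fft.fft(y)
freq_energy = np.sum(np.abs(Y) ** 2) / len(y)

print(time_energy, freq_energy)  # agree to floating-point precision
```

This is why a cost expressed as an integral of squared deviations over frequency can be validated against the same cost accumulated in time.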