Model-based iterative learning control with a quadratic criterion for time-varying linear systems
|Article code||Publication year||English article||Persian translation||Word count|
|26917||2000||17-page PDF||Available on order||Not calculated|
Publisher: Elsevier - Science Direct
Journal : Automatica, Volume 36, Issue 5, May 2000, Pages 641–657
In this paper, iterative learning control (ILC) based on a quadratic performance criterion is revisited and generalized for constrained, time-varying linear systems subject to both deterministic and stochastic disturbances and noises. The main intended area of application for this generalized method is chemical process control, where excessive input movements are undesirable and many process variables are subject to hard constraints. It is shown that, within the framework of the quadratic-criterion-based ILC (Q-ILC), various practical issues such as constraints, disturbances, measurement noises, and model errors can be considered in a rigorous and systematic manner. Algorithms for the deterministic case, the stochastic case, and the case with bounded parameter uncertainties are developed, and relevant properties such as asymptotic convergence are established under some mild assumptions. Numerical examples are provided to demonstrate the performance of the proposed algorithms.
Iterative learning control (ILC) was originally proposed in the robotics community (Arimoto, Kawamura & Miyazaki, 1984) as an intelligent teaching mechanism for robot manipulators. The basic idea of ILC is to improve the control signal for the present operation cycle by feeding back the control error from the previous cycle. Even though mainstream ILC research has thus far been carried out with mechanical systems in mind, chemical and other manufacturing processes could also benefit significantly from it. Batch chemical processes such as the batch reactor, batch distillation, and heat-treatment processes for metallic or ceramic products are good examples. Traditionally, operation of these processes has relied exclusively on PID feedback and logic-based controllers. Refinement of input bias signals based on the general concept of ILC can potentially enhance the performance of tracking control systems significantly. Diversification of ILC applications to the above-mentioned problems is already starting to take place, as evidenced by the comprehensive lists of recent ILC papers compiled by Chen (1998). The classical formulation of the ILC design problem has been as follows: find an update mechanism for the input trajectory of a new cycle, based on the information from previous cycles, so that the output trajectory converges asymptotically to the desired reference trajectory. The first-order ILC algorithms update the input trajectory (defined over the same time interval) in the following way (Moore, 1993): equation(1) u_{k+1} = u_k + H e_k, where e_k = y_r − y_k. In the above, y_k and y_r denote the output and output reference trajectories, which can be either continuous or discrete signals defined over a finite time interval [0, T], and the subscript k represents the batch/cycle index. H, called the "learning filter", is an operator that maps the error signal e_k to the input update signal u_{k+1} − u_k. Within this somewhat restrictive problem setup, the ILC design is reduced to choosing the operator H.
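To make the first-order update of Eq. (1) concrete, here is a minimal numerical sketch on a hypothetical discrete-time SISO plant; the plant model, learning gain, and reference trajectory below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate(u, a=0.7, b=0.5):
    """Hypothetical plant y[t+1] = a*y[t] + b*u[t], zero initial condition."""
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

T = 20
y_r = np.sin(np.linspace(0.0, np.pi, T))  # reference trajectory over the cycle
u = np.zeros(T)                           # first-cycle input guess
gamma = 0.8                               # learning gain (H = (gamma/b) * one-step shift)

for k in range(200):                      # k is the batch/cycle index
    e = y_r - simulate(u)                 # tracking error of cycle k
    u[:-1] += (gamma / 0.5) * e[1:]       # Eq. (1): u_{k+1} = u_k + H e_k

final_err = np.max(np.abs(y_r - simulate(u)))
```

Since the plant has relative degree one, u[t] is corrected with the next-step error e[t+1]; the batch-to-batch error iteration is then a triangular contraction with rate |1 − gamma|, so the tracking error decays to zero over cycles.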
The prevalent approach thus far has been to assume a simple structure for the learning filter H and tune its parameters to achieve the desired learning properties. Examples of this type include D-type (Arimoto et al., 1984), PID-type (Bondi, Casalino & Gambardella, 1988), and their variants. As a straightforward extension of the first-order algorithms, higher-order algorithms have been proposed as well (Bien & Huh, 1989). This line of approaches, however, could yield only limited results for general multivariable systems. Model-based algorithms have also been proposed. However, most of the proposed algorithms were based on the notion of direct model inversion (Togai & Yamano, 1985; Oh, Bien & Suh, 1988; Lucibello, 1992; Moore, 1993; Lee, Bang & Chang, 1994b; Yamada, Watanabe, Tsuchiya & Kaneko, 1994), that is, H = G^{-1}, where G represents the input–output map of the process. Since G^{-1} would contain differentiator(s) (in the continuous-time case), the learning filter based on the model inverse becomes hypersensitive to high-frequency components in e_k. Since, in most process control applications, smooth manipulation of actuators is at least as important as precise control of outputs, these approaches cannot be used directly. Furthermore, the zero-tracking-error objective cannot be satisfied for general nonsquare MIMO processes. Since it is not uncommon for industrial batch processes to yield a nonsquare problem for which zero tracking error for all the output variables is impossible, a more general objective appropriate for nonsquare processes is needed. There are certain additional traits and requirements found in prototypical process control problems that motivate a more general (but perhaps more computationally intensive) approach. First, most process variables are subject to constraints set by physical or safety considerations. Hence, it is desirable to have algorithms that incorporate the constraint information explicitly into the calculation.
Second, the dynamics of almost all chemical processes are intrinsically nonlinear, and the nonlinearities become exposed when the processes are operated over a wide range of conditions, as in typical industrial batch operations. For this reason, it is necessary to derive ILC algorithms that can accommodate nonlinear system models, when available. Third, disturbances and noises are integral aspects of most process control problems and must be dealt with in a systematic fashion. Some disturbances, once they occur, tend to repeat themselves in subsequent batches, while others tend to be more specific to a particular batch. Most disturbances exhibit significant time correlation that should be exploited for efficient rejection. Finally, chemical processes allow a rather long interval between two adjacent batches, and sample times can be chosen relatively large in relation to the total cycle time. These traits should allow us to implement numerically more intensive algorithms, such as those based on mathematical programming techniques. Some of the aforementioned generalizations have already appeared in the literature. For example, to accommodate nonsquare MIMO systems, the zero-tracking-error requirement has been relaxed to "minimum possible error in the least-squares sense". This type of approach has been studied by Togai and Yamano (1985) and also by Moore (1993). For the purpose of reducing the noise sensitivity, Tao, Kosut and Aral (1994) proposed a discrete-time ILC algorithm based on the following least-squares objective with an input penalty term: equation(2) min_{u_{k+1}} J_{k+1} = e_{k+1}^T Q e_{k+1} + u_{k+1}^T R u_{k+1}. A similar objective has also been considered by Sogo and Adachi (1994), but in the continuous-time domain. These algorithms can accommodate nonsquare MIMO systems and mitigate the noise sensitivity through the input penalty term. However, by adding the quadratic penalty term on the inputs directly, offsets result; i.e., the algorithms fail to attain the minimum achievable error in the limit.
In addition, it is unclear how to best trade off the noise sensitivity against the speed of convergence and output offset using the input weight matrix. Recently, Amann, Owens and Rogers (1996) and Lee, Kim and Lee (1996) independently proposed to use the following objective: equation(3) min_{Δu_{k+1}} J_{k+1} = e_{k+1}^T Q e_{k+1} + Δu_{k+1}^T R Δu_{k+1}, where Δu_{k+1} = u_{k+1} − u_k. Because the input change is penalized instead of the input itself, the algorithm has an integral action (with respect to the batch index) and achieves the minimum achievable error in the limit. In the unconstrained, deterministic setting, Amann et al. (1996) derived a noncausal input updating law equation(4) u_{k+1} = u_k + R^{-1} G^T Q e_{k+1} from the stationarity condition ∂J_{k+1}/∂Δu_{k+1} = 0, while Lee et al. (1996) obtained equation(5) u_{k+1} = u_k + (G^T Q G + R)^{-1} G^T Q e_k, which is indeed a rephrasing of (4) in a pure learning form. Amann et al. (1996) transformed (4) into a causal form by borrowing the idea from the solution of the finite-time quadratic optimal tracking problem. The resulting algorithm is a combination of a state feedback law and a feedforward signal based on the error signal of the previous cycle. In addition to a significant reduction in the computational load, the feedback implementation gives some robustness to disturbances and model errors. However, their algorithm is developed entirely in a deterministic setting (without direct reference to disturbances) and hence deserves further investigation. In a similar spirit, Lee and Lee (1997) also showed that the Q-ILC algorithm can be implemented as an output feedback algorithm, thus improving the robustness. Their real-time algorithm can be viewed as a combination of the popular model predictive control (Lee, Morari & Garcia, 1994a) and iterative learning control. The objective of this paper is to provide a more general and comprehensive framework for quadratic-criterion-based ILC that is capable of addressing all the issues mentioned above as important for process control applications.
We focus on the iterative learning control implementation rather than the feedback implementation, keeping in mind the fact that the conversion to the latter type of implementation can always be done in a straightforward manner. We first introduce an error transition model that represents the transition of tracking error trajectories between two adjacent batches. We also discuss how the effects of disturbances of various types can be integrated into the transition model. Based on this model, one-batch-ahead quadratic optimal control algorithms are derived for both the unconstrained and constrained cases. In addition, a robust ILC algorithm that minimizes the worst-case tracking error for the next batch is proposed. For each algorithm, relevant mathematical properties such as the convergence, robustness, and noise sensitivity are investigated. The rest of the paper is organized as follows: In Section 2, the static gain representation of the dynamic system is introduced and is converted into an error transition model. The ILC design objective is defined based on the model description. In Section 3, the quadratic-criterion-based iterative learning control (Q-ILC) algorithm is derived for the unconstrained case, and the analysis of the relevant properties such as the convergence, noise sensitivity, and robustness follows. A real-time output feedback implementation of the algorithm is discussed and a comprehensive comparison with Amann et al.'s state-feedback/error-feedforward algorithm is made. The constrained Q-ILC algorithm is presented with the convergence proof in Section 4. In Section 5, the robust Q-ILC algorithm is proposed with the convergence proof. Numerical examples are given in Section 6 and conclusions are drawn in Section 7.
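As a rough illustration of the constrained case mentioned above (treated in Section 4 of the paper), the sketch below runs a batchwise quadratic-criterion update in which each cycle's input trajectory is the solution of a box-constrained quadratic program. For self-containment, the QP is solved with plain projected gradient descent rather than a dedicated solver, and the plant, input bounds, and weights are hypothetical assumptions.

```python
import numpy as np

T = 8
# Hypothetical lifted model y = G u, lower-triangular impulse response.
G = np.tril(0.5 * 0.7 ** np.subtract.outer(np.arange(T), np.arange(T)))
y_r = np.ones(T)
Q, R = np.eye(T), 0.05 * np.eye(T)
u_lo, u_hi = 0.0, 1.5                 # hard input constraints

Hmat = G.T @ Q @ G + R
step = 1.0 / (2 * np.linalg.norm(Hmat, 2))   # step from the gradient's Lipschitz constant

u = np.zeros(T)
for k in range(100):                  # batch index
    u_prev = u.copy()
    # Inner QP: min (y_r - G u)'Q(y_r - G u) + (u - u_prev)'R(u - u_prev), u in box,
    # solved here by projected gradient descent.
    for _ in range(2000):
        grad = 2 * (Hmat @ u - G.T @ Q @ y_r - R @ u_prev)
        u = np.clip(u - step * grad, u_lo, u_hi)

e = y_r - G @ u
```

With these numbers, the unconstrained optimum would require u[0] = 2, so the bound u[0] <= 1.5 becomes active; the iteration settles at the constrained minimum-error solution, where a residual error remains at the first sample but the rest of the trajectory is tracked exactly.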
Conclusion (English)
In this paper, it was argued that the existing ILC algorithms, despite their successes in controlling mechanical systems, are not well-suited for process control applications. Motivated by this, we presented new model-based iterative learning control algorithms that were tailored specifically for this type of application. The algorithms were based on quadratic performance criteria and were designed to consider the issues relevant to process control, such as disturbances, noises, nonlinearities, constraints, and model errors. We proved the convergence of the error signal under the proposed algorithms. Investigation of other relevant properties such as the noise sensitivity and robustness along with some numerical studies indicated that these algorithms should perform as intended but at the expense of added computational requirements.