Download English ISI Article No. 25614
Article Title

Min–max control using parametric approximate dynamic programming
Article Code: 25614
Publication Year: 2010
English Article Pages: 8 (PDF)
Source

Publisher: Elsevier - ScienceDirect

Journal: Control Engineering Practice, Volume 18, Issue 2, February 2010, Pages 190–197

Keywords

Worst-case formulation, Uncertain linear systems, Robust control, Optimal control

Abstract

This study presents a computationally efficient approximate dynamic programming approach to control uncertain linear systems based on a min–max control formulation. The optimal cost-to-go function, which prescribes an optimal control policy, is estimated using piecewise parametric quadratic approximation. The approach requires simulation or operational data only at the bounds of additive disturbances or polyhedral uncertain parameters. This strategy significantly reduces the computational burden associated with dynamic programming and is not limited to a particular form of performance criterion as in previous approaches.
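
The preview does not give the parametrization explicitly, but a natural reading of "piecewise parametric quadratic approximation" (in assumed notation, not quoted from the paper) is a cost-to-go estimate of the form

\[
\tilde{J}(x) \;=\; x^\top P_i\, x + q_i^\top x + r_i, \qquad x \in \mathcal{X}_i,
\]

where the regions \(\mathcal{X}_1, \dots, \mathcal{X}_M\) partition the state space and the parameters \((P_i, q_i, r_i)\) are fitted from simulation or operational data generated at the bounds of the uncertainty set.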

Introduction

Classical model predictive control (MPC) is susceptible to three main difficulties: obtaining an accurate model, ensuring robustness/stability with respect to uncertainties, and solving a complex online optimization problem such as a nonlinear program (Morari & Lee, 1999). In particular, robust stability is a major concern in industrial MPC applications and is mostly addressed through extensive closed-loop simulation prior to implementation (Qin & Badgwell, 2003). This method is expensive and time-consuming because it requires simulation tests for all possible combinations of important dynamics, based on the control engineer's knowledge of the process (Badgwell, 1997). Robust MPC, on the other hand, is an emerging alternative that does not require an accurate deterministic model. The underlying concept is to construct a linear model with uncertain parameters or additive stochastic disturbances that describes all possible processes, and to utilize the uncertainty information within the receding-horizon optimization framework. Robust MPC minimizes the worst-case performance (i.e., the maximum cost-to-go) over the possible parameter values or stochastic disturbances within their deterministic bounds, while respecting constraints for all possible scenarios (Campo and Morari, 1987 and Witsenhausen, 1968). The key advantage of robust MPC over classical MPC is thus its indifference to the accuracy of the model in the presence of uncertainties.

Lee and Yu (1997) summarize two general min–max formulations for solving the robust control problem. One is an open-loop formulation, in which uncertainty and feedback at future time steps are ignored. The second is a closed-loop min–max formulation, in which a dynamic program (DP) is solved. The open-loop formulation is the essence of most min–max MPC techniques, but it may lead to infeasibility, conservative closed-loop performance, and instability. The closed-loop formulation, on the other hand, provides a less conservative solution and robust stability under an infinite prediction horizon (Bemporad et al., 2003, Lee and Yu, 1997 and Scokaert and Mayne, 1998). Nevertheless, the excessive computational burden associated with solving the DP limits its application to small systems (Bertsekas, 2005).

This class of multi-stage min–max optimization problems has been addressed by several approaches. A scenario-tree formulation for linear systems with additive disturbances treats a single optimization problem for one initial state only (Scokaert & Mayne, 1998). Linear matrix inequality (LMI) techniques efficiently compute the worst-case performance (Kothare et al., 1996 and Wan and Kothare, 2003); these formulations compute control laws based on an upper bound of a Lyapunov function in the worst case to guarantee the stability of the feedback control policy, so the resulting control law can be very conservative. An approximate solution to min–max MPC has also been suggested, solving the open-loop min–max formulation efficiently with quadratic programming derived from an upper bound of the worst-case cost; this approach was validated on an open-loop stable, pilot-scale exothermic reactor (Gruber, Ramirez, Alamo, Bordons, & Camacho, 2009). Most of the computational burden in solving the DP lies in the off-line procedure, where the optimal cost-to-go function is calculated.
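
To make the contrast between the two formulations concrete (in standard notation, assumed here rather than quoted from the paper), consider dynamics \(x_{k+1} = A x_k + B u_k + w_k\) with \(w_k \in \mathcal{W}\). The open-loop formulation optimizes a fixed input sequence against the worst disturbance realization,

\[
\min_{u_0,\ldots,u_{N-1}} \;\; \max_{w_0,\ldots,w_{N-1} \in \mathcal{W}} \;\; \sum_{k=0}^{N-1} \ell(x_k, u_k) + \phi(x_N),
\]

whereas the closed-loop formulation solves the min–max dynamic program stage by stage, so each input may depend on the realized state:

\[
J_k(x) \;=\; \min_{u} \; \max_{w \in \mathcal{W}} \; \bigl[\, \ell(x, u) + J_{k+1}(A x + B u + w) \,\bigr].
\]

Because the recursion must be solved over a continuous state space at every stage, the DP quickly becomes intractable, which is the computational burden discussed above.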
The resulting cost-to-go function can be used to find an optimal policy mapping a state to a control action, which allows much simpler online implementation than the family of receding-horizon techniques. Motivated by this, multi-parametric programming (Borrelli et al., 2003 and Tøndel et al., 2003) and DP-based approaches (Björnberg and Diehl, 2006 and Diehl and Björnberg, 2004) have been proposed. These approaches, however, are limited to a particular form of performance criterion, e.g., a linear cost term so that linear programming can be used. Recent advances in the field of approximate dynamic programming (ADP) have shown the potential of solving the closed-loop formulation via ADP (Lee & Lee, 2006). ADP attempts to circumvent the off-line computational burden, referred to as the curse of dimensionality, by approximately computing the optimal cost-to-go values for potentially important states only. Works on ADP include instance-based approximation (Lee, Kaisare, & Lee, 2006), linear programming (de Farias & Van Roy, 2003), and polyhedral approximation (Björnberg & Diehl, 2006). Two preconditions apply to the development of an effective approximation: a choice of approximator that closely matches the desired cost-to-go function, and an efficient update algorithm (de Farias & Van Roy, 2003).

This paper presents a tailored ADP approach for solving min–max control of linear systems with bounded parameters or additive disturbances. The approach is based on simulation at the bounds of the uncertain parameters and on piecewise quadratic parametric approximation with an online gradient-descent update (sketched below). The key advantages of the method are that it accommodates a general class of high-dimensional linear systems and requires only a simplified description of the uncertainties. Three numerical examples, including control of a high-purity distillation column, illustrate the efficacy of the proposed method.
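
A minimal sketch of this scheme in Python with NumPy follows. It is an illustration under stated assumptions, not the paper's implementation: a single quadratic x'Px stands in for the piecewise quadratic, the system matrices and tuning constants are invented, the control is searched over a crude grid, and the worst case is evaluated only at the vertices of the disturbance box, following the strategy the paper describes.

# Sketch (not the paper's code): quadratic cost-to-go J(x) = x'Px updated by
# gradient descent toward min-max Bellman targets, with the worst-case
# disturbance searched only over the vertices of its bounding box.
import itertools
import numpy as np

# Uncertain linear system x+ = A x + B u + w with |w_i| <= w_max (assumed example)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)          # stage cost x'Qx + u'Ru
w_max = 0.05
W_vertices = [np.array(v) for v in itertools.product([-w_max, w_max], repeat=2)]

P = np.eye(2)                               # parameters of J(x) = x'Px
alpha, gamma = 0.05, 0.95                   # learning rate, discount factor

def bellman_target(x, u_grid):
    """min over u of [stage cost + gamma * max over disturbance vertices of J(x+)]."""
    best = np.inf
    for u in u_grid:
        u = np.atleast_1d(u)
        stage = x @ Q @ x + u @ R @ u
        xnext = A @ x + B @ u
        worst = max((xnext + w) @ P @ (xnext + w) for w in W_vertices)
        best = min(best, stage + gamma * worst)
    return best

u_grid = np.linspace(-1.0, 1.0, 21)         # crude grid search over scalar input
rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0, size=2)      # sampled state
    err = x @ P @ x - bellman_target(x, u_grid)   # Bellman approximation error
    P -= alpha * err * np.outer(x, x)       # gradient step on 0.5 * err**2
    P = 0.5 * (P + P.T)                     # keep P symmetric

print(np.round(P, 3))

The vertex enumeration is the point of the example: for a box disturbance set, the inner maximization needs only a handful of evaluations per candidate input rather than an optimization over the whole set. In a faithful version, one (P_i, q_i, r_i) triple would be maintained per cluster of sampled states, matching the piecewise parametrization described in the abstract.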

Conclusion

This work presented a new approach to robust approximate dynamic programming based on the closed-loop min–max formulation. The proposed approach uses piecewise parametric approximation together with online updates, which makes it suitable for systems with continuous state spaces and, potentially, for high-dimensional systems. The method exploits the fact that, for a linear system, the worst-case scenario occurs at the bounds of the uncertainties. The result is a significant decrease in the off-line computational burden, while the online computation reduces to a single-stage optimal control problem, as sketched below. The case studies show convergence and significant improvement for simple constrained uncertain linear systems. One issue that may affect optimality is the discontinuities at the boundaries of the clusters (see Fig. 4), which may lead to sub-optimality of the converged cost function. Although the converged function may not represent the true optimal cost-to-go without exhaustive sampling of the entire state space, the proposed approach provides stable learning via local quadratic approximation and exploits the principle of optimality to improve robustness and performance over any existing initial policy.
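
In that single-stage form (again in assumed notation), the online controller only needs to solve

\[
u^\star(x) \;=\; \arg\min_{u} \; \max_{w \in \operatorname{vert}(\mathcal{W})} \; \bigl[\, \ell(x, u) + \tilde{J}(A x + B u + w) \,\bigr],
\]

using the converged approximation \(\tilde{J}\) and only the vertices of the uncertainty set, which is where the worst case occurs for a linear system.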