Stochastic subset optimization for reliability optimization and sensitivity analysis in system design
| Article code | Publication year | English article pages |
|---|---|---|
| 26037 | 2009 | 14 pages (PDF) |
Publisher : Elsevier - Science Direct
Journal : Computers & Structures, Volume 87, Issues 5–6, March 2009, Pages 318–331
Abstract
Design problems that involve the system reliability as the objective function are discussed. In order to appropriately address the challenges of such applications when complex system models are involved, stochastic simulation is selected to evaluate the probability of failure. An innovative algorithm, called Stochastic Subset Optimization (SSO), is discussed for performing the reliability optimization as well as an efficient sensitivity analysis. The basic principle in SSO is the formulation of an augmented problem where the design variables are artificially considered as uncertain. Stochastic simulation techniques are implemented in order to simulate samples of these variables that lead to system failure. The information that these samples provide is then exploited in an iterative approach in SSO to identify a smaller subset of the design space that consists of near-optimal design variables and that has high plausibility of containing the optimal design. At the same time, a sensitivity analysis for the influence of both the design variables and the uncertain model parameters is established.
Introduction
In engineering design, the knowledge about a planned system is never complete. For an effective design, all uncertainties associated with future excitation events and modeling of the system should be explicitly accounted for. A probability logic approach [1] provides a rational and consistent framework for quantifying both excitation and system modeling uncertainties, and it leads to a robust stochastic system design framework [2]. In this setting, reliability-based design optimization (RBDO), i.e. design considering reliability measures in the objective function or the design constraints, has emerged as one of the standard tools for robust and cost-effective design of engineering systems (e.g. [3], [4], [5], [6] and [7]). The concept of robust reliability [8] is used to include model uncertainty when quantifying the stochastic performance of the engineering system under design. This performance is therefore characterized by the robust probability of failure, which provides a measure of the plausibility of the occurrence of unacceptable behavior of the system ("failure"), based on the available information. To formalize these ideas, consider a system that involves some controllable parameters that define its design, referred to as design variables, φ = [φ1 φ2 ⋯ φnφ] ∈ Φ ⊂ R^nφ, where Φ denotes the bounded admissible design space with volume VΦ. Also consider a model class that is chosen to represent a system design and its future excitation, where each model in the class is specified by an nθ-dimensional vector θ = [θ1 θ2 ⋯ θnθ] lying in Θ ⊂ R^nθ, the set of possible values for the model parameters. Vector θ consists of the parameters for the models of both the system, θs, and the excitation, θq. A PDF (probability density function) p(θ|φ), which incorporates available knowledge about the system, is assigned to these parameters.
This PDF is interpreted in probability logic as a measure of the plausibility of each of the possible values of θ based on the available information [1]. Non-parametric modeling uncertainty may be addressed by introducing a model prediction error, i.e. an error between the response of the actual system and the response of the model adopted for it. This prediction error may be modeled probabilistically [9] by using the principle of maximum information entropy [1]; the uncertain parameters in its probability model can then be augmented into θ to form an uncertain parameter vector composed of the system and excitation model parameters as well as the model prediction-error parameters. The robust failure probability for a given choice of the design variables is then [8]:

(1)  PF(φ) ≜ P(F|φ) = ∫Θ P(F|φ,θ) p(θ|φ) dθ = ∫Θ IF(φ,θ) p(θ|φ) dθ

where IF(φ,θ) is the indicator function for failure F, which equals one if the system that corresponds to (φ,θ) fails and zero if it does not. Although RBDO problems are most often formulated by adopting deterministic objective functions and reliability constraints (e.g. [10] and [11]), they can also be defined with the system reliability as an objective function (e.g. [12]). In this study we discuss problems of the latter type. Thus, we are interested in the optimization problem:

(2)  min PF(φ)  given fc(φ) ≥ 0

where fc(φ) is a vector of deterministic constraints, for example constraints related to structural cost. An equivalent formulation of this problem is

(3)  φ* = argmin φ∈Φ PF(φ)

where the constraints are taken into account by appropriate definition of the admissible design space Φ.
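To make Eq. (1) concrete, here is a minimal one-dimensional sketch using a hypothetical model (not the paper's structural example): θ ~ N(0, 1) and "failure" occurs when θ exceeds the design variable φ, so IF(φ,θ) = 1{θ > φ}. Direct quadrature works here, but its cost grows exponentially with dim(θ), which is why the paper turns to stochastic simulation when dim(θ) > 3.

```python
import numpy as np
from math import erfc, sqrt

def p_failure_quadrature(phi, n_points=20_001):
    """Numerically evaluate Eq. (1) for the 1-D toy model above."""
    theta = np.linspace(-10.0, 10.0, n_points)        # quadrature grid over Theta
    p = np.exp(-theta**2 / 2) / np.sqrt(2 * np.pi)    # p(theta|phi): standard normal
    i_f = (theta > phi).astype(float)                 # indicator I_F(phi, theta)
    return float(np.sum(i_f * p) * (theta[1] - theta[0]))

approx = p_failure_quadrature(2.0)
exact = 0.5 * erfc(2.0 / sqrt(2.0))   # closed-form P(theta > 2) for comparison
```

For this toy problem the quadrature estimate matches the closed-form Gaussian tail probability to several digits.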
A major challenge in performing the optimization in (3) is that the objective function in (1) can rarely be evaluated analytically, or even approximated efficiently by direct numerical quadrature if the dimension of θ is more than 3. Therefore, many specialized approaches have been proposed for approximating the failure probability in reliability optimizations. These approaches include, for example, use of some proxy for the failure probability, e.g. a reliability index obtained through first-order or second-order analysis (e.g. [4]), and response surface approximations to the limit state function defining the model's failure (e.g. [7]). These approximations may work satisfactorily under certain conditions, but they are not guaranteed to converge to the solution of the original optimization problem. Additionally, these specialized approaches might impose restrictions on the degree of complexity of the models that can be considered. An alternative design methodology, appropriate for applications that involve non-linear models (since no constraints on the model complexity exist) and/or a large number of uncertain model parameters (exploiting recent algorithmic developments [13] and [14]), is to evaluate the probability of failure through stochastic simulation. The basic idea is to estimate the objective function in (1) by using a finite number N of random samples of θ drawn from p(θ|φ):

(4)  P̂F(φ, ΩN) = (1/N) Σi=1..N IF(φ, θi)

where ΩN = {θ1, … , θN} is the sample set of the parameters, with vector θi denoting the ith sample. The estimate of PF(φ) in (4) involves an error eN(ΩN, φ), so (3) is approximately transformed to the stochastic optimization problem:

(5)  φN* = argmin φ∈Φ P̂F(φ, ΩN)

If the stochastic simulation procedure is a consistent one, then as N → ∞, P̂F(φ, ΩN) → PF(φ) and φN* → φ* [15].
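The Monte Carlo estimator in Eq. (4) can be sketched in a few lines. The limit state below is a hypothetical toy (a linear function of two standard-normal parameters), not the paper's base-isolation model:

```python
import numpy as np

def estimate_failure_probability(phi, n_samples=100_000, rng=None):
    """Monte Carlo estimator of Eq. (4): P_F(phi) ~ (1/N) sum_i I_F(phi, theta_i).

    Toy model: theta ~ N(0, I_2); failure occurs when the (hypothetical)
    limit state g = phi[0] + phi[1] - theta[0] - theta[1] is negative.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    theta = rng.standard_normal((n_samples, 2))      # samples from p(theta|phi)
    g = phi[0] + phi[1] - theta[:, 0] - theta[:, 1]  # limit-state function
    indicator = g < 0                                # I_F = 1 where the system fails
    return indicator.mean()

# For phi = (1, 1) the exact value is P(Z > sqrt(2)) ~ 0.079, Z standard normal
p_hat = estimate_failure_probability(np.array([1.0, 1.0]))
```

The estimator is unbiased but noisy: its standard error decays only as 1/sqrt(N), which is the source of the estimation error eN(ΩN, φ) discussed next.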
The existence of the estimation error eN(ΩN, φ), which may be considered as noise in the objective function, contrasts with classical deterministic optimization, where it is assumed that one has perfect information. Another source of difficulty is the high computational cost associated with the estimation in (4). Even though relatively efficient stochastic simulation algorithms have recently been developed for calculating failure probabilities for complex systems [13] and [14], each evaluation still requires a substantial computational effort, particularly for dynamic reliability problems. Finally, for complex problems it is difficult, or impractical, to develop an analytical relationship between the design variables and the objective function. Though advanced automatic differentiation tools have been developed recently for sensitivity analysis in reliability optimizations [16], for some types of applications, for example when the system simulation is a "black box" for the designer, numerical differentiation may be the only possibility for obtaining information about the gradient vector [15]. This further increases the complexity of these optimization problems. Many numerical techniques and optimization algorithms have been developed to address such challenges in stochastic optimization problems (e.g. [15], [17] and [18]). Such approaches may involve one or more of the following strategies: (i) use of common random numbers, i.e. using the same sample sets ΩN1 = ΩN2, to reduce the relative importance of the estimation error when comparing two design choices that are "close" in the design space; (ii) application of exterior sampling techniques, which adopt the same stream of random numbers throughout all iterations in the optimization process, thus transforming the stochastic problem (5) into a deterministic one; (iii) simultaneous perturbation stochastic search techniques, which approximate at each iteration the gradient vector by performing only two evaluations of the objective function in a random search direction; and (iv) gradient-free algorithms (such as evolutionary algorithms, or objective function approximation methods), which do not require derivative information. Taflanidis and Beck [2] provide a detailed discussion of algorithms appropriate for stochastic optimization problems like (5). All of these algorithms involve, though, a significant computational cost, especially for design problems for which little information is available a priori about the sensitivity of the objective function to the design variables; such information, if available, is valuable for selecting the parameters (fine-tuning) of the chosen optimization algorithm. A novel algorithm, called Stochastic Subset Optimization (SSO), is discussed in this paper as an efficient approach to optimization problems involving continuous design variables and the system reliability as the objective function, as in (3). Contrary to other simulation-based stochastic optimization algorithms, SSO does not evaluate the objective function P̂F(φ) for specific values of the design variables but rather performs a global sensitivity analysis for them. This is established by formulating an augmented stochastic problem where the design variables are artificially considered as uncertain. Samples of the pair [φ, θ] that lead to system failure are then simulated.
Exploiting the information in these samples, a subset of the design space Φ is identified that has the highest likelihood, within a class of admissible subsets, of containing the optimal design variables. To successively reduce the size of these subsets, an adaptive iterative approach is used. Recently-developed advanced stochastic simulation techniques are implemented in the iterative approach to increase the efficiency of the sampling process. Finally, SSO efficiently converges to a smaller subset of near-optimal design variables while it simultaneously explores the sensitivity of P(F|φ) to φ and allows a similar sensitivity analysis of the system performance to the uncertain model parameters θ. The SSO algorithm was initially proposed in [19] for optimal reliability problems and was later extended in [2] to general robust stochastic design applications that involve any utility or loss function as the objective function to quantify the performance of the system. The current work reviews SSO for optimal reliability design and then focuses on computational aspects involved in: (i) the identification of the optimal subset within some class of admissible subsets in the design space, and (ii) efficient sampling of the design variables when the admissible subsets correspond to hyper-ellipses. Additionally, this paper discusses in detail how SSO can be used to perform sensitivity analyses with respect to the uncertain parameters.
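The iterative subset-identification idea can be sketched in one dimension. Everything below (the limit state, interval-shaped admissible subsets, the shrink factor of one half) is a hypothetical toy chosen for clarity, not the paper's implementation: the design variable φ is treated as uniform on the current design space, failure samples of φ are simulated, and the candidate subset containing the fewest failure samples relative to its volume is kept and becomes the next design space.

```python
import numpy as np

rng = np.random.default_rng(2)

def failure(phi, theta):
    # hypothetical limit state: failure when theta exceeds the design "capacity"
    return theta > 2.0 + np.sin(phi)

lo, hi = 0.0, 6.0                      # current design space Phi
for _ in range(3):                     # a few SSO-style iterations
    phi = rng.uniform(lo, hi, 200_000)             # phi artificially uncertain
    theta = rng.standard_normal(200_000)           # model parameter samples
    phi_fail = phi[failure(phi, theta)]            # failure samples of phi
    # candidate intervals of half the current length; keep the one with the
    # smallest (fraction of failure samples) / (fraction of volume) ratio
    starts = np.linspace(lo, (lo + hi) / 2, 50)
    width = (hi - lo) / 2
    ratios = [((phi_fail >= s) & (phi_fail < s + width)).mean() / 0.5
              for s in starts]
    s_best = starts[int(np.argmin(ratios))]
    lo, hi = s_best, s_best + width                # shrink Phi to the best subset

# The surviving interval should bracket the minimizer of P(F|phi), which for
# this toy model lies near phi = pi/2 (where the capacity 2 + sin(phi) peaks).
```

Note that no evaluation of P̂F(φ) at individual designs is ever performed; only failure samples of φ are used, which is the defining feature of the SSO approach described above.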
Conclusion
Reliability-based design optimization problems that involve the system reliability as the objective function were discussed. Stochastic simulation was considered for evaluation of the failure probability. This approach is appropriate for complex design applications that might involve non-linear models, complex failure modes and a large number of uncertain model parameters, but it entails a significant computational cost for performing the associated design optimization. An innovative approach, called Stochastic Subset Optimization (SSO), was discussed for efficiently exploring the sensitivity of the failure probability to the design variables and the model parameters and for efficiently identifying a subset of near-optimal design variables in the design space. The basic principle in SSO is the formulation of an augmented problem where the design variables are artificially considered as uncertain. Samples of the model parameters and the design variables that lead to system failure are then simulated. Using the information contained in these samples, an iterative approach was described for adaptively identifying a subset of the original design space that at convergence is characterized by small sensitivity with respect to all design variables and has the highest likelihood (within some class of admissible subsets) of including the optimal design. Topics related to the statistical properties of the iterative approach, advanced stochastic simulation techniques for efficiently obtaining the desired samples, and computational techniques for identifying the optimal subsets were addressed. An important topic that influences the efficiency of SSO is the selection of the geometrical shape of the admissible subsets. The particular choice should allow for efficient description of the correlation of the design variables with respect to the system reliability (i.e.
it should fit the contours of the objective function near the optimal solution), but at the same time it should allow for a reliable solution to the optimization sub-problem of identifying the optimal subsets within the class of admissible subsets. The latter means that shapes that are too complex should be avoided, because the associated sub-problem becomes too challenging and its solution less robust. Hyper-ellipses were suggested for this purpose in this study. The efficiency of the sensitivity analysis of SSO was then discussed. If high accuracy for the optimal design is required, SSO can be followed by some other stochastic optimization algorithm. Such a framework for efficient reliability optimization was briefly outlined using information available from SSO. An example was presented that showed the efficiency of the optimization and the sensitivity analysis. This example discussed the design of a base-isolation system for a three-story structure. The optimization of the reliability of the system considering future near-fault ground motions was adopted as the design objective. A realistic stochastic model was described for representing such ground motions. The structural performance was evaluated by nonlinear simulation that incorporates all important characteristics of the behavior of the structural system and all available information about the structural model and expected future earthquakes. It was shown in the context of this example that SSO can efficiently identify a set of near-optimal designs that contains the optimal design variables. Additionally, SSO was demonstrated to be able to describe the correlation, or interaction, between the design variables in terms of the contours of the objective function. Selecting hyper-ellipses as the geometrical shape of the admissible subsets was shown to be superior to hyper-rectangles in terms of the quality of the identification in SSO.
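Evaluating a candidate hyper-ellipse subset I = {φ : (φ − c)ᵀ A (φ − c) ≤ 1} requires exactly two quantities: the fraction of failure samples falling inside I, and the volume of I relative to the design space. The sketch below illustrates this with synthetic 2-D failure samples; the center c, matrix A, sample distribution and design-space volume are all hypothetical choices for illustration, not taken from the paper's example.

```python
import numpy as np
from math import pi, gamma

def ellipse_ratio(c, A, phi_fail, v_phi):
    """(fraction of failure samples in I) / (volume of I / volume of Phi)."""
    d = phi_fail - c
    inside = np.einsum('ij,jk,ik->i', d, A, d) <= 1.0   # quadratic form per sample
    n = len(c)
    # volume of the ellipsoid {x : (x-c)^T A (x-c) <= 1} in n dimensions
    vol_i = pi ** (n / 2) / gamma(n / 2 + 1) / np.sqrt(np.linalg.det(A))
    return inside.mean() / (vol_i / v_phi)

rng = np.random.default_rng(3)
# synthetic failure samples of phi, clustered around (2, 2)
phi_fail = rng.normal([2.0, 2.0], 0.8, size=(10_000, 2))
A = np.diag([4.0, 4.0])                     # here a circle of radius 0.5
v_phi = 16.0                                # hypothetical Phi = [0, 4] x [0, 4]

r_far = ellipse_ratio(np.array([0.5, 0.5]), A, phi_fail, v_phi)   # sparse region
r_near = ellipse_ratio(np.array([2.0, 2.0]), A, phi_fail, v_phi)  # dense region
```

A ratio well below one (as for `r_far`) flags a region where failure samples are sparse, i.e. where the average failure probability is low; this is the quantity minimized over candidate subsets in each SSO iteration.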
The sensitivity analysis established for the model parameters provided important insight into their influence on the system performance. The stochastic excitation characteristics, in particular the moment magnitude, epicentral distance and peak ground velocity, were shown to have the greatest importance for the failure of the base-isolated structure. This was especially true as SSO converged to robust design configurations. All this information is extremely useful, if used appropriately, in the RBDO setting.
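The sample-based sensitivity idea for the model parameters can be sketched as follows: parameters whose failure-conditioned distribution p(θi|F) differs most from the prior p(θi) influence failure most. The limit state and the crude mean-shift measure below are hypothetical toys for illustration, not the paper's structural model or its exact sensitivity measure:

```python
import numpy as np

rng = np.random.default_rng(4)
theta = rng.standard_normal((200_000, 3))        # prior samples of 3 parameters
# hypothetical limit state dominated by theta[:, 0], weakly affected by theta[:, 1]
fail = 2.0 * theta[:, 0] + 0.2 * theta[:, 1] > 3.0
theta_f = theta[fail]                            # failure samples of theta

# crude sensitivity measure: shift of the failure-conditioned mean of each
# parameter relative to its prior mean (prior std is 1 here)
shift = np.abs(theta_f.mean(axis=0) - theta.mean(axis=0))
ranking = np.argsort(shift)[::-1]                # most influential parameter first
```

In this toy setting the ranking correctly identifies the dominant parameter; in the paper's example the analogous comparison singled out moment magnitude, epicentral distance and peak ground velocity as the most influential excitation characteristics.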