Download ISI English Article No. 27221

Title
Meta-model-based importance sampling for reliability sensitivity analysis
Article Code: 27221
Publication Year: 2014
Length: 10 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Structural Safety, Volume 49, July 2014, Pages 27–36

Keywords
Kriging, Structural reliability, Meta-model-based importance sampling, Sensitivity analysis

Abstract

Reliability sensitivity analysis aims at studying the influence of the parameters of the probabilistic model on the probability of failure of a given system. Such an influence may either be quantified over a given range of values of the parameters of interest by means of a parametric analysis, or only locally by means of partial derivatives. This paper is concerned with the latter approach when the limit-state function involves the output of an expensive-to-evaluate computational model. In order to reduce the computational cost, it is proposed to compute the failure probability by means of the recently proposed meta-model-based importance sampling method. This method resorts to the adaptive construction of a Kriging meta-model which emulates the limit-state function. Then, instead of using this meta-model as a surrogate for computing the probability of failure, its probabilistic nature is used in order to build a quasi-optimal instrumental density function for accurately computing the actual failure probability through importance sampling. The proposed estimator of the failure probability is recast as a product of two terms. The augmented failure probability is estimated using the emulator only, while the correction factor is estimated using both the actual limit-state function and its emulator in order to quantify the substitution error. This estimator is then differentiated by means of the score function approach, which enables the estimation of the gradient of the failure probability without any additional call to the limit-state function (or to its Kriging emulator). The approach is validated on three structural reliability examples.
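In symbols, the product decomposition described above can be sketched as follows. This follows the companion Meta-IS papers [18] and [19] rather than this abstract, and the notation (the probabilistic classification function $\pi$, the augmented failure probability $p_{f\varepsilon}$, the correction factor $\alpha_{\mathrm{corr}}$ and the instrumental density $h$) is taken from those references. With $\mu_{\hat g}$ and $\sigma_{\hat g}$ denoting the Kriging mean and standard deviation:

$$\pi(\mathbf{x}) = \Phi\!\left(\frac{-\mu_{\hat g}(\mathbf{x})}{\sigma_{\hat g}(\mathbf{x})}\right), \qquad h(\mathbf{x}) = \frac{\pi(\mathbf{x})\, f_{\mathbf{X}}(\mathbf{x})}{p_{f\varepsilon}},$$

so that the failure probability factorizes as

$$p_f = \underbrace{\mathbb{E}_{\mathbf{X}}\!\left[\pi(\mathbf{X})\right]}_{p_{f\varepsilon}\ \text{(emulator only)}} \times \underbrace{\mathbb{E}_{h}\!\left[\frac{\mathbf{1}_{\mathcal{F}}(\mathbf{X})}{\pi(\mathbf{X})}\right]}_{\alpha_{\mathrm{corr}}\ \text{(correction)}}.$$

The identity is immediate: $\mathbb{E}_h[\mathbf{1}_{\mathcal{F}}/\pi] = p_f / p_{f\varepsilon}$ by definition of $h$, so the product returns $p_f$.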

Introduction

Modern engineering has to cope with uncertainty at the various stages of the design, manufacturing and operation of systems and structures. Such uncertainty arises either from the observed scattering of the environmental conditions in which products and structures evolve, or from a lack of knowledge that results in the formulation of hopefully conservative assumptions. Whatever their source, aleatory (observed) and epistemic (reducible) uncertainties can be dealt with in the unified framework of probabilistic methods for uncertainty quantification and risk-based engineering. In particular, reliability analysis is the discipline which aims at quantifying the level of safety of a system in terms of a probability of failure. From now on, it is assumed that the uncertain parameters of the problem at hand are modeled by a random vector X whose joint probability distribution is explicitly known and depends on a number of design parameters grouped in the vector d. In practice these design parameters are considered as mean values or, more generally, characteristic values of the random variables gathered in X. This assumption corresponds to the common situation where d gathers “ideal” dimensions whereas the randomness in X models the aleatory uncertainty in the manufacturing process due to tolerancing. It is also assumed that there exists a deterministic computational model $\mathcal{M}$ which enables the assessment of the system’s performance through a so-called limit-state function $g$. According to this setup, the failure probability is defined by the following integral:

$$p_f(\mathbf{d}) = \mathbb{P}\!\left[g(\mathbf{X}, \mathcal{M}(\mathbf{X})) \leq 0 \mid \mathbf{d}\right] = \int_{\mathcal{F}} f_{\mathbf{X}}(\mathbf{x} \mid \mathbf{d})\, \mathrm{d}\mathbf{x}, \quad (1)$$

where $\mathcal{F} = \{\mathbf{x} \in \mathbb{X} : g(\mathbf{x}, \mathcal{M}(\mathbf{x})) \leq 0\}$ is the failure domain and $f_{\mathbf{X}}$ is the joint probability density function of the random vector X. The dependence of $g$ on the output of $\mathcal{M}$ is dropped from now on for the sake of clarity in the notation, but it is important to remember that each evaluation of $g$ implies a run of the possibly expensive-to-evaluate computational model $\mathcal{M}$. Note that in general the design parameters d could also affect the limit-state function itself. This case is not addressed in the present paper though. We assume here that the parameters d only affect the distribution of X, as in [1], [2], [3] and [4], because this considerably eases the reliability sensitivity analysis through the use of the importance sampling trick initially proposed by Rubinstein [5]. Introducing the failure indicator function $\mathbf{1}_{\mathcal{F}}$, the failure probability rewrites as its mathematical expectation:

$$p_f(\mathbf{d}) = \int_{\mathbb{X}} \mathbf{1}_{\mathcal{F}}(\mathbf{x})\, f_{\mathbf{X}}(\mathbf{x} \mid \mathbf{d})\, \mathrm{d}\mathbf{x} = \mathbb{E}_{\mathbf{X}}\!\left[\mathbf{1}_{\mathcal{F}}(\mathbf{X})\right]. \quad (2)$$

This enables the evaluation of the failure probability by the so-called Monte Carlo estimator [1]:

$$\hat{p}_f^{\,\mathrm{MCS}} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{\mathcal{F}}(\mathbf{X}^{(i)}), \quad (3)$$

where $\{\mathbf{X}^{(i)},\ i = 1, \ldots, N\}$ is a sample of N independent copies of the random vector X. This estimator is unbiased and convergent, and by the central limit theorem its coefficient of variation reads (provided $p_f \neq 0$):

$$\delta_{\mathrm{MCS}} = \sqrt{\frac{1 - p_f}{N\, p_f}}. \quad (4)$$

From this expression, it appears that the lower the probability $p_f$, the greater the required number N of evaluations of $\mathbf{1}_{\mathcal{F}}$ (hence, of runs of $\mathcal{M}$). As an order of magnitude, one should expect a minimum sample size of $N = 10^{k+2}$ for estimating a failure probability of $10^{-k}$ with a 10% coefficient of variation.
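As an illustration of Eqs. (3) and (4), the following minimal Python sketch estimates a failure probability by crude Monte Carlo; the limit-state function g and the sample size are illustrative assumptions, not taken from the paper.

```python
# Crude Monte Carlo estimation of a failure probability, Eqs. (3)-(4).
# The limit-state g below is an illustrative toy example (not the paper's).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 10**6                                   # sample size

def g(x):
    """Toy limit-state: failure when x1 + x2 >= 3, with X ~ N(0, I)."""
    return 3.0 - x.sum(axis=1)

x = rng.standard_normal((N, 2))             # N independent copies of X
p_f_hat = np.mean(g(x) <= 0.0)              # Eq. (3): fraction of failures
cov = np.sqrt((1.0 - p_f_hat) / (N * p_f_hat))  # Eq. (4)

print(f"p_f ≈ {p_f_hat:.3e} (exact: {norm.cdf(-3.0 / np.sqrt(2.0)):.3e})")
print(f"CoV ≈ {100.0 * cov:.2f}%")
```

With $p_f$ of the order of $10^{-2}$, the $10^6$ samples give a sub-1% coefficient of variation, consistent with the $N = 10^{k+2}$ rule of thumb; each of those samples would require a full run of $\mathcal{M}$ if g were not analytic.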
This clearly becomes intractable for expensive-to-evaluate failure indicator functions and low failure probabilities, which are both the trademark of engineered systems. Nowadays there exists a number of techniques to evaluate the failure probability at a far reduced computational cost. On the one hand, variance reduction techniques [1] aim at rewriting Eq. (2) in order to derive new Monte-Carlo-sampling-based estimators that feature a lower coefficient of variation than the one given in Eq. (4). Importance sampling [1], directional sampling [6], line sampling [7] and [8] and subset simulation [9] all enable a great reduction of the computational cost compared to crude Monte Carlo sampling. Importance sampling and subset simulation are certainly the most widely applicable techniques because they are not based on any geometrical assumption about the topology of the failure domain $\mathcal{F}$. Nonetheless, importance sampling is only a concept: it still requires the choice of an instrumental density function, which is not trivial and strongly influences both the accuracy and the computational cost. On the other hand, approximation techniques make use of meta-models that imitate the limit-state function (or at least the limit-state surface $\{\mathbf{x} \in \mathbb{X} : g(\mathbf{x}) = 0\}$) in order to reduce the computational cost. These meta-models are built from a so-called design of experiments $\{\mathbf{x}^{(i)},\ i = 1, \ldots, m\}$ whose size m does not depend on the order of magnitude of the failure probability but rather on the nonlinearity of the performance function $g$ and the dimension n of the input space $\mathbb{X} \subseteq \mathbb{R}^n$. For instance, quadratic response surfaces [10], artificial neural networks [11], support vector machines [12], Kriging surrogates [13] and polynomial (resp. sparse polynomial) chaos expansions [14], [15] and [16] have been used for surrogate-based reliability analysis (see [17] for a review). The most efficient variance reduction techniques (namely subset simulation) still require rather large sample sizes, which can potentially be reduced when some knowledge about the shape of the failure domain exists. Despite the increasing accuracy of meta-models, surrogate-based (also called plug-in) approaches, which consist in using the emulator instead of the actual limit-state function, lack an error measure (much like the historical first- and second-order reliability methods, FORM/SORM). Starting from these two premises, a novel hybrid technique named meta-model-based importance sampling (Meta-IS) was proposed by Dubourg et al. [18] and [19]. This technique makes use of Kriging predictors in order to approximate the optimal instrumental density function in an importance sampling scheme that theoretically reduces the estimation variance to zero (see the sketch at the end of this passage).

This paper is not only concerned with the evaluation of the failure probability in Eq. (1) for a single value of the parameters d, but also with the analysis of its sensitivity with respect to the latter vector. Within the structural reliability community, this type of analysis is referred to as reliability sensitivity analysis [20] and [21]. It provides important insight on system failure for risk-based decision making (e.g. robust control, design or reliability-based design optimization). However, as recalled above, the accurate estimation of a single value of the failure probability is already computationally costly. Hence, assessing the failure probability sensitivity by means of repeated reliability analyses is simply not affordable.
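To make the Meta-IS construction above concrete, here is a minimal, self-contained sketch. It uses scikit-learn's GaussianProcessRegressor as the Kriging emulator, a fixed (non-adaptive) design of experiments, the same toy limit-state as before, and plain rejection sampling to draw from the quasi-optimal density $h(\mathbf{x}) \propto \pi(\mathbf{x}) f_{\mathbf{X}}(\mathbf{x})$; all of these are simplifying assumptions of this sketch, since the method of [18] and [19] builds the design adaptively and targets much rarer events, for which rejection sampling would be wasteful.

```python
# Minimal Meta-IS sketch: augmented failure probability (emulator only)
# times a correction factor (few calls to the true limit-state).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def g(x):
    """Same toy limit-state as before: failure when x1 + x2 >= 3."""
    return 3.0 - x.sum(axis=1)

# 1. Kriging emulator fitted on a small, fixed design of experiments.
x_doe = rng.uniform(-5.0, 5.0, size=(40, 2))
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(x_doe, g(x_doe))

def pi_classif(x):
    """Probabilistic classification pi(x) = P[Kriging predictor <= 0]."""
    mu, sd = gp.predict(x, return_std=True)
    return norm.cdf(-mu / np.maximum(sd, 1e-12))

# 2. Augmented failure probability p_f_eps = E[pi(X)]: emulator calls only.
x_mc = rng.standard_normal((200_000, 2))
p_f_eps = pi_classif(x_mc).mean()

# 3. Correction factor alpha_corr = E_h[1_F(X) / pi(X)], with samples from
#    h(x) ~ pi(x) f_X(x) drawn by rejection: accept x ~ f_X w.p. pi(x) <= 1.
x_prop = rng.standard_normal((50_000, 2))
keep = rng.uniform(size=len(x_prop)) < pi_classif(x_prop)
x_h = x_prop[keep][:200]                    # only these require true g-calls
alpha_corr = np.mean((g(x_h) <= 0.0) / pi_classif(x_h))

p_f_hat = p_f_eps * alpha_corr              # Meta-IS estimator
print(f"p_f ≈ {p_f_hat:.3e} (exact: {norm.cdf(-3.0 / np.sqrt(2.0)):.3e})")
```

Note that the expensive model is called only for the design of experiments and for the (here 200) correction samples. If the emulator classified perfectly, $\pi$ would reduce to the failure indicator, $\alpha_{\mathrm{corr}}$ would equal 1 exactly, and the estimation variance would vanish; this is the sense in which the instrumental density is quasi-optimal.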
Starting from this premise, Au [22] proposed to consider the parameters d as artificially uncertain, and then to use a conditional sampling technique in order to assess reliability sensitivity within a single simulation. It is concluded there that the approach proves efficient for up to 2–3 parameters in d. Based on a similar idea, Taflanidis and Beck [23] developed an algorithm which enables the identification of a reduced set of the parameters d that minimizes the failure probability. It is applied to the robust control of the dynamic behavior of structural systems. Here, the objective is to obtain a more local measure of the influence of d on the failure probability through the calculation of its gradient. This quantity then enables the use of gradient-based nonlinear constrained optimization algorithms for solving reliability-based design optimization problems [24], [25] and [26]. This topic has already attracted significant interest. For instance, Bjerager and Krenk [27] differentiated the Hasofer–Lind reliability index, which in turn enables the calculation of the gradient of the failure probability. However, FORM may suffer from incorrect assumptions (namely, the linearity of the limit-state surface in the standard space and the uniqueness of the most probable failure point) that are hard to check in practice. Valdebenito and Schuëller [28] proposed a parametric approximation of the failure probability which then enables its differentiation. However, the accuracy of this approach is conditional on the ability of the proposed parametric model to fit the actual failure probability function.

The score function approach that is used here was initially proposed by Rubinstein [5] (see also [1, Chapter 7]). It features the double advantage that (i) it is a simple post-processing of a sampling-based reliability analysis (i.e. it does not require any additional calls to the limit-state function) and (ii) it can be applied to any Monte-Carlo-sampling-based technique (either on the actual limit-state function or on a surrogate). For instance, it has already been applied to importance sampling [1] and subset simulation [3]. The goal of this paper is to show how the score function approach may be coupled with meta-model-based importance sampling for efficient reliability sensitivity analysis. The paper is divided into three sections. First, the basics of meta-model-based importance sampling are briefly summarized. Then, the score function approach is applied to the proposed estimator of the failure probability, and the whole approach is eventually tested on three structural reliability examples.
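Since (as recalled in the conclusion below) the gradient of the failure probability equals the expectation of the failure indicator times the score $\nabla_{\mathbf{d}} \ln f_{\mathbf{X}}(\mathbf{X} \mid \mathbf{d})$, the approach is indeed a simple post-processing of the failed samples. The sketch below illustrates this for the assumed, illustrative case where X is Gaussian with mean d, so the score is $(\mathbf{x} - \mathbf{d})/\sigma^2$; the limit-state is the same toy example as above, not one of the paper's.

```python
# Score-function estimation of grad_d p_f(d) = E[1_F(X) grad_d log f_X(X|d)],
# post-processing the same samples used for the Monte Carlo estimate of p_f.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
d = np.array([0.0, 0.0])                    # design parameters: means of X
sigma = 1.0
N = 10**6

def g(x):
    """Same toy limit-state as above: failure when x1 + x2 >= 3."""
    return 3.0 - x.sum(axis=1)

x = d + sigma * rng.standard_normal((N, d.size))
failed = g(x) <= 0.0                        # failure indicator 1_F

# For X ~ N(d, sigma^2 I): grad_d log f_X(x|d) = (x - d) / sigma^2.
score = (x - d) / sigma**2
grad_hat = (failed[:, None] * score).mean(axis=0)

# Analytic check: p_f(d) = Phi((d1 + d2 - 3) / (sigma * sqrt(2))), hence
# dp_f/dd_i = phi((d1 + d2 - 3) / (sigma * sqrt(2))) / (sigma * sqrt(2)).
z = (d.sum() - 3.0) / (sigma * np.sqrt(2.0))
print("estimated:", grad_hat)
print("exact:    ", norm.pdf(z) / (sigma * np.sqrt(2.0)))
```

The gradient estimate reuses the very samples drawn for the failure probability itself, which is advantage (i) cited above: no additional call to the limit-state function is needed.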

Conclusion

The score function approach proposed by Rubinstein [5] proves to be an efficient tool for reliability sensitivity analysis. First, using an importance-sampling-like trick, he showed that the gradient of the failure probability turns out to be the expectation of the gradient of the log-likelihood of the failed samples with respect to the distribution of the random vector X. This means that reliability sensitivities can readily be obtained after a Monte-Carlo-sampling-based reliability analysis at a negligible computational cost (no additional call to the limit-state function). Then, Song et al. [3] showed that the score function approach applies to other sampling-based reliability methods such as subset simulation. In the present work, the authors derived the gradient of their proposed Meta-IS estimator. This enables a great variance reduction in the estimation of both the failure probability and its gradient, as illustrated through the examples. These conclusions imply that reliability-based design optimization using Meta-IS is now conceivable. Indeed, since Meta-IS is able to calculate both the failure probability and its gradient, it can be used within an optimization loop for automating the search for the best compromise between cost and failure probability. However, in order to achieve the best efficiency (i.e. minimize the number of calls to the limit-state function), one should first devise an efficient way of recycling the design of experiments from one nested reliability analysis to the next (provided the change in the design is not too large). Indeed, building the Kriging surrogate from scratch at each iteration of the optimizer would be computationally inefficient [26].