Download English ISI Article No. 26643
Article Title

Monte Carlo simulation-based sensitivity analysis of the model of a thermal–hydraulic passive system
Article Code: 26643
Publication Year: 2012
Pages: 17 (PDF)
Source

Publisher: Elsevier - ScienceDirect

Journal: Reliability Engineering & System Safety, Volume 107, November 2012, Pages 90–106

Keywords
Nuclear passive system, Functional failure probability, Reliability sensitivity analysis, Subset Simulation, Line Sampling, Sobol indices
Article Preview

Abstract

Thermal–Hydraulic (T–H) passive safety systems are potentially more reliable than active systems, and for this reason are expected to improve the safety of nuclear power plants. However, uncertainties are present in the operation and modeling of a T–H passive system and the system may find itself unable to accomplish its function. For the analysis of the system functional failures, a mechanistic code is used and the probability of failure is estimated based on a Monte Carlo (MC) sample of code runs which propagate the uncertainties in the model and numerical values of its parameters/variables. Within this framework, sensitivity analysis aims at determining the contribution of the individual uncertain parameters (i.e., the inputs to the mechanistic code) to (i) the uncertainty in the outputs of the T–H model code and (ii) the probability of functional failure of the passive system. The analysis requires multiple (e.g., many hundreds or thousands) evaluations of the code for different combinations of system inputs: this makes the associated computational effort prohibitive in those practical cases in which the computer code requires several hours to run a single simulation. To tackle the computational issue, in this work the use of the Subset Simulation (SS) and Line Sampling (LS) methods is investigated. The methods are tested on two case studies: the first one is based on the well-known Ishigami function [1]; the second one involves the natural convection cooling in a Gas-cooled Fast Reactor (GFR) after a Loss of Coolant Accident (LOCA) [2].
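As a rough illustration of the framework described above, the following sketch estimates a functional failure probability by crude Monte Carlo: uncertain inputs are sampled from assumed distributions, propagated through a model, and the fraction of runs exceeding a safety threshold is counted. The surrogate th_model, the input distributions and the threshold T_LIMIT are hypothetical stand-ins for the long-running mechanistic T–H code and its failure criterion; they are not taken from the paper.

import numpy as np

rng = np.random.default_rng(42)

def th_model(x):
    """Hypothetical analytic stand-in for the long-running mechanistic T-H code.

    Maps the uncertain inputs x = (decay power, pressure, heat-transfer
    coefficient) to a peak cladding temperature [K]; a real analysis would
    call the T-H code here.
    """
    power, pressure, htc = x
    return 800.0 + 0.4 * power - 0.05 * pressure - 1.5 * htc

T_LIMIT = 1150.0   # assumed safety threshold [K]: functional failure if exceeded
N = 10_000         # number of Monte Carlo code runs

# Assumed (illustrative) uncertainty distributions on the code inputs
samples = np.column_stack([
    rng.normal(1000.0, 100.0, N),   # decay power [kW]
    rng.normal(1650.0, 150.0, N),   # system pressure [kPa]
    rng.normal(50.0, 10.0, N),      # heat-transfer coefficient [W/(m^2 K)]
])

outputs = np.apply_along_axis(th_model, 1, samples)
failures = outputs > T_LIMIT

p_failure = failures.mean()
std_err = np.sqrt(p_failure * (1.0 - p_failure) / N)
print(f"Estimated functional failure probability: {p_failure:.3e} (standard error {std_err:.1e})")

With thresholds and distributions of this kind the failure event is only moderately rare; for the much rarer failure events typical of passive systems, crude Monte Carlo requires prohibitively many code runs, which is what motivates the Subset Simulation and Line Sampling methods investigated in the paper.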

Introduction

Modern nuclear reactor concepts make use of passive safety features, which do not need external input (especially energy) to operate [3] and, thus, are expected to improve the safety of nuclear power plants because of their simplicity and the reduction of both human interactions and hardware failures [4], [5] and [6]. However, the aleatory and epistemic uncertainties involved in the operation and modeling of passive systems are usually larger than for active systems [7] and [8]. Due to these uncertainties, the physical phenomena involved in the passive system functioning (e.g., natural circulation) might develop in such a way as to lead the system to fail its function (e.g., decay heat removal): indeed, deviations in the natural forces and in the conditions of the underlying physical principles from the expected ones can impair the function of the system itself [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20] and [21].

In the analysis of such functional failure behavior [10], the passive system is modeled by a mechanistic Thermal–Hydraulic (T–H) code and the probability of failing to perform the required function is estimated based on a Monte Carlo (MC) sample of code runs which propagate the uncertainties in the model and in the numerical values of its parameters/variables [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37] and [38]. Within this framework, the objective of sensitivity analysis is twofold: (i) the determination of the contribution of the individual uncertain parameters/variables (i.e., the inputs to the T–H code) to the uncertainty in the outputs of the T–H model code; (ii) the quantification of the importance of the individual uncertain parameters/variables in affecting the performance (i.e., in practice, the functional failure probability) of the passive system [39], [40] and [41].

In this view, the sensitivity analysis outcomes provide two important insights. On the one side, the analyst can identify those parameters/variables that are not important and may be excluded from the modeling and analysis; on the other side, the analyst can identify those parameters/variables whose epistemic uncertainty plays a major role in determining the functional failure of the T–H passive system: consequently, his/her efforts can be focused on improving the state of knowledge on these important parameters/variables and the related physical phenomena (for example, by collecting experimental data one may improve the state of knowledge on the correlations used to model the heat transfer process in natural convection, with a corresponding reduction in the uncertainty) [30] and [38]. In the present context of passive system functional failure probability assessment, the attention is mainly focused on this latter aspect, i.e., the identification of those uncertain variables playing a key role in the determination of the passive system performance.

In general, approaches to sensitivity analysis can be either local or global. As the name suggests, local methods consider the variation in the system model output that results from a local perturbation about some nominal input value. In the limit, the sensitivity measure of the contribution of a generic uncertain input parameter to the uncertainty of the output is the partial derivative of the output with respect to that input parameter, calculated around the nominal values of the input parameters, as illustrated in the minimal sketch below.
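The sketch approximates the partial derivatives of a model output with respect to each input by central finite differences around assumed nominal values; the placeholder model and the nominal point are hypothetical and only stand in for the T–H code and its nominal operating conditions.

import numpy as np

def model(x):
    # Placeholder for the T-H code output (e.g., a peak temperature) as a
    # function of the uncertain inputs; hypothetical, for illustration only.
    return 800.0 + 0.4 * x[0] - 0.05 * x[1] - 1.5 * x[2]

x_nominal = np.array([1000.0, 1650.0, 50.0])   # assumed nominal input values
rel_step = 1e-3                                # relative perturbation size

local_sensitivities = []
for i in range(len(x_nominal)):
    h = rel_step * max(abs(x_nominal[i]), 1.0)
    x_plus, x_minus = x_nominal.copy(), x_nominal.copy()
    x_plus[i] += h
    x_minus[i] -= h
    # Central finite-difference approximation of the partial derivative
    local_sensitivities.append((model(x_plus) - model(x_minus)) / (2.0 * h))

print("Local sensitivities (d output / d input at the nominal point):",
      np.round(local_sensitivities, 4))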
Such a measure identifies the critical parameters as those whose variation leads to the largest variation in the output [39] and [42]. By contrast, global techniques aim at determining which of the uncertain input parameters influence the output the most when the uncertainty in the input parameters is propagated through the system model [43]. In this view, the term "global" has two meanings: first, for an input parameter whose uncertainty importance is evaluated, the effect of the entire uncertainty distribution of that parameter is considered; second, the importance of this input parameter is evaluated with all other input parameters varying as well [44]. Examples of methods for global sensitivity analysis include the so-called variance-based techniques (such as those relying on the computation of Sobol indices [1], [39], [44], [45] and [46] or the Fourier Amplitude Sensitivity Test (FAST) [47]) and the more recent moment-independent techniques [43], [48], [49], [50], [51] and [52]. The interested reader may refer to [39], [42], [53], [54], [55], [56], [57] and [58] for detailed and updated surveys on sensitivity analysis methods.

Regardless of the technique employed, sensitivity analysis relies on multiple (e.g., many hundreds or thousands) evaluations of the system model (code) for different combinations of system inputs. This makes the associated computational effort very high and at times prohibitive in practical cases in which the computer codes require several hours (or even days) to run a single simulation [32] and [59]. Further, in the present context of nuclear passive systems, the computational issue is even more severe because the estimation of the functional failure probability is also of interest besides the sensitivity analysis of the passive system performance: as a consequence, the (typically, hundreds of thousands of) simulations performed for estimating the functional failure probability have to be added to those carried out for the sensitivity analysis.

In light of this computational problem, the main objective of the present study is to show the possibility of efficiently embedding the sensitivity analysis of the performance of a nuclear passive system within the estimation of its functional failure probability, while resorting to a reasonably limited number of system model code evaluations. To this aim, the use of two advanced Monte Carlo Simulation (MCS) methods, namely Subset Simulation (SS) [60] and [61] and Line Sampling (LS) [62] and [63], is investigated. In the SS approach, the functional failure probability is expressed as a product of conditional probabilities of some chosen intermediate events. The problem of evaluating the probability of functional failure is then tackled by performing a sequence of simulations of these intermediate events in their conditional probability spaces; the necessary conditional samples are generated through successive Markov Chain Monte Carlo (MCMC) simulations [64], so as to gradually populate the intermediate conditional regions until the final functional failure region is reached.
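The following is a minimal sketch of the SS scheme just outlined, under simplifying assumptions: a cheap analytic limit-state function g in standard normal space replaces the T–H code, the conditional-level probability is fixed at p0 = 0.1, and a block Metropolis proposal is used in place of the component-wise "modified Metropolis" scheme of the original algorithm.

import numpy as np

rng = np.random.default_rng(0)

def g(x):
    """Performance function in standard normal space: failure when g(x) > 0.
    A simple linear limit state (hypothetical) is used here; in the paper's
    setting g would wrap the T-H code and its failure criterion."""
    return x.sum(axis=-1) - 3.5 * np.sqrt(x.shape[-1])

def subset_simulation(g, dim, n_per_level=1000, p0=0.1, max_levels=10):
    """Estimate P[g(X) > 0] as a product of conditional probabilities of
    intermediate events (simplified Subset Simulation sketch)."""
    x = rng.standard_normal((n_per_level, dim))      # crude MC at level 0
    y = g(x)
    p_f = 1.0
    for _ in range(max_levels):
        n_seeds = int(p0 * n_per_level)
        idx = np.argsort(y)[::-1][:n_seeds]          # the n_seeds "best" samples
        threshold = y[idx[-1]]                       # intermediate failure level
        if threshold > 0.0:                          # true failure region reached
            break
        p_f *= p0                                    # P(intermediate event | previous one)
        # MCMC: Metropolis with a symmetric Gaussian proposal, targeting the
        # standard normal restricted to {g >= threshold}
        cur_x, cur_y = x[idx].copy(), y[idx].copy()
        chains_x, chains_y = [cur_x.copy()], [cur_y.copy()]
        for _ in range(n_per_level // n_seeds - 1):
            cand = cur_x + 0.8 * rng.standard_normal(cur_x.shape)
            log_ratio = -0.5 * (cand ** 2 - cur_x ** 2).sum(axis=1)
            accept = np.log(rng.random(n_seeds)) < log_ratio
            cand_y = g(cand)
            accept &= cand_y >= threshold            # stay in the conditional region
            cur_x = np.where(accept[:, None], cand, cur_x)
            cur_y = np.where(accept, cand_y, cur_y)
            chains_x.append(cur_x.copy())
            chains_y.append(cur_y.copy())
        x, y = np.concatenate(chains_x), np.concatenate(chains_y)
    return p_f * np.mean(y > 0.0)

p_ss = subset_simulation(g, dim=4)
print("SS estimate of the failure probability:", p_ss)
# For this limit state, P[sum of 4 iid N(0,1) > 7] = P[Z > 3.5], roughly 2.3e-4

Note that the conditional samples generated at the intermediate levels are the same "ingredients" reused by the SS-based sensitivity analyses described in the next paragraph.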
Two approaches from the literature are considered here for performing the sensitivity analysis of the passive system performance by SS. The first one is local and embraces the concept of reliability sensitivity, in which the sensitivity of the performance of the passive system to a given uncertain input variable is quantified as the partial derivative of the system failure probability with respect to the parameters (e.g., the mean, the variance, etc.) of the probability distribution of the input variable itself [65]. The second one is global and employs the conditional samples generated by MCMC simulation to obtain the entire distribution of the system failure probability conditional on the values of the individual uncertain input parameters/variables [66] and [67].

In the LS method, lines, instead of random points, are used to probe the failure domain of the multi-dimensional problem under analysis. An "important vector" is optimally determined to point towards the failure domain of interest and a number of conditional, one-dimensional problems are solved along this direction, in place of the multi-dimensional problem [62] and [63]. In this approach, the sensitivity of the passive system performance to the uncertain system input parameters/variables can be studied by examining the elements of the LS important vector pointing to the failure region: a local, informative measure of the relevance of a given uncertain variable in affecting the performance (i.e., in practice, the functional failure probability) of the passive system is the magnitude of the corresponding element of the LS important vector [68–71]; a minimal sketch of this scheme is given at the end of this Introduction.

The SS- and LS-based approaches to sensitivity analysis are tested on two case studies: the first one is based on the highly nonlinear and non-monotonic Ishigami function [1] and [39]; the second one involves the natural convection cooling in a Gas-cooled Fast Reactor (GFR) after a Loss of Coolant Accident (LOCA) [2]. The results obtained by the SS- and LS-based sensitivity analysis techniques are compared to those produced by global first- and total-order Sobol indices [39] and [45].

In summary, the main contributions of the present paper are the following:

• applying the SS and LS methods to embed the sensitivity analysis of the performance of a nuclear passive system within the estimation of its failure probability, while resorting to a reasonably limited number of system model code evaluations: to the best of the authors' knowledge, this is the first time that SS- and LS-based sensitivity analysis methods are applied to nuclear passive systems;

• comparing the results obtained by the following approaches to sensitivity analysis: (i) SS-based local and global (reliability) sensitivity analyses, (ii) LS-based local (reliability) sensitivity analysis and (iii) "classical" variance-based global sensitivity analysis relying on the computation of Sobol indices;

• challenging approaches (i)–(iii) above in problems where the failure region of the passive system is composed of multiple, disconnected parts.

The remainder of the paper is organized as follows. In Section 2, a snapshot of the functional failure analysis of T–H passive systems is given. In Section 3, the SS and LS methods employed here for efficiently embedding the sensitivity analysis of the performance of a nuclear passive system within the estimation of its functional failure probability are presented. In Sections 4 and 5, the case studies concerning the Ishigami function and the passive cooling of a GFR are presented, together with the corresponding results. Finally, conclusions are provided in Section 6.
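To make the LS scheme referred to above concrete, the following minimal sketch works on a hypothetical, mildly nonlinear limit state defined directly in standard normal space: the important vector is taken as the normalized finite-difference gradient of the performance function at the origin, each random sample is projected onto the hyperplane orthogonal to this vector, and a one-dimensional bisection along the important direction yields the conditional failure probability of each line; the magnitudes of the important-vector components provide the local sensitivity reading mentioned above. The limit state, the sample sizes and the gradient-based choice of the important vector are illustrative assumptions, not the paper's implementation.

import math
import numpy as np

rng = np.random.default_rng(1)

def std_norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def g(x):
    """Performance function in standard normal space (failure when g > 0).
    A hypothetical, mildly nonlinear limit state, for illustration only."""
    beta = np.array([2.0, 1.0, 0.5, 0.1])
    return x @ beta + 0.05 * x[1] ** 2 - 6.0

DIM = 4

# 1) Important unit vector pointing towards the failure region: here the
#    normalized finite-difference gradient of g at the origin (a full
#    implementation could refine this direction iteratively).
eps = 1e-6
grad = np.array([(g(eps * e) - g(-eps * e)) / (2 * eps) for e in np.eye(DIM)])
alpha = grad / np.linalg.norm(grad)

def distance_to_failure(x_perp, lo=0.0, hi=20.0, iters=60):
    """Smallest c in [lo, hi] with g(x_perp + c * alpha) = 0, by bisection
    (for this limit state g is increasing along alpha over that range)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(x_perp + mid * alpha) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 2) Line Sampling: each random point is projected orthogonally to alpha and
#    a one-dimensional problem is solved along the important direction.
N_LINES = 200
partial_probs = []
for _ in range(N_LINES):
    x = rng.standard_normal(DIM)
    x_perp = x - (x @ alpha) * alpha              # component orthogonal to alpha
    c = distance_to_failure(x_perp)
    partial_probs.append(1.0 - std_norm_cdf(c))   # conditional 1-D failure probability

print("LS estimate of the failure probability:", np.mean(partial_probs))

# 3) Local sensitivity reading: the magnitude of each component of the
#    important vector ranks the influence of the corresponding input variable.
print("Important vector components:", np.round(alpha, 3))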

Conclusions

The assessment of the functional failure probability of T–H passive systems can be performed by sampling the uncertainties in the system model and parameters, and simulating the corresponding passive system response with T–H computer codes. Within this framework, sensitivity analysis has two objectives: (i) the quantification of the importance of the individual uncertain parameters in affecting the performance of the passive system (or, in other words, in determining its functional failure probability); (ii) the determination of the contribution of the individual uncertain parameters (i.e., the inputs to the T–H code) to the uncertainty in the outputs of the T–H code. However, since sensitivity analysis relies on multiple evaluations of the T–H code for different combinations of system inputs, the associated computational effort may be prohibitive due to the long running times of the T–H codes. Thus, in this paper the advanced SS and LS methods have been considered for performing an efficient sensitivity analysis of the performance of a T–H passive system while estimating its functional failure probability by means of a reasonably limited number of T–H code evaluations. Different local and global approaches to sensitivity analysis have been considered and compared with reference to two case studies from the literature: the first one involving the Ishigami function [1]; the second one considering the natural convection cooling in a Gas-cooled Fast Reactor (GFR) after a Loss of Coolant Accident (LOCA) [2].

On the basis of the results obtained, the following guidelines and recommendations can be drawn:

• With reference to objective (i) above, two options are suggested:

1. In those cases where the analyst is able to obtain information about the "structure" of the failure region (e.g., one or multiple, overlapping or disconnected failure regions), the concept of local reliability sensitivity analysis based on LS can be embraced (Section 3.2.2). As demonstrated by Case study 1, the possibility of identifying multiple important directions makes it possible to separate the contributions of (possibly) multiple failure regions to the reliability sensitivity indices (i.e., the partial derivatives of the system failure probability with respect to the moments of the distributions of the uncertain input parameters): this avoids averaging or (even worse) canceling the different contributions, which would provide erroneous and misleading indications. In addition, as demonstrated by Case studies 1 and 2, LS provides much more accurate and precise failure probability estimates than the other simulation methods considered here for comparison (i.e., standard MCS and SS): this allows the analyst to reduce the number of samples (and, thus, of T–H model evaluations) necessary to obtain the desired estimation accuracy and precision (in particular, in those practical cases where the computer codes require several hours to run a single simulation).

2. In those (more realistic) cases where the analyst has no information about the "structure" of the failure region (or, alternatively, such information can be obtained only at impractical computational cost), the global approach based on SS may represent the optimal choice (Section 3.1.2.2): indeed, as demonstrated by Case study 1, SS is able to automatically identify multiple disconnected failure regions without any input from the analyst. In particular, SS generates a large number of conditional samples by searching the whole uncertain input space by means of sequential Markov Chain Monte Carlo (MCMC) simulations; by so doing, the entire distribution of the system failure probability conditional on the values of the individual uncertain input parameters is produced: the associated information is relevant from the sensitivity analysis viewpoint because it quantifies how the failure probability of the system would change if a given uncertain input parameter were set to a given value (e.g., if its epistemic uncertainty were reduced).

A final remark is in order with respect to the effectiveness of the SS- and LS-based approaches to sensitivity analysis. They have the advantage over other standard sensitivity analysis techniques of being directly "embedded" in the computation of the system failure probability: the SS and LS algorithms produce the "ingredients" used in the sensitivity analyses (i.e., the empirical conditional distributions in SS and the random lines parallel to the important vector in LS) during the very simulation that is performed to compute the system failure probability. In other words, while estimating the failure probability of the system, sensitivity analysis results are produced that can be readily visualized for identifying and ranking the most important variables. This is of particular interest in practical cases in which the computer codes require several hours (or even days) to run a single simulation (as in the present case of passive system reliability assessment).

• With reference to objective (ii) above, the use of "classical" variance-based techniques (e.g., those relying on the computation of first- and total-order Sobol indices, as in the present paper) is suggested: indeed, by construction these methods quantify the proportion of the variance of the system model outputs that can be attributed to the variance of the uncertain input variables (a minimal sketch of such a computation on the Ishigami function is given at the end of this section). However, two issues must be taken into account for the practical use of these techniques in passive system reliability assessments:

1. The associated computational burden may be prohibitive, because thousands or millions of system model evaluations are frequently required for the computation of variance-based (Sobol) indices through Monte Carlo-based techniques; in addition, these techniques cannot be embedded in the estimation of the failure probability of the passive system: thus, the T–H model evaluations necessary for performing the sensitivity analysis have to be added to those carried out for estimating the failure probability, further increasing the computational burden. To overcome this issue, the adoption of fast-running meta-models in place of the original (typically long-running) system model codes is strongly advised.

2. Care should be taken in the interpretation of the uncertain variable ranking provided by these methods: as demonstrated by Case study 1, the most important contributors to the variability (in practice, the variance) of the system model outputs are not necessarily the most important contributors to system failure (i.e., those parameters that most influence the passive system failure probability).

A conclusive remark is in order with respect to the fact that different sensitivity indices may provide different "importance rankings" of the uncertain input variables. This is because different definitions of sensitivity indices are based on different quantities of interest for the problem at hand: for example, in the present case of passive system reliability assessment, local reliability sensitivity indices are defined as the partial derivatives of the system failure probability with respect to the moments of the distributions of the uncertain input parameters, whereas Sobol indices quantify the proportion of the variance of the passive system model outputs that can be attributed to the variance of the uncertain input variables. These considerations explain why it is difficult to define a unique "composite" sensitivity index "aggregating" different quantities of interest: (i) aggregating quantities that are different in nature (e.g., failure probabilities, variances, …) is not trivial; (ii) joining indices that provide different answers to the sensitivity analysis problem may remove the added value given by the diversity and complementarity of the indices themselves, with a possible detrimental effect on the completeness of the analysis.
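As referenced in the discussion of objective (ii) above, the following sketch estimates first- and total-order Sobol indices for the Ishigami function of Case study 1, using a standard pick-freeze Monte Carlo scheme with Jansen-type estimators; the constants a = 7 and b = 0.1, the sample size and the estimator choice are the usual textbook settings and are not necessarily those adopted in the paper.

import numpy as np

rng = np.random.default_rng(7)

def ishigami(x, a=7.0, b=0.1):
    """Ishigami function (commonly used constants a = 7, b = 0.1), with
    inputs uniformly distributed on [-pi, pi]."""
    return (np.sin(x[:, 0])
            + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

DIM, N = 3, 20_000

# Pick-freeze scheme: two independent blocks of input samples
A = rng.uniform(-np.pi, np.pi, size=(N, DIM))
B = rng.uniform(-np.pi, np.pi, size=(N, DIM))
f_A, f_B = ishigami(A), ishigami(B)
var = np.var(np.concatenate([f_A, f_B]))

first_order, total_order = [], []
for i in range(DIM):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]          # column i from B, all other columns from A
    f_ABi = ishigami(AB_i)
    # Jansen-type estimators of the first-order and total-order indices
    first_order.append(1.0 - np.mean((f_B - f_ABi) ** 2) / (2.0 * var))
    total_order.append(np.mean((f_A - f_ABi) ** 2) / (2.0 * var))

print("First-order Sobol indices:", np.round(first_order, 3))
print("Total-order Sobol indices:", np.round(total_order, 3))

The cost of this scheme is N(d + 2) model evaluations (here 100,000), which illustrates why fast-running meta-models are advised above when the underlying T–H code is long-running; the results also display the classic pattern that the first-order index of x3 is close to zero while its total-order index is not, because x3 influences the output only through its interaction with x1.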