The identification and representation of the implications of uncertainty are widely recognized as fundamental components of analyses of complex systems [1], [2], [3], [4], [5], [6], [7], [8], [9] and [10]. The study of uncertainty is usually subdivided into two closely related activities referred to as uncertainty analysis and sensitivity analysis, where (i) uncertainty analysis involves the determination of the uncertainty in analysis results that derives from uncertainty in analysis inputs and (ii) sensitivity analysis involves the determination of relationships between the uncertainty in analysis results and the uncertainty in individual analysis inputs.
At an abstract level, the analysis or model under consideration can be represented as a function of the form
y = y(x) = f(x),    (1.1)
where
x = [x1, x2, …, xnX]    (1.2)
is a vector of uncertain analysis inputs and
y = [y1, y2, …, ynY]    (1.3)
is a vector of analysis results. Further, a sequence of distributions
D1, D2, …, DnX    (1.4)
is used to characterize the uncertainty associated with the elements of x, where Di is the distribution associated with xi for i=1, 2,…,nX. Correlations and other restrictions involving the elements of x are also possible. The goal of uncertainty analysis is to determine the uncertainty in the elements of y that derives from the uncertainty in the elements of x characterized by the distributions D1,D2,…,DnX and any associated restrictions. The goal of sensitivity analysis is to determine relationships between the uncertainty associated with individual elements of x and the uncertainty associated with individual elements of y.
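The mapping just described can be sketched in a few lines of code. The following is a minimal illustration, not the analysis of this presentation: the model f and the distributions D1, D2, D3 are hypothetical placeholders standing in for the real analysis and its input characterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical distributions D1, D2, D3 for the elements of x
# (placeholders; any distributions appropriate to the analysis would do).
def sample_inputs(n):
    x1 = rng.uniform(0.0, 1.0, n)    # D1: uniform
    x2 = rng.normal(10.0, 2.0, n)    # D2: normal
    x3 = rng.lognormal(0.0, 0.5, n)  # D3: lognormal
    return np.column_stack([x1, x2, x3])

# Hypothetical model y = f(x); stands in for the actual analysis.
def f(x):
    return x[:, 0] * x[:, 1] + np.sqrt(x[:, 2])

x = sample_inputs(1000)  # n samples of the input vector x
y = f(x)                 # corresponding analysis results

# Uncertainty analysis: summarize the distribution of y induced
# by the distributions assigned to the elements of x.
print(y.mean(), y.std(), np.percentile(y, [5, 50, 95]))
```

Each row of x is one realization of the uncertain inputs, and the resulting sample of y values estimates the uncertainty in the analysis results.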
A variety of approaches to uncertainty and sensitivity analysis are in use, including (i) differential analysis, which involves approximating a model with a Taylor series and then using variance propagation formulas to obtain uncertainty and sensitivity analysis results [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23] and [24], (ii) response surface methodology, which is based on using classical experimental designs to select points for use in developing a response surface replacement for a model and then using this replacement model in subsequent uncertainty and sensitivity analyses based on Monte Carlo simulation and variance propagation [25], [26], [27], [28], [29], [30], [31], [32], [33], [34] and [35], (iii) the Fourier amplitude sensitivity test (FAST) and other variance decomposition procedures, which involve the determination of uncertainty and sensitivity analysis results on the basis of the variance of model predictions and the contributions of individual variables to this variance [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48], [49], [50], [51], [52], [53], [54] and [55], (iv) fast probability integration, which is primarily an uncertainty analysis procedure used to estimate the tails of uncertainty distributions for model predictions [56], [57], [58], [59], [60], [61] and [62], and (v) sampling-based (i.e. Monte Carlo) procedures, which involve the generation and exploration of a probabilistically based mapping from analysis inputs to analysis results [63], [64], [65], [66], [67], [68], [69], [70], [71], [72] and [73]. Additional information on uncertainty and sensitivity analysis is available in a number of reviews [69], [70], [74], [75], [76], [77], [78], [79] and [80]. The primary focus of this presentation is on sampling-based methods for uncertainty and sensitivity analysis.
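For approach (v), one of the simplest sensitivity measures computable from a sample is the rank correlation between each input and an output. The sketch below uses a hypothetical sample and placeholder model (the names and coefficients are illustrative only) to show the basic idea.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical sample of inputs and corresponding results
# (placeholder model: x1 dominates, x2 contributes, x3 is inert).
x = rng.uniform(size=(200, 3))
y = 5.0 * x[:, 0] + x[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Spearman rank correlation of each input with the output: a simple
# sampling-based sensitivity measure derived from the input/output mapping.
rhos = []
for i in range(x.shape[1]):
    rho, _ = spearmanr(x[:, i], y)
    rhos.append(rho)
    print(f"x{i + 1}: rho = {rho:+.3f}")
```

Inputs with rank correlations near zero contribute little to the uncertainty in y; large magnitudes flag influential inputs and motivate the stepwise rank regressions considered later in the presentation.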
Sampling-based approaches for uncertainty and sensitivity analysis are very popular [81], [82], [83], [84], [85], [86], [87], [88], [89], [90], [91], [92], [93], [94], [95] and [96]. Desirable properties of these approaches include conceptual simplicity, ease of implementation, generation of uncertainty analysis results without the use of intermediate models, and availability of a variety of sensitivity analysis procedures [67], [69], [76], [97] and [98]. Despite these positive properties, concern is often expressed about using these approaches because of the computational cost involved. In particular, the concern is that the sample sizes required to obtain meaningful results will be so large that analyses will be computationally impracticable for all but the simplest models. At times, it is asserted that on the order of thousands to tens of thousands of model evaluations are required for a sampling-based uncertainty/sensitivity analysis.
In this presentation, results obtained with a computationally demanding model for two-phase fluid flow are used to illustrate that robust uncertainty and sensitivity analysis results can be obtained with relatively small sample sizes. Further, results are obtained and compared for replicated random samples and Latin hypercube samples (LHSs) [63] and [73]. For the problem under consideration, random samples and LHSs of size 100 produce similar, stable results.
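The distinction between random and Latin hypercube sampling can be illustrated with a short sketch. Assuming SciPy's `scipy.stats.qmc` module, the following generates both sample types of size 100 and counts the occupied equal-probability strata in each dimension; the specific seeds and dimensions are illustrative only.

```python
import numpy as np
from scipy.stats import qmc

n, d = 100, 2
rng = np.random.default_rng(2)

# Simple random sample on [0, 1)^d.
x_rand = rng.random((n, d))

# Latin hypercube sample of the same size: each dimension is divided
# into n equal-probability strata, with exactly one point per stratum.
x_lhs = qmc.LatinHypercube(d=d, seed=2).random(n)

# Count occupied strata per dimension: n for the LHS by construction,
# typically fewer for the random sample.
occupied_lhs = [len(np.unique(np.floor(x_lhs[:, j] * n))) for j in range(d)]
occupied_rand = [len(np.unique(np.floor(x_rand[:, j] * n))) for j in range(d)]
print(occupied_lhs, occupied_rand)
```

The guaranteed one-point-per-stratum coverage of the LHS is what tends to stabilize estimated distributions and sensitivity measures at small sample sizes relative to simple random sampling.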
The presentation is organized as follows. The analysis problem is described in Section 2. Then, the following topics are considered: stability of uncertainty analysis results (Section 3), stability of sensitivity analysis results based on stepwise rank regression (Section 4), use of coefficients of concordance in comparing replicated sensitivity analyses (Section 5), sensitivity analysis based on replicated samples and the top-down coefficient of concordance (Section 6), sensitivity analysis with reduced sample sizes (Section 7), and sensitivity analysis without regression analysis (Section 8). Finally, the presentation ends with a concluding discussion (Section 9).