Global sensitivity analysis using sparse grid interpolation and polynomial chaos
|Article Code||Publication Year||English Article||Persian Translation||Word Count|
|26644||2012||8-page PDF||Available to order||Not calculated|
Publisher: Elsevier - Science Direct
Journal : Reliability Engineering & System Safety, Volume 107, November 2012, Pages 82–89
Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates of local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990), and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009). Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial.
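As a rough illustration of the variance-based coefficients mentioned above: once a polynomial is written in an orthonormal (gPC-style) basis, its variance is the sum of the squared non-constant coefficients, and each Sobol' index is the share of that sum contributed by the matching terms. The sketch below uses invented toy coefficients for a two-variable expansion; it is not the paper's algorithm, only the bookkeeping it relies on:

```python
# Hedged sketch: Sobol' indices from a gPC expansion in an ORTHONORMAL basis.
# Suppose f(x1, x2) = c0 + c1*psi1(x1) + c2*psi1(x2) + c12*psi1(x1)*psi1(x2),
# with psi1 an orthonormal degree-1 polynomial. The multi-index () is the
# constant term; (1,) involves only x1, (2,) only x2, (1, 2) both.
coeffs = {(): 2.0, (1,): 0.6, (2,): 0.3, (1, 2): 0.1}  # toy values

# Variance = sum of squared non-constant coefficients (orthonormality).
var_total = sum(c**2 for idx, c in coeffs.items() if idx != ())

S1 = coeffs[(1,)]**2 / var_total     # main effect of x1
S2 = coeffs[(2,)]**2 / var_total     # main effect of x2
S12 = coeffs[(1, 2)]**2 / var_total  # interaction effect
T1 = S1 + S12                        # total effect of x1: every term touching x1
# S1 + S2 + S12 partitions the variance, so the indices sum to 1.
```

For a sparse-grid interpolant the same sums run over all multi-indices of the converted expansion; no further function evaluations are needed once the gPC coefficients are known.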
A common task in fitting a model to data is to find parameters p = (p1, …, pn) that minimize some cost function, C(p), often a sum of squared differences between model output and experimental data. This is a particularly difficult task when the dimensionality of the parameter space is large and the dependence of C on p is nonlinear. One approach to this problem is to sample the function at some set of points and try to estimate relevant quantities, such as various types of sensitivity coefficients and the location of local minima, from this sample. Often, these samples are used to construct a simpler model (e.g., linear, polynomial, sum of Gaussians, etc.) that may be used to approximate the original model in a computationally inexpensive way. Such approximate models are described with various terms, including metamodels, surrogate models, response surfaces, and model emulators. In settings in which the sampling points are given in advance, common approaches include RS-HDMR, cut-HDMR, ANOVA decomposition, kriging, and moving least squares. In settings in which the sampling points may be chosen at will, two common approaches are sparse grid interpolation and generalized polynomial chaos (gPC) using cubature. In this paper we focus on these last two metamodels, the relationship between them, and their application to computing global sensitivity coefficients and global maxima and minima. More precisely, sensitivity methods can be divided into global (the focus of this paper) and local, while global methods can in turn be divided into screening methods, non-parametric methods, variance-based methods, and moment-independent or density-based methods. The classic paper on screening methods details a method for sampling model outputs over a high-dimensional input space in order to estimate the mean and variance of partial derivatives of the output with respect to each input.
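The screening idea just described (estimating the mean and variance of partial derivatives of the output across the input space) can be sketched with Morris-style elementary effects. The test function, sample size, and step `delta` below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hedged sketch of a Morris-style screening step: at each of r random base
# points, perturb one input at a time by delta and record the finite-difference
# slope. The mean and spread of these slopes over the sample rank the inputs.
def elementary_effects(f, n_dims, r=50, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    effects = np.empty((r, n_dims))
    for k in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=n_dims)  # keep x + delta in [0, 1]
        fx = f(x)
        for i in range(n_dims):
            xp = x.copy()
            xp[i] += delta
            effects[k, i] = (f(xp) - fx) / delta  # finite-difference slope in input i
    return effects.mean(axis=0), effects.std(axis=0)

# Toy model: strong linear dependence on x0, weak on x1, none on x2.
mu, sigma = elementary_effects(lambda x: 10.0 * x[0] + 0.1 * x[1] ** 2, n_dims=3)
# mu ranks the inputs: large for x0, small for x1, (numerically) zero for x2.
```

A large mean flags an influential input; a large standard deviation flags nonlinearity or interactions, since the slope then varies with the base point.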
A number of non-parametric approaches for global SA, including locally weighted regression, additive models, projection pursuit regression, and recursive partitioning regression, have been detailed in the literature. Further non-parametric methods, along with a description of how to use these methods to estimate values and confidence intervals for variance-based sensitivity coefficients, have also been given. Overviews of global SA methods have introduced a new, moment-independent importance measure, which has been discussed further elsewhere, and many other global SA methods have been surveyed. In terms of other metamodels, overviews of kriging discuss bootstrapping to estimate the variance in the kriging predictor, and applications of gPC to computing sensitivity coefficients have been described. Another approach to constructing a polynomial metamodel is sparse grid interpolation, which has been used widely in recent years as a means of providing a reasonable approximation to a smooth function, f, defined on a hypercube in Rn, based on relatively few function evaluations. This method produces a polynomial interpolant using Lagrange interpolating polynomials based on function values at points in a union of product grids of small dimension. However, for many purposes, there are computational advantages to a representation in terms of orthogonal polynomials; such a representation is also known as a generalized polynomial chaos (gPC) representation. Most relevant for the discussion here is the efficient calculation of the Sobol' sensitivity coefficients of a polynomial in gPC form. In this paper we start with an efficient conversion from an interpolating polynomial in Lagrange form to the gPC form. We combine this with the efficient calculation of the Sobol' sensitivity coefficients to produce an efficient algorithm for estimating these coefficients using a relatively small number of function evaluations.
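A minimal one-dimensional sketch of the Lagrange-to-orthogonal-basis idea, assuming Gauss-Legendre quadrature and a toy quadratic (the paper's conversion is multivariate and organized for efficiency; this only shows that node values determine the orthogonal coefficients exactly):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Toy target: f = 1*P0 + 2*P1 + 3*P2 in Legendre polynomials on [-1, 1],
# i.e. f(x) = 1 + 2x + 3*(3x^2 - 1)/2. We only ever use its values at nodes,
# as if they came from interpolation, and recover the coefficients [1, 2, 3].
f = lambda x: 1.0 + 2.0 * x + 3.0 * (3.0 * x**2 - 1.0) / 2.0

nodes, weights = L.leggauss(5)   # 5-point rule: exact for degree <= 9
vals = f(nodes)                  # "sampled" function values at the nodes

coeffs = []
for k in range(3):
    Pk = L.legval(nodes, [0.0] * k + [1.0])  # evaluate P_k at the nodes
    norm = 2.0 / (2 * k + 1)                 # integral of P_k^2 over [-1, 1]
    # Projection: c_k = <f, P_k> / <P_k, P_k>, computed by quadrature.
    coeffs.append(np.sum(weights * vals * Pk) / norm)
# coeffs recovers [1, 2, 3] up to floating-point error.
```

Because the quadrature is exact for the product of the interpolant and each basis polynomial, this projection loses nothing; the multivariate sparse-grid version in the paper exploits the grid structure to avoid forming full tensor-product rules.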
As seen in numerical examples, this method is both accurate and efficient for smooth functions when compared with other approaches for estimating these values. We also show how to use the gPC representation to estimate the two derivative-based sensitivity measures described by Kucherenko and Sobol' (2009). Finally, we discuss the use of polynomial homotopy methods for finding the critical points of the interpolating polynomial. In cases in which the global maximum or minimum does not lie on the boundary of the interpolating hypercube, this allows us to find the global minimum or maximum (within the hypercube) directly. In addition to these applications, we note that sparse grid interpolation likely has applications in the context of other global SA methods as well. We leave this as a topic for future research.
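A one-dimensional analogue of the critical-point step may make the idea concrete: for a univariate interpolant, interior extrema are real roots of the derivative inside the interval. The paper uses the polynomial homotopy solver HOM4PS-2.0 for the multivariate polynomial systems; the toy polynomial and interval below are illustrative only:

```python
import numpy as np

# Toy interpolant p(x) = 0.25*x^4 - x^2 on the interval [-2, 2].
# Its critical points solve p'(x) = x^3 - 2x = 0, i.e. x = 0 and x = ±sqrt(2).
p = np.polynomial.Polynomial([0.0, 0.0, -1.0, 0.0, 0.25])  # low-to-high coefficients

r = p.deriv().roots()                       # all (possibly complex) roots of p'
crit = np.sort(r[np.abs(r.imag) < 1e-9].real)  # keep the real roots
crit = crit[(crit > -2.0) & (crit < 2.0)]   # keep those inside the "hypercube"
values = p(crit)                            # evaluate p to classify min vs max
# Here p(+-sqrt(2)) = -1 (the two minima) and p(0) = 0 (a local maximum).
```

In higher dimensions, p' is replaced by the gradient system of the gPC polynomial, whose solutions a homotopy solver tracks; boundary extrema still require checking the faces of the hypercube separately, as the abstract notes.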
English Conclusion
We have demonstrated that sparse grid interpolation followed by conversion to a gPC representation provides an efficient and accurate method for estimating variance-based sensitivity coefficients (including main effect, total effect, and interaction effect coefficients) and derivative-based sensitivity coefficients (including L1 and L2 derivative sensitivity coefficients) for the case of a smooth function. This method provides estimates for all of these coefficients based on function evaluations at a specified set of sparse grid points. Moreover, these estimates have known, good convergence rates that are relatively insensitive to the number of dimensions. We showed numerically that this method gives good estimates even with a relatively small number of points and that it converges faster than quasi-MC and Extended FAST when computing variance-based coefficients. The method given here also converges at a rate comparable to quasi-MC for the derivative-based sensitivity coefficients. Finally, we showed that the gPC representation may be combined with homotopy root finding methods to identify critical points and hence all local maxima and minima of the interpolating polynomial. This provides a useful method for estimating the minima and maxima of the original function. Given the wide range of applications from a single set of function evaluations, the combination of sparse grid interpolation and gPC representation provides a powerful method for exploring the behavior of a smooth function.