Non-parametric kernel estimation for ANOVA decomposition and sensitivity analysis
Article code | Publication year | English article pages
---|---|---
27292 | 2014 | 9 pages (PDF)

Publisher: Elsevier - Science Direct
Journal: Reliability Engineering & System Safety, Volume 130, October 2014, Pages 140–148
English Abstract
In this paper, we consider the non-parametric estimation of the analysis of variance (ANOVA) decomposition, which is useful for applications in sensitivity analysis (SA) and in the more general emulation framework. Pursuing the point of view of state-dependent parameter (SDP) estimation, a non-parametric kernel estimator (including a high-order kernel estimator) is built for these purposes. On the basis of the kernel technique, the asymptotic convergence rate is obtained theoretically for the estimator of the sensitivity indices. It is shown that the kernel estimation can provide a faster convergence rate than the SDP estimation for both the ANOVA decomposition and the sensitivity indices, which helps one obtain a more accurate estimate at a smaller computational cost.
English Introduction
The main task of sensitivity analysis (SA) [1] and [2] is to clarify and quantify the relationship between the distribution of the output y and the distribution of an input x_i under a given model y = g(x_1, x_2, …, x_d), where y is a random variable and x = (x_1, x_2, …, x_d) is a d-dimensional random vector. This relationship depends not only on the distribution of the inputs but also on the structure of the given model. The analysis of variance (ANOVA) decomposition [3], [4], [5] and [6] expresses a kind of structure on the set of multi-dimensional functions from the space L² of square-integrable functions, so it tends to become a basic point of departure for SA [7] and [8]. The ANOVA decomposition of y = g(x_1, x_2, …, x_d) is the expression [3] and [5]

$$
y = g_0 + \sum_i g_i(x_i) + \sum_{i<j} g_{ij}(x_i, x_j) + \cdots + g_{12\cdots d}(x_1, \ldots, x_d) \tag{1}
$$

where

$$
g_0 = E(y), \qquad g_i = E(y \mid x_i) - g_0, \qquad g_{ij} = E(y \mid x_i, x_j) - g_i - g_j - g_0, \qquad \ldots
$$

The decomposition of the variance V of the model output y is based on the ANOVA decomposition [4] and [6]:

$$
V = \sum_i V_i + \sum_{i<j} V_{ij} + \cdots + V_{12\cdots d} \tag{2}
$$

where

$$
V_i = \operatorname{Var}\bigl[E(y \mid x_i = x_i^{*})\bigr], \qquad V_{ij} = \operatorname{Var}\bigl[E(y \mid x_i = x_i^{*}, x_j = x_j^{*})\bigr] - V_i - V_j, \qquad \ldots
$$

Normalizing by the unconditional variance V, the corresponding sensitivity indices (main effects) are defined as [4]

$$
S_i = \frac{V_i}{V}, \qquad S_{ij} = \frac{V_{ij}}{V}, \qquad \ldots \tag{3}
$$

and the total sensitivity index (total effect) is defined as [4]

$$
S_{Ti} = S_i + \sum_j S_{ij} + \sum_{j<k} S_{ijk} + \cdots \tag{4}
$$

The main effect and the total effect are the two most popular variance-based sensitivity measures [1] and [9]. The main effect S_i represents the average output-variance reduction that can be achieved when x_i is fixed, while the total effect S_Ti stands for the average output variance that would remain as long as x_i stays unknown, that is, the total contribution of x_i to the output variation. The difference between S_Ti and S_i indicates the degree of interaction between this input and the other inputs.

There are clear links between variance-based SA and the ANOVA decomposition of the model. As an approximation, the decomposition can be used in place of the original model to compute variance-based sensitivity indices. To obtain this decomposition, an obvious way is to estimate the class of functions E(y | x_I), where x_I denotes a group of inputs indexed by I, a subset of {1, 2, …, d}. Clearly, the estimation of E(y | x_I) provides an approach for both model approximation and sensitivity estimation. The state-dependent parameter (SDP) method, first suggested by Young [11] and [12], is applied for this purpose from the point of view of non-parametric smoothing [9] and [10]; the estimation is performed with the help of the classical recursive Kalman filter and the associated fixed-interval smoothing algorithms.

Pursuing the same point of view of non-parametric smoothing, kernel estimation is also a good choice for E(y | x_I). The kernel-based method is one of the most popular non-parametric estimators [13] and [14]. The kernel method was first introduced by Rosenblatt [15] for density estimation; Nadaraya [16] and Watson [17] independently proposed non-parametric estimators of E(y | x_I) based on the kernel method, and Rosenblatt [18] obtained the bias, variance and asymptotic distribution of these kernel estimators.
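To make the connection between kernel regression and the indices above concrete, the following is a minimal Python sketch, not code from the paper: a Nadaraya-Watson estimator with a Gaussian kernel approximates E(y | x_i) from a Monte Carlo sample, and the main-effect index S_i = V_i / V is then the variance ratio of that fit evaluated at the sample points. The Gaussian kernel, the fixed bandwidth h and the function names are illustrative assumptions.

```python
import numpy as np

def nadaraya_watson(xi, y, grid, h):
    """Nadaraya-Watson kernel estimate of E(y | x_i) evaluated on 'grid'.

    xi, y : samples of one input and the output, shape (N,)
    grid  : points at which the conditional mean is evaluated
    h     : bandwidth (assumed given; its choice is the key tuning issue)
    """
    # Gaussian kernel weights K((g - x)/h) for every (grid point, sample) pair
    u = (grid[:, None] - xi[None, :]) / h
    w = np.exp(-0.5 * u**2)
    return (w @ y) / w.sum(axis=1)

def main_effect_index(xi, y, h):
    """Main-effect index S_i ~ Var[E(y|x_i)] / Var(y).

    The conditional mean is evaluated at the sample points themselves,
    so its sample variance is a Monte Carlo estimate of V_i.
    """
    g_i = nadaraya_watson(xi, y, xi, h)
    return np.var(g_i) / np.var(y)
```

A leave-one-out variant of the same weights is a common refinement to reduce the bias that comes from each point contributing to its own fit.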
The corresponding analysis of the convergence rate is considered by Stone [19] and [20], Ibragimov and Hasminski [21], and Yatracos [22] and [23]. Related problems concerning optimal convergence rates for kernel estimation of various functionals of E(y | x_I) are discussed in Fan [24] and [25], Donoho and Low [26], Efromovich and Low [27], and Eubank [14]. One of the key issues of the kernel method is the choice of the bandwidth, the most important parameter of the kernel estimator (a common data-driven selector is sketched after this outline). In this work, we first consider the ANOVA decomposition of y = g(x_1, x_2, …, x_d) using a kernel estimate of E(y | x_I), and then discuss the variance-based SA.

The remainder of this paper is organized as follows. Section 2 discusses the kernel estimation of the ANOVA decomposition, and the high-order kernel method is introduced to further improve the convergence rate. Section 3 considers the numerical error analysis for both the SDP method and the kernel method. In Section 4, we investigate the kernel estimators of the sensitivity indices on the basis of the kernel estimator of the ANOVA decomposition; in particular, we obtain the asymptotic convergence rate of these estimators. In Sections 5 and 6, we provide detailed analyses of the Ishigami test function and the Sobol g-function, and we also compare the convergence rates of the SDP method and the kernel method. Conclusions are offered in Section 7.
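The bandwidth selector referred to above is, as a hedged illustration only, a standard leave-one-out cross-validation heuristic applied to the sketch from the previous block; it is not the bandwidth rule analysed in the paper, and the candidate grid is an arbitrary choice.

```python
import numpy as np

def loo_cv_bandwidth(xi, y, candidates):
    """Pick a bandwidth by leave-one-out cross-validation.

    A common data-driven heuristic for Nadaraya-Watson regression; each
    point is removed from its own fit by zeroing the diagonal weight.
    """
    best_h, best_err = None, np.inf
    for h in candidates:
        u = (xi[:, None] - xi[None, :]) / h
        w = np.exp(-0.5 * u**2)
        np.fill_diagonal(w, 0.0)              # leave each point out of its own fit
        pred = (w @ y) / w.sum(axis=1)
        err = np.mean((y - pred) ** 2)        # leave-one-out prediction error
        if err < best_err:
            best_h, best_err = h, err
    return best_h
```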
English Conclusion
In this study, the non-parametric kernel method is applied to the estimation of both the ANOVA decomposition and the sensitivity indices, pursuing the point of view of non-parametric smoothing. We would like to close this work with the following conclusions:

1. Like the SDP estimation, the kernel estimation is a good choice for the ANOVA decomposition. The kernel estimation converges faster than the SDP estimation in the sense of the average UB and the average MAE, and is as fast as the SDP estimation in the sense of the average RMSE.
2. Theoretically, a high-order kernel estimator has a higher asymptotic rate of convergence, but a high asymptotic rate of convergence does not necessarily translate into a high rate of convergence for a limited sample size in practice.
3. Likewise, the kernel estimation is a good choice for the sensitivity indices. On the basis of the kernel technique, the asymptotic convergence rate $O(N^{-1/2})$ is obtained for the estimators of the sensitivity indices. For the great majority of the sensitivity indices, though, the observed error is much smaller than this theoretical rate; it is still an open problem whether the rate can be further improved.
4. From the analysis of the Ishigami test function, the kernel estimation is more efficient than the SDP estimation for estimating sensitivity indices: the convergence rate is almost $O(N^{-1/1.05})$ for the kernel estimation and almost $O(N^{-1/1.30})$ for the SDP estimation.
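As a usage illustration only, the sketches from the introduction can be exercised on the Ishigami function (a = 7, b = 0.1, inputs uniform on (−π, π)). The sample size, seed and candidate bandwidths below are arbitrary choices, and the printed values are not the paper's reported results.

```python
import numpy as np

# Assumes nadaraya_watson, main_effect_index and loo_cv_bandwidth from the
# sketches above are in scope.
rng = np.random.default_rng(0)
N = 2000
x = rng.uniform(-np.pi, np.pi, size=(N, 3))
y = np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

for i in range(3):
    h = loo_cv_bandwidth(x[:, i], y, candidates=np.linspace(0.1, 1.0, 10))
    print(f"S_{i + 1} ~ {main_effect_index(x[:, i], y, h):.3f}")

# Analytic main effects for comparison: S_1 ≈ 0.314, S_2 ≈ 0.442, S_3 = 0.
```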