Uncertainty modelling and sensitivity analysis of tunnel face stability
| Article code | Publication year | English article | Word count |
|---|---|---|---|
| 26367 | 2010 | 9-page PDF | 6816 words |
Publisher : Elsevier - Science Direct
Journal : Structural Safety, Volume 32, Issue 6, November 2010, Pages 402–410
This paper proposes an approach to the choice and evaluation of engineering models with the aid of a typical application in geotechnics. An important issue in the construction of shallow tunnels, especially in weak ground conditions, is the tunnel face stability. Various theoretical and numerical models for predicting the necessary support pressure have been put forth in the literature. In this paper, we combine laboratory experiments performed at the University of Innsbruck with current methods of uncertainty and sensitivity analysis for assessing adequacy, predictive power and robustness of the models. The major issues are the handling of the twofold uncertainty of test results and of model predictions as well as the decision about what are the influential input parameters.
This article addresses the question of model choice and model adequacy in engineering design, especially in geotechnics. Experimental and mathematical methods are combined to achieve this task. Various types of simplifications and assumptions have to be introduced in geotechnical calculations, which can lead to different models for the same geotechnical problem. In general, these models do not predict the same system behavior. The question arises how to assess the adequacy, predictive power and robustness of the models. We set out to investigate this issue using laboratory data on the one hand and methods from uncertainty analysis on the other. Predictive power can be assessed by comparing experimental results with theoretical predictions. Here the uncertainty lies in the experimental results, in the input data of the models and in the propagation of uncertainty to the theoretical output. Robustness and adequacy of the models can best be understood by means of sensitivity analysis. When combining experimental data and theoretical models, sampling-based sensitivity analysis, with its recently developed powerful statistical indicators, suggests itself as a suitable approach. As a further important tool in the assessment of the joint uncertainty of the model parameters, we employ bootstrap resampling techniques.

The construction of shallow tunnels remains an engineering challenge to the present day. Tunnels with low cover are often headed using the shield technique, and in this context face stability is an important issue. In order to minimize settlements at the ground surface and to prevent failure of the soil ahead of the face, the tunnel face must be supported. How to predict the necessary support pressure for shield tunnelling has been a long-standing topic of research, and a variety of theoretical and numerical models for estimating the minimum required support pressure have been proposed.
The theoretical approaches can be subdivided into kinematic approaches with failure mechanisms and static approaches with admissible stress fields; some additional approaches are neither purely kinematic nor purely static. We will use some of these models to exemplify the proposed strategy for assessing the predictive power of a geotechnical model. Experimental investigations of face stability range from experiments at single gravity, so-called 1g-model tests, to centrifuge tests at multiples of g; large-scale tests are rare. We use a series of 1g-model tests for comparison with the predictions of the chosen theoretical models.

In the theoretical models under scrutiny, the output parameter was the necessary support pressure ps. The input (soil) parameters possessing the largest degree of random variability were identified as the actual void ratio e and the loose- and dense-state void ratios el and ed, respectively. All these parameters were estimated in small-scale laboratory experiments. Another important model parameter is the friction angle φ of the soil. This parameter is estimated by means of a linear model φ ≈ β0 + β1Id, with the relative density Id (which in turn is a function of e, el and ed). In order to assess the influence of the regression coefficients β0 and β1 on the output ps, we needed to determine their statistical distribution. We achieved this by means of the so-called resampling technique, producing a large bootstrap sample of the experimental data and thereby simulating the joint distribution of β0 and β1. We believe that this is a novel method for obtaining joint distributions, including correlations, of geotechnical data. As a first application of the statistical data model, we could assess the ranges of the output parameter ps by means of the First-Order Second-Moment (FOSM) method and compare them with the experimental results.
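The bootstrap and FOSM steps described above can be sketched as follows. This is a minimal illustration, not the paper's actual data or models: the (Id, φ) measurements, the assumed loose/dense void ratios, and the toy support-pressure function are all invented for demonstration; the linear model φ ≈ β0 + β1Id and the standard relative-density relation Id = (el − e)/(el − ed) are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired measurements of relative density Id and friction
# angle phi (degrees); illustrative values, not the paper's laboratory data.
Id_data = np.array([0.20, 0.35, 0.50, 0.60, 0.70, 0.80, 0.90])
phi_data = np.array([31.0, 32.5, 34.0, 35.0, 36.5, 37.5, 39.0])

# Bootstrap: resample the (Id, phi) pairs with replacement and refit the
# linear model phi ~ beta0 + beta1 * Id, yielding an empirical joint
# distribution (including the correlation) of the regression coefficients.
n_boot = 5000
betas = np.empty((n_boot, 2))
for i in range(n_boot):
    idx = rng.integers(0, len(Id_data), size=len(Id_data))
    slope, intercept = np.polyfit(Id_data[idx], phi_data[idx], 1)
    betas[i] = intercept, slope

beta_mean = betas.mean(axis=0)
beta_cov = np.cov(betas, rowvar=False)   # joint 2x2 covariance of (b0, b1)
corr_b0_b1 = beta_cov[0, 1] / np.sqrt(beta_cov[0, 0] * beta_cov[1, 1])

# FOSM sketch: propagate the means and covariance of (e, b0, b1) through a
# toy support-pressure model via numerical gradients, giving a first-order
# estimate of the mean and standard deviation of ps.
def support_pressure(x):
    e, b0, b1 = x
    Id = (0.90 - e) / (0.90 - 0.50)      # assumed loose/dense void ratios
    return 10.0 / np.tan(np.radians(b0 + b1 * Id))   # toy model only

mu = np.array([0.70, beta_mean[0], beta_mean[1]])
cov = np.zeros((3, 3))
cov[0, 0] = 0.05 ** 2                    # assumed variance of void ratio e
cov[1:, 1:] = beta_cov                   # bootstrap covariance of (b0, b1)

h = 1e-5
grad = np.array([
    (support_pressure(mu + h * ei) - support_pressure(mu - h * ei)) / (2 * h)
    for ei in np.eye(3)
])
ps_mean = support_pressure(mu)
ps_std = np.sqrt(grad @ cov @ grad)      # first-order std of predicted ps
```

The resulting interval ps_mean ± ps_std is the kind of variability range that can then be compared with confidence intervals of the test results. Note that the bootstrap covariance captures the (typically negative) correlation between intercept and slope, which a per-parameter treatment would miss.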
The model with the best fit was then scrutinized further: we calculated the sensitivities of the output ps with respect to the five input parameters described above, using Monte Carlo simulation based on the input distributions obtained before. Going beyond the rather crude picture obtained from scatterplots, we computed stronger statistical measures of sensitivity, such as partial correlation coefficients. These indicators are designed to remove hidden influences of covariates. In addition, this method lends itself to a further application of resampling, allowing us to determine the statistical significance of the resulting sensitivities. These methods are applicable to numerical models as well; accordingly, we included a Finite Element calculation in our list of models.

In short, the goal of the paper is to propose an approach to model choice and model assessment with the aid of a typical application in geotechnics. Experiments play a twofold role here. On the one hand, 1g-model tests are performed to investigate the behavior of a tunnel face close to failure. On the other hand, the outcome of these tests is contrasted with the predictions of theoretical models. These theoretical models contain material parameters that in turn are determined from (different) experiments. Thus both the outcome of the 1g-model tests and the predictions are uncertain. In the presence of this twofold uncertainty, the assessment of model quality requires sophisticated methods from data analysis and uncertainty analysis. Bootstrap resampling techniques are used to assess the statistical distributions of the input parameters, resulting in variability intervals for the model predictions that can be compared with confidence intervals of the test results. This enables a comparison of the range of the predicted output with the range of the measured output and thus allows us to assess the model quality. Once the fittest model has been chosen, we take one further step.
This step is sensitivity analysis, which determines a ranking of the input parameters according to their influence on the model output. The significance of the ranks is again assessed by bootstrap methods. Highly influential input parameters should be known more precisely than less influential ones; this aids in deciding where to focus the effort in further experimental, in situ or laboratory investigations. In addition, if the influence of an input parameter is classified as non-zero, this supports the structure of a model that treats it as a factor to be accounted for.

The paper is organized as follows. In Section 2, we briefly present the theoretical models under investigation. In Section 3, we describe the experimental set-up. Section 4 is devoted to uncertainty and sensitivity analysis: we formulate the statistical data models, explain how joint distributions were obtained by resampling, and then perform the FOSM calculation that leads to an overall assessment of the predictive power of the models. The section concludes with the sensitivity analysis based on Monte Carlo simulations and the statistical indicators mentioned above. In Section 5, we discuss the Finite Element model and the corresponding uncertainty/sensitivity analysis. The final section summarizes our conclusions. The methods of uncertainty/sensitivity analysis build on our earlier work and on general surveys of sampling-based sensitivity analysis.
Conclusion
In this paper, we demonstrated how statistical estimators, in conjunction with laboratory experiments, can be used to assess the predictive power and robustness of models in geotechnical engineering. We performed a case study with a number of theoretical and numerical models for the required support pressure in shield tunnelling. Ultimately, this may serve as a basis for determining the safety of the tunnel face during construction and for gaining a deeper understanding of the mechanisms leading to failure. In particular, we showed that novel statistical methods constitute a powerful tool for identifying the most influential model parameters, and that models can be compared by analyzing the uncertainty of the model output and contrasting it with test results, which also contain uncertainties. It turned out that bootstrapping is a suitable method for producing samples of multi-dimensional random variables and for assessing the significance of the computed sensitivities. This is particularly convenient in a Finite Element analysis, because it requires no computational cost beyond the original Monte Carlo simulation. We remark that the sensitivities obtained with the Vermeer–Ruse model and the FE-model turned out to be quite similar, which may be explained by the fact that the Vermeer–Ruse model was originally calibrated against a similar FE-model.