English ISI Article No. 26362
Article Title

A novel method to apply Importance and Sensitivity Analysis to multiple Fault-trees
Article code: 26362 · Publication year: 2010 · Length: 11 pages (PDF)
Source

Publisher: Elsevier - ScienceDirect

Journal: Journal of Loss Prevention in the Process Industries, Volume 23, Issue 5, September 2010, Pages 574–584

Keywords

Sensitivity analysis; Importance measures; Fault-trees; Design improvement; Process safety

Abstract

The unavailability/frequency analysis of critical failure states of complex industrial systems is normally conducted by using the Fault-tree methodology. The number of Fault-trees describing the system is given by the number of the system's failure states (i.e. Top-events). For each Top-event characterised by an unacceptable occurrence probability, some design improvements should be made. Importance and Sensitivity Analysis (ISA) is normally applied to identify the weakest parts of the system; by selecting these parts for design improvement, the overall improvement of the system is made more effective. In current practice, ISA is applied sequentially to all Fault-trees. The sequence order is subjectively selected by the analyst, based on several criteria such as the severity of the associated Top-event. This approach has the clear limitation of not ensuring the identification of the most cost-effective design solution to improve safety. The present paper describes an alternative approach, which consists of concurrently analysing all relevant system Fault-trees, with the objective of overcoming the above limitations and identifying the most cost-effective solution. In addition, the proposed method extends the ISA application to “over-reliable” system functions, if any, on which the reliability/maintainability characteristics of the involved components can be relaxed, with a resulting cost saving. The overall outcome of the analysis is a uniformly protected system which satisfies the predefined design goals. Notably, the overall cost of the analysis with the proposed approach is significantly lower compared with the sequential case.

Introduction

Complex systems are normally characterised by a number of dangerous failure states, which are directly associated with possible accident scenarios. The study of these states, including the determination of their occurrence probability/frequency, can be performed by means of various system analysis techniques. The most popular is Fault-tree Analysis (FTA) (Kumamoto & Henley, 1996; Vesely, 1970), which allows describing in a systematic way the cause-effect relationships amongst failure events, from single components up to the system level. In particular, FTA allows studying the role played by the different failure modes associated with the system's components, which might have a different impact on the occurrence probability of the system failure state, hereafter indicated as the Top-event. In addition, the quantification of Fault-trees allows the analyst to obtain the information of interest for design improvement. When the estimated failure probability of the Top-event is deemed not acceptable, a design review has to be made with the specific goal of reducing it to an acceptable predefined value. This is normally done by using Importance and Sensitivity Analysis (ISA) (Rausand & Hoyland, 2004) which, combined with the results of FTA, represents a very powerful tool to improve the design of critical systems. ISA is a methodology aimed at characterising the output behaviour of a model under variations of its input variables, with the purpose of identifying the input variables that are most significant in terms of their contribution to the model output. Referring to FTA, the model's output is the probability of occurrence of the Top-events, and the input variables are all possible failure modes of the system's components, which in FTA are indicated as primary or basic events (BEs). For the purpose of this paper, the terms component failure mode and BE are used interchangeably.
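As an illustration of how an importance measure is obtained from a quantified Fault-tree, the following minimal sketch (a toy example, not the paper's method or the JRC-ASTRA tool) computes the classical Birnbaum importance of each BE for the hypothetical tree TOP = (A AND B) OR C, assuming statistically independent BEs.

```python
# Toy Fault-tree quantification and Birnbaum importance (illustrative only).
# Assumed example tree: TOP = (A AND B) OR C, with independent basic events.

def top_prob(p):
    """Top-event probability for TOP = (A AND B) OR C."""
    p_and = p["A"] * p["B"]                 # AND gate: product of probabilities
    return 1 - (1 - p_and) * (1 - p["C"])   # OR gate: complement of both inputs absent

def birnbaum(p, be):
    """Birnbaum importance: P(TOP | BE failed) - P(TOP | BE perfect)."""
    return top_prob(dict(p, **{be: 1.0})) - top_prob(dict(p, **{be: 0.0}))

p = {"A": 1e-2, "B": 1e-2, "C": 1e-3}
for be in p:
    print(be, birnbaum(p, be))
# C ranks highest: it leads to the Top-event on its own, unlike A or B.
```

Even with a lower failure probability, C receives the largest importance because it is a single-point cause of the Top-event, which is exactly the kind of insight ISA exploits for design improvement.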
The identification of the weakest components in the system in terms of their contribution to risk is the final objective of such an analysis, as it allows identifying those elements that require further design improvement. In general, the definition of importance measures (IMs) (Van der Borst & Schoonakker, 2001) for each BE allows the analyst to assess the relative risk-significance of the associated component in terms of its contribution to the occurrence probability of the Top-event. In particular, the failure modes (i.e. BEs) with the highest IMs are the most sensitive, giving the maximum increase (or decrease) of the Top-event probability for a given increase (or decrease) of the associated BE probability. These BEs are clearly associated with the more critical system functions. Once the most “sensitive” failure modes are identified, some system improvements can be made by modifying the design of the associated components. More specifically, a critical component can be substituted either with another component of better quality and/or better maintainability and/or better testing strategy, or with a subsystem in which the component has a redundant part (e.g. parallel, stand-by, K out of N). Classically, the risk is expressed by a set of triplets (Kaplan & Garrick, 1981): R = {⟨Si, Pi, Ci⟩}, i = 1, 2, …, N, where Si is a possible accident scenario for the system (Top-event), Pi is its occurrence probability, Ci is the associated consequence, and N is the number of Top-events. Normally, the risk is represented on a log-log diagram, where each accident scenario is plotted as a risk point (i.e. its occurrence probability vs. consequence). The diagram is subdivided into three areas corresponding to acceptable risk, unacceptable risk, and an area in between where risk reduction is desirable (e.g. the ALARP or ALARA region).
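The triplet representation and the three-region diagram can be sketched in a few lines; the scalar risk index P·C and the two thresholds below are illustrative assumptions, not acceptance criteria from the paper.

```python
# Hedged sketch of the risk triplets R = {<Si, Pi, Ci>} classified against
# hypothetical acceptability thresholds on the probability-consequence plane.

def classify(prob, cons, acc=1e-2, unacc=1.0):
    """Place one scenario in the acceptable / ALARP / unacceptable region.

    The scalar index prob*cons and the thresholds are placeholder choices,
    standing in for the boundary curves of a real log-log risk diagram.
    """
    risk = prob * cons
    if risk < acc:
        return "acceptable"
    if risk > unacc:
        return "unacceptable"
    return "ALARP"  # in-between region: risk reduction desirable

# (scenario Si, occurrence probability Pi, consequence Ci) - toy numbers
scenarios = [("S1", 1e-4, 10.0), ("S2", 1e-2, 500.0), ("S3", 1e-3, 50.0)]
for name, p, c in scenarios:
    print(name, classify(p, c))
```

Scenarios landing in the unacceptable region are the ones whose risk points the designer must "move", in this paper by reducing the occurrence probability Pi of the associated Top-event.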
In general, if the system-induced risk is not acceptable, the task of the system designer is to “move” the risk points out of the unacceptable risk area through the improvement of the system safety and/or the mitigation measures. The present paper focuses on the control of risk through a reduction of the Top-event occurrence probabilities, whilst activity on consequence reduction, which involves the introduction of mitigation measures, is outside the scope of the present work. In order to reduce the Top-event occurrence probability it is necessary to introduce structural modifications in the production/control system and/or to improve the protection system functions. Normally, the second option is preferred for safety-related purposes, as the first is strictly linked to the production process, and any structural modification would therefore impose a modification of the production line. In other words, design modifications of the safety-related functions are generally much less expensive than modifications of the production/control functions. However, any modification of the safety system should not compromise the plant availability requirements. When risk reduction is deemed necessary, a specific probability goal has to be defined for each Top-event. The intent is to reduce the occurrence probability of each Top-event in such a way that the corresponding risk point on the log-log diagram is moved outside the unacceptable risk region. In general, the reduction of the Top-event occurrence probability can be obtained by intervening on the primary causes that can lead to the Top-event (i.e. the BEs). The most effective approach is to operate on those BEs which contribute most to the probability of occurrence of the Top-event (i.e. those having the highest IMs). However, it is important to note that, for complex systems, some BEs can be present in different Fault-trees, and their modification can have different impacts on the Top-event probabilities.
Current approaches to Importance and Sensitivity Analysis are based on the sequential analysis of the different Fault-trees, i.e. given N Fault-trees, they are analysed independently, one after another. This approach is indicated in this paper as Sequential Importance and Sensitivity Analysis (SISA). The main complication of this approach arises when different Top-events contain common BEs. In such a case, any modification of a system component proposed as a result of the analysis of one Top-event has to be reassessed when performing the analysis of the other Fault-trees containing the same component. For this reason, the analyst cannot fully assess the actual impact on the overall system safety of a modification resulting from a sensitivity study conducted on a single Fault-tree at a time. In addition, when some major system modification is required (e.g. the use of redundancy), this modification has to be implemented in the other affected Fault-trees as well. In general, the overall cost of the analysis might be significant because of repetitions, reiterations and overlapping. These limitations are amplified when considering problems with conflicting requirements, as for instance safety and production loss. Indeed, the reduction of the failure probability of Top-events is generally achieved through the improvement of the safety/control functions which, due to the extensive use of fail-safe components, could lead to a decrease of the system availability; such trade-offs cannot be properly handled when Fault-trees are analysed independently. A better compromise between these two conflicting requirements would be obtained by analysing all the system's Fault-trees together, taking both unavailability and safety functions into account. Indeed, a possible way forward to overcome the limitations of the sequential approach is to perform Sensitivity Analysis on all Fault-trees concurrently.
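The re-assessment problem caused by shared BEs can be seen in a toy numerical sketch (two hypothetical trees, not taken from the paper): a design change on a shared basic event X, chosen from the analysis of TOP1 alone, silently shifts TOP2 as well, which is why SISA forces repeated re-analysis.

```python
# Two toy Fault-trees sharing basic event X (assumed example, independent BEs).

def top1(p):
    """TOP1 = X OR Y."""
    return 1 - (1 - p["X"]) * (1 - p["Y"])

def top2(p):
    """TOP2 = X AND Z."""
    return p["X"] * p["Z"]

p = {"X": 1e-2, "Y": 1e-3, "Z": 1e-1}
improved = dict(p, X=1e-3)  # design change on X suggested by TOP1's analysis alone

print("TOP1:", top1(p), "->", top1(improved))
print("TOP2:", top2(p), "->", top2(improved))  # also changed: SISA must re-analyse it
```

In a sequential analysis, the impact on TOP2 is only discovered when (and if) its Fault-tree is re-analysed; a concurrent analysis sees both shifts at once.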
This approach, which has been called Concurrent Importance and Sensitivity Analysis (CISA), is presented in this paper. Concurrent analysis was already implemented in the past (Contini, Sheer, & Wilikens, 2000); although it was applied with success to a real system, that method was characterised by some limitations. The approach proposed here overcomes the drawbacks of the previous implementation and introduces a selective method to reduce the occurrence probability of each Top-event. In particular, different probabilistic goals are selected for different Top-events, depending on their specific contribution to risk. Another innovative aspect of the proposed approach is that the method is also extended to identify “over-reliable” system functions, if any, on which the reliability/maintainability characteristics of the involved components can be relaxed, with a consequent cost saving. The overall result of the analysis is a uniformly protected system satisfying the predefined probabilistic goals. Moreover, the cost of the analysis is much lower than that of the SISA approach. In order to implement the proposed approach, a dedicated software tool was developed (JRC-CISA), which makes use of the JRC-ASTRA software for Fault-tree analysis (Contini, Fabbri, & Matuzas, 2009). The present paper describes the methodology and provides a simple example of its application.

Conclusion

This paper presented the CISA approach for performing Importance and Sensitivity Analysis concurrently on all of a system's Fault-trees. More specifically, the reduction of the occurrence probability of each Top-event is conducted in a selective manner by considering the estimated risk of the associated scenarios. To achieve this objective, different probability goals are selected for the different Top-events, depending on their specific contribution to risk. The CISA method is based on the definition of Global Importance Indexes, one for each component of the system, which provide a measure of the impact of the reliability properties of the associated component on the failure probability of the overall system. These indexes are then directly used to identify the weakest parts of the system and to select the best candidate components for design improvement. Once the weakest parts of the system are identified, three different types of intervention are possible to improve the system: (i) using components of better quality/maintainability, (ii) substituting the component with a redundant configuration, and (iii) modifying the Fault-tree failure logic to represent the adopted system modification. Another innovative aspect of the proposed approach is that CISA is not solely used to address the most critical components in terms of their contribution to risk; it also focuses on the less critical components, which may be needlessly reliable. In this way, the application of the method is extended to functions whose failure probability can be increased without compromising the requirements at Top-event level. The identification of these components can contribute to cost reduction during the design phase, while still satisfying the probabilistic goals.
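Since the paper's exact definition of the Global Importance Indexes is not reproduced in this summary, the following sketch only illustrates the underlying idea of collapsing per-tree importances into a single system-wide ranking; the weighting scheme (one weight per Top-event, e.g. reflecting its distance from its probability goal) is a hypothetical placeholder.

```python
# Illustrative aggregation of per-tree importances into one ranking, in the
# spirit of CISA's Global Importance Indexes (placeholder definition).

def global_index(per_tree_importance, weights):
    """Weighted sum of one BE's importance over all Fault-trees it appears in."""
    return sum(weights[t] * imp for t, imp in per_tree_importance.items())

# Per-tree importances of BEs "X" and "Y" in trees T1, T2 (toy numbers)
imp = {"X": {"T1": 0.9, "T2": 0.1}, "Y": {"T1": 0.0, "T2": 0.8}}
w = {"T1": 1.0, "T2": 0.5}  # hypothetical per-Top-event weights

ranking = sorted(imp, key=lambda be: global_index(imp[be], w), reverse=True)
print(ranking)  # X outranks Y once both trees are considered together
```

A single ranking of this kind is what lets the concurrent approach pick the next design modification without depending on any tree-analysis order.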
The simple application exercise, executed with both the SISA and CISA approaches, shows that the current sequential approach may yield a design solution that is not necessarily as effective as the one obtained with the concurrent approach. The final design solution obtained with SISA is indeed strongly influenced by the order in which the Fault-trees are analysed. In addition, normal practice dictates that the analysis conducted on the k-th Fault-tree in the sequence is not followed by the re-analysis of all the previously analysed Fault-trees. Clearly, the only way to avoid this problem would be to consider different sequences and to re-analyse all previously analysed Fault-trees containing the modified components; this practice is, however, too expensive and time consuming. By contrast, this type of difficulty is not present in CISA: at each step of the procedure it is possible to select the most “promising” sequence according to the overall gain and cost. The most cost-effective set of modifications can always be identified, whereas this is not guaranteed with the SISA approach. The CISA approach is also particularly suitable for facing problems with conflicting requirements (e.g. unavailability vs. safety; failure to intervene on demand vs. spurious intervention for protective systems) and for finding suitable trade-offs. Clearly, CISA is equivalent to SISA when: (i) there is only one critical system failure state, i.e. only one Fault-tree; or (ii) all Fault-trees are independent, i.e. there are no common events.
By contrast, in all other cases CISA avoids the limitations of the SISA approach, since:

• the analyst can immediately see the impact on all Top-events of each adopted design modification;
• the determination of component criticality, by means of the Global Importance Indexes, takes into account the probabilistic dependence amongst the different Top-events;
• the identification of the best design modification does not depend on the Fault-tree sequence;
• the overall cost of the CISA analysis is always lower.

As a natural follow-up of the study presented in this paper, two aspects are of particular relevance: (i) the extension of the CISA methodology to catastrophic Top-events, for which the parameter of interest is the accident occurrence frequency, expressed in terms of the expected number of failures; and (ii) the impact of uncertainty in the reliability parameters, which is fundamental when dealing with the achievement of probabilistic goals. These aspects are of paramount importance and will be the object of future investigations.