Modelling and quantification of dependent repeatable human errors in system analysis and risk assessment
Publisher: Elsevier - Science Direct
Journal: Reliability Engineering & System Safety, Volume 71, Issue 2, February 2001, Pages 179–188
General equations and numerical tables are developed for quantification of the probabilities of sequentially dependent repeatable human errors. Such errors are typically associated with testing, maintenance or calibration (called "pre-accident" or "pre-initiator" tasks) of redundant safety systems. Guidance is presented for incorporating dependent events in large system fault tree analysis using implicit or explicit methods. Exact relationships between these methods, as well as numerical tables and simple approximate methods for system analysis, are described. Analytical results are presented for a general human error model, while the numerical tables are valid for a specific Handbook (THERP) model. Relationships with earlier methods and guides proposed for error probability quantification are pointed out.
When maintenance is performed by one person or a crew for several identical components consecutively, there is a risk that an error made in one task is repeated in subsequent similar tasks. The conditional probability of repeating an error can be considerably larger than the probability of the first error. Typical errors subject to this kind of dependency are valves or switches left in a wrong position, calibration errors, or use of incorrect fuel, lubricant or additives. These can be detrimental to redundant engineered safety systems that consist of multiple identical parallel trains and are periodically tested, maintained or calibrated. The relevance of testing and calibration errors for system safety was recognised early in probabilistic safety assessments (PSAs). Although such pre-initiator errors have not dominated the risk in typical probabilistic safety assessments, closed redundant valves in the auxiliary feedwater system were immediate causes of the Three Mile Island 2 accident in 1979. Less dramatic but significant events appear in many operating experience reports published by IAEA and WANO. Several suggestions have been made in the past for quantification of the error probabilities for PSA. However, a review shows that rarely if ever have they been used in a complete and correct form in a large system analysis. No exact and specific guidance seems to be available for incorporating dependent human errors in real system analysis using large fault trees with hundreds of hardware failure events combined with human errors. The present paper develops general useful equations and numerical data for quantification and shows how the dependent human errors may be correctly handled in fault tree analysis. The focus is on system analysis and on development of numerical values for system model probabilities, rather than on human factors, individual error probabilities or empirical data.
The subject of this paper is a task cycle consisting of a sequence of n tasks. The cycle is isolated from other cycles (if any) so that the first task can be considered an independent task. The probability of errors in the following tasks may depend on the outcome (success or error) of the first task. The strength of this coupling depends on how well the tasks are separated in terms of time, location, personnel, etc. Assumptions:
1. An error made in performing a task results in a functional failure of a component. The probability of error in the first task is q0 = P(H1).
2. Homogeneity when n > 2: the tasks are identical, carried out on identical components, equally separated in space and time, all carried out by the same crew with identical tools (or all by different crews or tools), and all environmental and stress conditions are equal a priori. This external a priori homogeneity does not prevent probabilistic dependencies based on the actual outcomes (errors or successes) of the tasks.
3. The probability of an error in the next task depends on the number of consecutive errors immediately preceding the task, but not on any earlier events (errors or successes). This means, for example, that P(H3|H2∩H̄1) = P(H2|H1) = q1 and P(H4|H3∩H2∩H̄1) = P(H3|H2∩H1) = q2.
4. After any success, the probability of an error in the next task is always the same, xq0, independent of events before that success. This means that P(H2|H̄1) = P(H3|H̄2∩H1) = P(H3|H̄2∩H̄1) = xq0, etc. The importance of the numerical value of x will be demonstrated with examples and by comparisons with earlier models.
5. Standard probability theory (incl. rules of conditional probabilities) applies.
Earlier models are reviewed in Section 1.1. Common to all these models is the assumption that events before the latest successful task have no effect on the error probabilities. In the present paper explicit equations are derived for all joint probabilities needed in the quantification of large system fault trees, in a general case (arbitrary x, q0, q1, q2, etc.) for n = 1, 2, 3 and 4.
Such exact probabilities are derived in Section 2 to facilitate an implicit method for large system fault tree quantification. System unavailability equations due to human errors are presented in Section 3. Related simplified methods for system analysis are reviewed and recommended. Expressions for the basic event probabilities of an exact explicit method for large system fault tree modelling and quantification are derived in Section 4. Numerical examples and tables for both implicit and explicit methods are given, thereby facilitating effective use of the Handbook data in system analysis. Summary conclusions are presented in Section 5.
1.1. Relationships with earlier models
In their analytical work on m-out-of-n:G redundant standby systems, Apostolakis and Bansal used general probabilities qj as defined in the Nomenclature, without any specific relationships between them. For the error probability of any task following a success they assumed P(Hi+1|H̄i) = q0, independent of events before H̄i. This corresponds to a special case (x = 1) of the present paper. It follows from the total probability theorem that P(H2) = q1q0 + q0(1 − q0) = q0(1 − q0 + q1), which is different from P(H1) = q0 whenever q1 ≠ q0. Thus, the individual unconditional event probabilities are different even if the tasks are the same and made for identical components. Samanta and Mitra defined similar conditional probabilities and proposed recursive equations leading to the expressions qi = 1 − (1 − k)^i (1 − q0), where 0 ≤ k ≤ 1. This increasing set of conditional probabilities is feasible and supported by some observations [3, Appendix B]. However, the other two assumptions mentioned in Ref. [3, p. A-5 and 6], P(Hi) = q0 and P(Hi+1|H̄i) = q0 for i ≥ 1, can be simultaneously true only if there is no dependency (k = 0). Theorem 1. P(H2) = q0 if and only if there is no dependency, i.e. q1 = q0. Proof. This theorem follows directly from the total probability theorem presented above.
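As a quick numerical illustration (a sketch under the x = 1 assumption shared by these earlier models, not code from the paper), the total probability theorem gives P(H2) = q0(1 − q0 + q1), which collapses to q0 only when q1 = q0:

```python
# P(H2) via the total probability theorem, assuming the earlier models'
# rule P(H2|not H1) = q0 (i.e. x = 1):
#   P(H2) = P(H2|H1)P(H1) + P(H2|not H1)P(not H1) = q1*q0 + q0*(1 - q0)
def p_h2(q0, q1):
    return q1 * q0 + q0 * (1.0 - q0)

q0 = 0.03
print(p_h2(q0, 0.515))  # dependent case (q1 > q0): exceeds q0
print(p_h2(q0, q0))     # q1 = q0: no dependency, P(H2) = q0 (Theorem 1)
```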
One should notice that it is valid for any model: it does not depend on q2, q3, etc., or on any rules about how events before a success may influence future probabilities. □ Deleting the assumption P(Hi) = q0 for i ≥ 1 makes this model consistent with a special case (x = 1) of the present paper. Swain and Guttmann [4, Table 10-2] made the following assumptions about dependencies:
P(Hi+1|Hi) = (1 + Kq0)/(1 + K),    (1)
P(H̄i+1|H̄i) = (1 + K(1 − q0))/(1 + K),    (2)
where K = ∞, 19, 6, 1 and 0 for dependency levels Zero (ZD), Low (LD), Medium (MD), High (HD) and Complete (CD), respectively. Numerical values for q0 are suggested in reference guidebooks and depend on recovery factors such as alarms or double-checking of valve and switch positions after tests and maintenance. These modify the nominal basic error-of-omission probability q0 = 0.03. The Handbook also gives guidelines for the assessment of the dependency level, depending on the degree of separation between tasks in terms of time, space and administrative controls. Under the homogeneity Assumption 2 the dependency level (and K) is the same between any consecutive tasks (see Ref. [4, Table 10-1]). For i = 1, Eq. (1) yields q1, and K can be expressed as K = (1 − q1)/(q1 − q0). Developing Eq. (2) then yields
x = K/(1 + K) = (1 − q1)/(1 − q0).    (3a)
Using the total probability theorem repeatedly with P(Hi+1|Hi) = q1 and P(Hi+1|H̄i) = xq0, one can show that P(H2) = P(H3) = ⋯ = P(Hn) = q0 and
q2 = q3 = ⋯ = qn−1 = q1.    (3b)
These are rather obvious since only the immediately preceding task has any effect on the future in Eqs. (1) and (2). When reference is made to the "Handbook case" in the numerical results later on, it means that Eqs. (1), (2), (3a) and (3b) are used for the definition of x and q2, q3, …, qn−1. One can also consider an "optimistic case" in which any success creates a perfect mindset so that no more errors occur, i.e. x = 0 and P(Hi+1|H̄i) = 0 independent of earlier events. The "conservative case" refers to x = 1. However, in the following sections analytical results are first developed with the general model with arbitrary x, q0, q1, q2, etc., not restricted to any of these special cases.
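These relationships are easy to tabulate. The sketch below (my own illustration, assuming the nominal q0 = 0.03 and the Handbook-case relation x = (1 − q1)/(1 − q0)) evaluates q1 and x for each dependency level:

```python
import math

q0 = 0.03  # nominal basic error-of-omission probability
# K values for Zero, Low, Medium, High and Complete dependence
levels = {"ZD": math.inf, "LD": 19.0, "MD": 6.0, "HD": 1.0, "CD": 0.0}
for name, K in levels.items():
    # Eq. (1): conditional error probability after one preceding error
    q1 = q0 if math.isinf(K) else (1.0 + K * q0) / (1.0 + K)
    # Handbook-case coupling factor after a success
    x = (1.0 - q1) / (1.0 - q0)
    print(f"{name}: K = {K}, q1 = {q1:.4f}, x = {x:.4f}")
```

For high dependence (K = 1) this gives q1 = 0.515 and x = 0.5, so the Handbook case lies between the optimistic (x = 0) and conservative (x = 1) extremes.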
Conclusions
This paper reviewed and generalised some earlier models developed for quantification of dependent human error probabilities, and methods for analysing systems subject to such errors. General exact equations were developed for the probabilities of combinations of human errors. Implicit and explicit methods were described for taking the errors correctly into account in large system reliability analysis. Analytical results were obtained for m-out-of-n:G system unavailabilities and used for developing simplified approximate methods for system analysis. The explicit method leads to larger fault tree models than the implicit method, and to more complex calculations of the event probabilities. On the other hand, most computer codes can automatically handle large explicit models, and the rare-event approximation is somewhat more accurate in the explicit method than in the implicit method. Numerical tables were developed for the joint probabilities of the implicit method, and for the probabilities of the virtual events of the explicit method. The numbers clearly point out the non-identity of components with respect to human errors. The tables make correct system quantification feasible under the Handbook case. The main intended area of application for the results is the unavailability analysis of redundant standby safety systems consisting of n parallel trains that are periodically tested, calibrated or maintained. When one work cycle is short compared with the interval between cycles, a repeated error can remain undetected for a whole interval. Then the probabilities obtained here are relevant input to system models. The same formalism also applies to calculating the probability of a plant transient (initiating event) if such is caused by repeated operator errors. If the tasks of a maintenance cycle are staggered over an extended period, it is usually appropriate to consider the tasks mutually independent.
If not, one needs to develop the formalism further to account for partial overlapping of the residence times of the errors. The transformation equations between the implicit and explicit model event probabilities (Appendix A) are actually general, valid for any kinds of mutually dependent events Hi in system models, not only for human errors. Future efforts are directed to the analysis of common cause (hardware) failures and standby systems in general. Psychological considerations and some observations [3, Appendix B] indicate that some errors might promote errors progressively, q1 < q2 < q3 < ⋯, while learning could be possible for some error types, leading to q1 > q2 > q3, etc. The sequential failure model would be a natural candidate for modelling such sequences. The Handbook equations could also be modified by making K dependent on the number of preceding errors, or by using a recursive form qi+1 = (1 + Kqi)/(1 + K). More empirical evidence and model fitting would be needed to validate such assumptions, as well as to confirm realistic values for the parameter x measuring the power of success. Nevertheless, much of the analytical work provided in this paper could be applied immediately with such models and results.
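The recursive modification mentioned above is easy to explore numerically. A minimal sketch (function name my own) shows how successive errors would drive the conditional error probability monotonically toward 1:

```python
def recursive_qs(q0, K, n):
    # q_{i+1} = (1 + K*q_i)/(1 + K): each additional preceding error
    # raises the conditional probability of repeating the error
    qs = [q0]
    for _ in range(n - 1):
        qs.append((1.0 + K * qs[-1]) / (1.0 + K))
    return qs

print(recursive_qs(0.03, 6.0, 5))  # medium dependence: increasing sequence below 1
```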