Iterative learning control based tools to learn from human error
|Article Code||Publication Year||English Article||Persian Translation||Word Count|
|27478||2012||8-page PDF||available on request||not calculated|
Publisher : Elsevier - Science Direct
Journal : Engineering Applications of Artificial Intelligence, Volume 25, Issue 7, October 2012, Pages 1515–1522
This paper proposes a new alternative for identifying and predicting intentional human errors based on the benefits, costs and deficits (BCD) associated with particular human deviations. It is based on an iterative learning system. Two approaches are proposed. They consist of predicting barrier removal, i.e., the non-respect of rules by human operators, and of using the developed iterative learning system to learn from barrier removal behaviours. The first approach reinforces the parameters of a utility function associated with the respect of a given rule; this reinforcement directly affects the output of the predictive tool. The second approach reinforces the knowledge of the learning tool stored in its database. Data from an experimental study of a driving situation in a car simulator have been used with both tools in order to predict the behaviour of drivers. The two predictive tools make predictions from subjective data provided by the drivers, namely their subjective evaluation of the BCD related to respecting the priority-to-the-right rule.
Human reliability is defined as the capacity of human operators to achieve their required tasks in predefined conditions and not to achieve additional tasks that may damage system safety (Swain and Guttman, 1983). Human error is the complementary concept: the capacity of human operators not to achieve their required tasks in predefined conditions, or to achieve additional tasks that may damage system safety. Human reliability and human error therefore relate to several principles: human task analysis and modelling, erroneous task identification, and error evaluation. Many human error analysis methods exist, but most of them leave several problems unsolved (Vanderhaegen, 2003):
• The predefined conditions, i.e., the hypotheses used to identify and assess human errors, are not well defined. For instance, the capacity of human operators to achieve tasks can refer to a period of time or to an instantaneous time, can take human experience into account, and may or may not include the human recovery process.
• The measure of this capacity is usually assimilated to a probability of occurrence of a human error. Nevertheless, the conditions under which the probability was assessed are usually not described. The units of the probability are not specified, and it is difficult to compare probabilities obtained with different units. For instance, the ratio of the number of human error occurrences to the number of solicitations of the same task cannot be compared with the ratio of the number of human error occurrences per time unit. When human error methods assess the risks associated with human errors, they include a measure of the human error consequences. Here again, the units used for the consequence assessment are not always clearly defined.
• During the task analysis and task modelling processes, these methods do not take into account all the dependencies between tasks, such as functional dependencies (Vanderhaegen et al., 1994), time dependencies (Vanderhaegen, 1999) or causal dependencies (Vanderhaegen, 2004).
• Results obtained with human reliability analysis methods are often not homogeneous (Swain, 1990; Kirwan, 1997), are limited to unintentional human errors, and are off-line methods (Vanderhaegen, 2003).
• Moreover, the typology of errors taken into account by the majority of methods covers lapses, mistakes and faults. These are unintentional errors; violations are rarely taken into account.
• A comparison between a priori risk analysis and a posteriori analysis reveals some differences. These differences may be explained by the commission of violations and by a gap between the conditions of use of the system assumed at the design stage and the real conditions of use constrained by the context (evolution of productivity demand, variability of crews, etc.) (Amalberti, 2001; Rasmussen, 1997; Fadier and la Garza, 2007).
In the literature dedicated to car-driving violations, the main approach is statistical classification: different studies try to find the main characteristics of drivers that lead to the commission of violations, such as gender, age and psychological aspects like sensation seeking (Lucidi et al., 2010). This is a classification approach and cannot be used for on-line prediction. This paper therefore proposes an on-line approach to predict intentional human errors without relying on any probability. The approach is based on the iterative learning control concept, which was initially used to learn from errors when achieving automated repetitive tasks (Lee et al., 2000; Xu and Yan, 2004; Xu et al., 2004; Chien and Yao, 2004; Norrlöf and Gunnarsson, 2005).
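To make the borrowed concept concrete, the sketch below shows the classic iterative learning control update for a repetitive task: after each trial, the control input is corrected in proportion to the previous trial's tracking error, u_{k+1}(t) = u_k(t) + γ·e_k(t). The toy plant (output is half the input) and the gain γ are illustrative choices, not taken from the paper.

```python
def run_trial(u):
    """Toy repetitive plant: the output is a scaled copy of the input."""
    return [0.5 * ui for ui in u]

def ilc_update(u, reference, gamma=0.8):
    """One ILC iteration: apply u, measure the error, correct u."""
    y = run_trial(u)
    e = [r - yi for r, yi in zip(reference, y)]
    u_next = [ui + gamma * ei for ui, ei in zip(u, e)]
    return u_next, max(abs(ei) for ei in e)

reference = [1.0, 2.0, 3.0]   # desired output at each time step of the task
u = [0.0, 0.0, 0.0]           # initial control input
for k in range(30):           # repeat the same task, learning between trials
    u, err = ilc_update(u, reference)
print(round(err, 3))          # tracking error shrinks toward zero
```

With this plant the error contracts by a constant factor each trial, which is the property the paper transfers from repetitive control tasks to repeated human decisions.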
The concept is adapted here to develop a prediction system and to test the feasibility of predicting particular intentional human errors, called barrier-related violations or barrier removals, by taking into account the consequences of these erroneous human behaviours. A barrier is a human or technical system that aims at protecting the human–machine system from the occurrence or the consequences of undesirable events. It can be material, such as a wall, or immaterial, such as a procedure. A barrier is usually attached to a function: for instance, safety barriers are designed to protect the system from unsafe events, and reliability barriers from unreliable events. Sometimes, human operators in the field decide not to respect these barriers; this kind of erroneous behaviour is called barrier removal (Polet et al., 2009). The barrier removal concept has already been studied and observed in different application domains:
• barrier removal during the use of a production system, such as an industrial rotary press (Polet et al., 2002);
• barrier removal during the control of a transport system, such as car driving (Chaali-Djelassi and Vanderhaegen, 2006) or train control (Vanderhaegen et al., 2002; Polet and Vanderhaegen, 2007);
• barrier removal in biomechanical applications, such as human behaviour in a crash context (Robache et al., 2006; Pacaux-Lemoine and Vanderhaegen, 2007).
These studies propose that the human decision to respect a barrier or not depends on three attributes: the benefits, the costs and the potential deficits associated with the non-respect of the barrier (Polet et al., 2002).
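A minimal sketch of such a BCD-based decision, assuming (since the exact formula is not given in this excerpt) a linear utility: removal of the barrier is predicted when the weighted benefit outweighs the weighted cost and potential deficit. The weights and subjective scores are illustrative values, not data from the study.

```python
def predict_barrier_removal(benefit, cost, deficit, w=(1.0, 1.0, 1.0)):
    """Return True if the utility of removing the barrier is positive."""
    wb, wc, wd = w  # weights on benefits, costs and deficits
    utility = wb * benefit - wc * cost - wd * deficit
    return utility > 0

# Large perceived benefit, small cost and small perceived risk -> removal.
print(predict_barrier_removal(benefit=0.8, cost=0.2, deficit=0.1))  # True
# High perceived deficit (e.g. accident risk) -> the barrier is respected.
print(predict_barrier_removal(benefit=0.8, cost=0.2, deficit=0.9))  # False
```

In the paper's setting the three inputs would be the driver's subjective BCD evaluations, and the weights are what the iterative learning system adjusts.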
Conclusion
This paper proposed two approaches based on the iterative learning control concept in order to predict human behaviour when coping with a barrier. Both tools are supported by BCD modelling. The first approach iteratively builds the utility function that models human decision making: after each iteration, the weights associated with the different elements of the utility function (i.e., benefits, costs and deficits) are reinforced. The second approach uses the same inputs and predicts the behaviour of the driver by reinforcing the database knowledge of human behaviour in terms of BCD parameters. Both learning systems were tested with data from subjective BCD assessments by car drivers in a simulated environment, and the correct prediction rate is higher than 80% at the last iteration. BCD modelling is the relevant support for both approaches: the first takes into account intra-individual differences, whereas the second involves inter-individual differences. Such variability between humans is important because preferences on decisional criteria may change over time. Therefore, future studies may integrate other kinds of parameters into the BCD values, such as human preferences, the priority of choice, or the probability of success or failure of a barrier removal. A future system will be developed integrating both reinforcement strategies presented in this paper: (1) reinforcement with respect to the predicted and the real output, and (2) reinforcement with respect to the current and the previous knowledge. Moreover, the application of such approaches may be extended to other driving situations, such as the respect of stop signs or other road signals. A future application of this work concerns the possible design of car-driving support systems and, more globally, safety measures for transportation. A first insight is to study the influence of on-board alerting of near-violation commission on the driver's decision.
A second insight concerns communication between cars: a car may alert another one that a violation will almost certainly be committed, so that a safety behaviour can be activated (braking, an avoidance manoeuvre, etc.).
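The first reinforcement strategy above can be sketched as follows. This is a hypothetical, perceptron-style update chosen for illustration: after each observed decision, the BCD weights are nudged so that the predicted output moves toward the real output. The actual update rule, learning rate and data in the paper may differ.

```python
def predict(weights, bcd):
    """Predict 1 (barrier removal) if the linear BCD utility is positive."""
    wb, wc, wd = weights
    b, c, d = bcd
    return 1 if wb * b - wc * c - wd * d > 0 else 0

def reinforce(weights, bcd, observed, rate=0.1):
    """Shift the weights toward the observed behaviour when the prediction errs."""
    error = observed - predict(weights, bcd)  # -1, 0 or +1
    b, c, d = bcd
    wb, wc, wd = weights
    return (wb + rate * error * b,    # raise benefit weight on missed removals
            wc - rate * error * c,    # lower cost weight symmetrically
            wd - rate * error * d)

# Illustrative subjective (B, C, D) scores paired with the observed decision.
observations = [((0.9, 0.1, 0.2), 1), ((0.3, 0.6, 0.8), 0), ((0.8, 0.2, 0.3), 1)]
weights = (0.2, 0.8, 0.8)             # deliberately poor initial weights
for _ in range(20):                    # iterate, reinforcing after each decision
    for bcd, observed in observations:
        weights = reinforce(weights, bcd, observed)
correct = sum(predict(weights, bcd) == obs for bcd, obs in observations)
print(correct, "of", len(observations))
```

The second strategy would instead keep the observed (BCD, decision) pairs in a database and reinforce that stored knowledge across drivers; the loop structure, predicting and then correcting on each iteration, is the same.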