A hybrid Bayesian Network approach to detecting driver cognitive distraction
Publisher: Elsevier - Science Direct
Journal : Transportation Research Part C: Emerging Technologies, Volume 38, January 2014, Pages 146–155
Driver cognitive distraction (e.g., a hands-free cell phone conversation) can lead to unapparent, but detrimental, impairment of driving safety. Detecting cognitive distraction represents an important function for driver distraction mitigation systems. We developed a layered algorithm that integrated two data mining methods—Dynamic Bayesian Networks (DBNs) and supervised clustering—to detect cognitive distraction using eye movement and driving performance measures. In this study, the algorithm was trained and tested with data collected in a simulator-based study, where drivers drove either with or without an auditory secondary task. We calculated 19 distraction indicators and defined cognitive distraction using the experimental condition (i.e., “distraction” in the drives with the secondary task, and “no distraction” in the drives without it). We compared the layered algorithm with previously developed DBN and Support Vector Machine (SVM) algorithms. The results showed that the layered algorithm achieved prediction performance comparable to the two alternatives. Nonetheless, the layered algorithm shortened training and prediction time compared to the original DBN because supervised clustering improved computational efficiency by reducing the number of inputs to the DBNs. Moreover, the supervised clustering of the layered algorithm revealed rich information on the relationship between driver cognitive state and performance. This study demonstrates that the layered algorithm can capitalize on the best attributes of the component data mining methods and can identify human cognitive state efficiently. The study also shows the value of the supervised clustering method as an approach to feature reduction in data mining applications.
Driver distraction has emerged as a critical risk factor for motor vehicle crashes. Recent data show that 16% of fatal crashes and 21% of injury crashes were attributed to driver distraction in 2008 (Ascone et al., 2009). The increasing use of information technologies in vehicles (e.g., navigation systems, smart phones, and other internet-based devices) will likely exacerbate the problem of distraction. From 2009 to 2010, visible headset cell phone use and visible manipulation of handheld devices while driving increased 50% – from 0.6% to 0.9%. These absolute values may underrepresent the usage of information technologies on the road because drivers were observed for only approximately 10 s at sampled roadway sites and might have used technologies that were undetectable outside the vehicle, such as a Bluetooth headset (NHTSA, 2011). An estimated nine percent of drivers used either hands-free or hand-held phones while driving at a typical daylight moment in 2010 (NHTSA, 2011). Therefore, although drivers benefit from these devices, it is also critical for drivers to avoid distraction and direct an acceptable level of attention to the road. A promising strategy to minimize the effect of distraction is to develop intelligent in-vehicle systems, namely adaptive distraction mitigation systems, which can provide real-time assistance or retrospective feedback to reduce distraction based on driver state/behavior, as well as the traffic context (Lee, 2009 and Toledo et al., 2008). For example, when a driver is engaged in an intense negotiation via cell phone in heavy traffic, the adaptive distraction mitigation system can warn the driver and encourage the driver to attend to the road, or in an extreme case, the system can automatically hold the call until the driver can get off the road. Such systems must accurately and non-intrusively detect whether drivers are distracted.
In this context, distraction can be defined as a diversion of a driver’s attention away from the activities critical for safe driving toward a competing activity (Lee et al., 2008). Detecting driver distraction depends on how distraction changes driver behavior compared to normal driving without distraction, which in turn can depend on the type of distraction. Considering the type of attentional resources for which a distracting activity competes with driving, visual distraction and cognitive distraction represent two critical types – “eye-off-road” and “mind-off-road” – although they are not mutually exclusive in real driving (Liang and Lee, 2010 and Victor, 2005). Visual distraction relates to whether drivers look away from the road (i.e., on-road or off-road glances) and can be determined from momentary changes in drivers’ eye glances. A general algorithm that considers driver glance behavior across a relatively short period could detect visual distraction consistently across drivers (Liang et al., 2012). However, detecting cognitive distraction is much more complex because the signs of cognitive distraction are usually not readily apparent, are unlikely to be described by a simple linear relationship, and can vary across drivers. Detecting cognitive distraction likely requires an integration of a large number of indicators (e.g., eye gaze measures) over a relatively long time and may need to be personalized for different drivers (Liang et al., 2007b). The challenge is how to integrate performance measures in a logical manner to quantify the complex, even unknown, relationships between drivers’ cognitive state and distraction indicators. Data mining methods that can extract unknown patterns from a large volume of data present an innovative and promising approach to this end.
In previous studies, two data mining methods—Support Vector Machines (SVMs) and Dynamic Bayesian Networks (DBNs)—successfully detected cognitive distraction from driver visual behavior and driving performance (Liang et al., 2007a and Liang et al., 2007b). SVMs, proposed by Vapnik (1995), are based on statistical learning theory and can be used for non-linear classification. To train binary-classification models, SVMs use a kernel function, K(x_i, x_j) = Φ(x_i)^T Φ(x_j), to map training data from the original input space to a high-dimensional feature space. When the mapped data are linearly separable in the feature space, the hyperplane that maximizes the margin to the closest data points of each class produces the minimized upper bound of generalization error and yields a nonlinear boundary in the input space. When the data are not linearly separable in the feature space, the positive penalty parameter, C, allows for training error ε by specifying the cost of misclassifying training instances (Hsu et al., 2008). The training process of SVMs minimizes both the training error and the upper bound of generalization error. This method is computationally efficient and minimizes generalization error to avoid over-fitting. SVMs produce more robust models than linear-regression algorithms that minimize the mean square error, which can be seriously affected by outliers in training data. Tested with the data collected in a simulator study, SVMs detected cognitive distraction with an average accuracy of 81%, outperforming the traditional logistic regression method. Cognitive distraction was defined by the experimental conditions: either the drive when drivers drove under cognitive distraction or the drive without distraction. Nonetheless, SVMs do not consider time-dependent relationships between variables, and the resultant models do not present the relationships learned from data in an interpretable way.
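The SVM approach described above can be illustrated with a short sketch (not the authors' code): a kernel SVM with an RBF kernel is trained on synthetic stand-ins for distraction indicators, with the penalty parameter C controlling the cost of misclassified training instances. The data and parameter values here are illustrative assumptions.

```python
# Illustrative sketch of kernel-SVM classification for "distracted" vs.
# "not distracted" samples; features are synthetic stand-ins for real
# distraction indicators (e.g., gaze dispersion, steering variability).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_no_distraction = rng.normal(loc=0.0, scale=1.0, size=(100, 4))
X_distraction = rng.normal(loc=1.5, scale=1.0, size=(100, 4))
X = np.vstack([X_no_distraction, X_distraction])
y = np.array([0] * 100 + [1] * 100)

# C is the penalty parameter from the text: a larger C tolerates fewer
# misclassified training instances at the cost of a narrower margin.
# The RBF kernel maps inputs to a high-dimensional feature space, so the
# linear separating hyperplane there is nonlinear in the input space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
accuracy = clf.score(X, y)
```

In practice the kernel type and C would be chosen by cross-validation, as recommended in the Hsu et al. guide cited above.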
Bayesian Networks (BNs) represent a probability-based approach and can be presented graphically (depicted in Fig. 1): nodes depict random variables and arrows depict conditional dependencies between variables. For example, the arrow between variable nodes H and S indicates that S is conditionally independent of all other variables given H. Dynamic BNs, one type of BN, can model a time series of events according to a Markov process (Fig. 1b). The training process of BN models includes structure learning and parameter estimation. Structure learning identifies the possible connections between nodes in a BN, whereas parameter estimation identifies the conditional probabilities for those connections (Ben-Gal, 2007). Compared with SVMs, DBNs are easy to interpret, can consider time-dependent relationships between cognitive state and distraction indicators, and obtain more accurate and sensitive models (Liang and Lee, 2008). However, DBNs are not computationally efficient, needing an average of 20 min of processing time to train a model, compared to 15 s to train an SVM model with the same training data.

Fig. 1. Two examples of Bayesian Networks.

To obtain accurate, efficient, and interpretable distraction detection algorithms, we combined DBNs and a feature reduction method (e.g., clustering) in a hierarchical manner (Fig. 2). The hierarchical structure has been demonstrated to be effective in other detection systems that, like the detection of cognitive distraction, need to integrate a number of variables. Veeraraghavan et al. (2007) combined an unsupervised clustering method and a binary Bayesian eigenimage classifier in a cascade fashion to identify driver activities in vehicles from computer vision data. Another study combined Dynamic Bayesian Clustering and an SVM model in sequence to forecast electricity demand (Fan et al., 2006). These models have two layers.
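The time-dependent inference that a DBN performs can be sketched in its simplest form as forward filtering over a two-state Markov process. The transition and emission probabilities below are illustrative assumptions, not the paper's trained parameters: a hidden cognitive state H_t (0 = attentive, 1 = distracted) evolves over time and emits a discretized observation O_t (e.g., a binned gaze measure) at each step.

```python
# Minimal DBN-style filtering sketch with assumed parameters: compute
# P(H_t | O_1..O_t) recursively, alternating prediction (Markov
# transition) and correction (conditioning on the new observation).
import numpy as np

transition = np.array([[0.9, 0.1],     # P(H_t | H_{t-1} = attentive)
                       [0.2, 0.8]])    # P(H_t | H_{t-1} = distracted)
emission = np.array([[0.7, 0.2, 0.1],  # P(O_t | H_t = attentive)
                     [0.1, 0.3, 0.6]]) # P(O_t | H_t = distracted)
prior = np.array([0.5, 0.5])           # P(H_1) before any observation

def filter_states(observations):
    """Forward algorithm: belief P(H_t | O_1..O_t) for each time step."""
    belief = prior * emission[:, observations[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for o in observations[1:]:
        belief = (transition.T @ belief) * emission[:, o]  # predict, correct
        belief /= belief.sum()                             # normalize
        beliefs.append(belief)
    return np.array(beliefs)

# A run of observations typical of distraction should shift the belief
# toward the "distracted" state.
beliefs = filter_states([2, 2, 2, 2])
```

This is the Markov-process structure of Fig. 1b reduced to one hidden node and one observation node per time slice; the paper's DBNs integrate many more indicators per slice.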
The lower-layer model summarizes basic measures into more abstract characteristics of the target so that the higher-layer model classifies examples with fewer indicators. This approach can reduce the computational load and make the contributions of model inputs interpretable relative to the ultimate classification.

Fig. 2. The structure of the layered algorithm. The curving, solid arrows indicate data flow. The straight, lined arrows in the DBN algorithm indicate associations between variables.

Our approach uses supervised clustering models at the lower layer to identify feature behaviors associated with cognitive distraction (i.e., clusters) based on a number of performance measures. Supervised clustering methods are built upon the concept of traditional unsupervised clustering, but extend it by giving some direction (i.e., supervision) to the otherwise blind search for structure among instances, in a manner analogous to Partial Least Squares as a supervised version of Principal Component Analysis. At the higher layer, a DBN model uses the labels of these feature behaviors as input values to recognize driver cognitive state. This algorithm reduces the number of input variables to the DBNs and is expected to improve computational efficiency relative to the original DBN algorithm. At the same time, the layered algorithm preserves time dependency and ease of interpretation. The objective of this study is to demonstrate that the layered algorithm is an accurate, efficient, and interpretable approach to detecting driver cognitive distraction, compared with the interpretable, but inefficient, DBNs and the uninterpretable, but efficient, SVMs.
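The two-layer idea can be sketched as follows. This is a hedged simplification: the lower layer is approximated here by per-class k-means (an assumption, not the authors' exact supervised clustering method), it compresses the 19 raw indicators into a single cluster label, and a simple nearest-centroid lookup stands in for the higher-layer DBN to keep the example self-contained.

```python
# Sketch of the layered structure: per-class clustering (lower layer)
# reduces 19 continuous indicators to one discrete "feature behavior"
# label, which the higher-layer model then classifies.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic data: 19 indicators per sample, matching the count in the study.
X = np.vstack([rng.normal(0.0, 1.0, (150, 19)),   # no-distraction drives
               rng.normal(1.0, 1.0, (150, 19))])  # distraction drives
y = np.array([0] * 150 + [1] * 150)

# Lower layer: cluster each class separately, so every cluster describes a
# feature behavior associated with one cognitive state.
clusters_per_class = 3
centroids, centroid_labels = [], []
for c in (0, 1):
    km = KMeans(n_clusters=clusters_per_class, n_init=10, random_state=0)
    km.fit(X[y == c])
    centroids.append(km.cluster_centers_)
    centroid_labels += [c] * clusters_per_class
centroids = np.vstack(centroids)
centroid_labels = np.array(centroid_labels)

def lower_layer(x):
    """Map a 19-dimensional sample to its nearest feature-behavior cluster."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def predict(x):
    """Higher layer (stand-in for the DBN): classify from the single
    cluster label instead of 19 continuous inputs."""
    return centroid_labels[lower_layer(x)]

accuracy = np.mean([predict(x) == t for x, t in zip(X, y)])
```

The computational benefit claimed in the text comes from exactly this reduction: the higher-layer model receives one discrete input per time step rather than 19 continuous ones.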
Conclusion
Based on the results, although the layered algorithm did not improve cognitive distraction detection accuracy, it did significantly improve computational efficiency. The layered algorithm also provides useful insights concerning the effects of cognitive distraction on driver behavior, insights that have no equivalent in the SVM algorithm or traditional statistical tests. This study demonstrated that data mining methods can identify human cognitive state from eye glance behavior and driving performance.