Advancements centered on pattern classification in human-in-the-loop decision support systems
Article code: 5505 | Year: 2011 | English article: 19 pages (PDF)
Publisher: Elsevier - Science Direct
Journal: Decision Support Systems, Volume 50, Issue 2, January 2011, Pages 460–468
Data mining has been a key technology in the warranty sector for mass manufacturers to understand and improve product quality, reliability and durability. Cost savings are an important aspect of business, which calls for error-proof processes. Pattern classification methods applied to diagnostic data can help build error-proof processes by improving the diagnostic technology. In this paper we present a case study from the automotive warranty and service domain involving a human-in-the-loop decision support system (HIL-DSS). Automotive manufacturers offer warranties on products made of parts from different suppliers, and rely on a dealer network to assess warranty claims. The dealers use diagnostic equipment manufactured by third parties and also draw on their own expertise. In addition, a subject matter expert (SME) assesses these collective decisions to distinguish between inaccurate diagnoses by the dealers and an inadequate decision algorithm in the diagnostic equipment. Altogether this makes a comprehensive HIL-DSS. The proposed methodology continuously learns from collective decision-making systems, enhances the diagnostic equipment, adds to the knowledge of dealers and minimizes SME involvement in the review process of the overall system. Improving the diagnostic equipment helps in better warranty servicing, whereas improvements in the human expert knowledge help prevent field errors and avoid customer dissatisfaction due to improper fault diagnosis.
There are different kinds of decision support systems (DSS): model driven, communication driven, data driven, document driven and knowledge driven. Knowledge is gathered over time, and it plays a crucial role in the decision-making process. There are many ways of designing a DSS; one could, in principle, incorporate all the possible factors under consideration, which in real life can make the system very complicated. Realistically, only quantifiable factors can be easily incorporated into the DSS, and there is little possibility for the DSS to learn new information and update automatically. This points to the DSS's limitations and incompleteness. Human experts, on the other hand, can learn and use the knowledge gained to make decisions even based on factors that cannot be easily incorporated algorithmically. For instance, consider a scenario where human experts use a limited/incomplete DSS to make actionable decisions but do not blindly follow it, i.e. the humans use their expertise on top of the DSS to make the final decision. This limited/incomplete DSS plus the human expertise (which can be a representation of a knowledge-driven DSS) forms a comprehensive system that we call a human-in-the-loop decision support system (HIL-DSS). Decisions made this way are quite common in real-life setups, e.g. the warranty and service domain and air traffic management. In this paper, we demonstrate enhancements to the HIL-DSS using the warranty and service domain as an example. The warranty space is a complex and sensitive structure from the manufacturer's point of view because it needs special handling to put the customer's priority first while still maintaining profitability. In such a case, there are several factors that need to be considered while designing the DSS. As an illustration, in the automotive warranty domain, manufacturers define service procedures, provide diagnostic testing tools and train the dealers to provide service support to the customers.
In modern vehicles, many diagnostic sensors are also built in to comply with government regulations. These regulations are focused on customer safety and environmental concerns. For parts like the battery, the air-conditioning system and others, manufacturers depend either on built-in sensors or on commercial testing equipment. For the latter, manufacturers choose testers that satisfy their criteria and provide requirements to the tester OEM (original equipment manufacturer) to adapt them to suit their needs. These testers are then deployed in the field, and dealer technicians are instructed to take appropriate action per the tester outcomes. In the process, diagnostic data measured by the testers, pertaining to repairs performed by the dealer technicians, are mandatorily collected as part of the warranty claims. With high warranty costs, the manufacturers are apprehensive about many of the claims but are constrained from acting. The reason is twofold: 1) the nature and large volume of claims, which they cannot verify; and 2) the lack of adequate proof to back their apprehension. Incorrect/incomplete data provided by the dealers with the claims usually adds to the confusion. More importantly, the manufacturer too may not know all the factors that need to be collected in order to assess the correctness of the diagnosis and take subsequent action on every claim. Needless to say, understanding the completeness of the data collection is a continuous learning process. Fig. 1 shows the current HIL-DSS used in the automotive warranty space. The various components involved in the HIL-DSS are described as follows:
•Scenario: The vehicle encounters a problem that requires it to be brought to the dealer for repair. A "scenario" is a representation of the failure.
•DSS: The diagnostic tester is the DSS used by the dealer for the assessment of the failed component in a given scenario.
•Human expert: The dealer technician, or human expert, takes appropriate action based on his/her experience and does not rely entirely on the tester outcome. Although decisions informed by human expertise and knowledge are expected to enhance the DSS, they also leave an opportunity for field errors to prevail.
•Subject matter expert: Humans with in-depth knowledge of the domain. These subject matter experts (SMEs) are capable of reviewing decisions made by the DSS or the human expert and recommending enhancements/improvements to either (the human knowledge or the diagnostic tester algorithm).
Conclusion
Decision support systems with a human in the loop (HIL-DSS) are quite common. There are two essential components of such systems: the algorithmic component and the human expert. The algorithmic component is usually well defined and well designed based on the requirements of the DSS. Human involvement enables decisions to be based on experience and on non-quantifiable factors that cannot easily be implemented algorithmically; without these human-based factors, the algorithmic component is limited/incomplete. Human involvement, however, also brings fuzziness into the system through (un)intentional human errors. The HIL-DSS that we consider in this work is complex on both fronts: the algorithm behind the algorithmic component is not known (for proprietary reasons), and the human expert also introduces uncertainty into the system. The absence of the DSS algorithm makes the DSS outcomes difficult to understand. We present a novel approach based on pattern classification to: 1) continuously learn from collective decision-making systems; 2) enhance the limited/incomplete DSS; 3) add to the knowledge of the human experts; and 4) minimize human involvement in the review process of the overall system. This necessitates a completely data-driven, bottom-up approach. Based on data availability, data mining techniques can be used to learn the DSS algorithm at a high level, with features of the human decisions also incorporated in the learning. The salient features of the proposed methodology are as follows:
•Agreement/disagreement data: With data available on both the DSS decisions and the human decisions, a simple set intersection over the decisions divides the data into agreement and disagreement sets.
•Classifier selection: We learn the comprehensive model using two different pattern classification techniques, namely decision trees and support vector machines. These were chosen based on the nature of the data used in our analysis. We assess the learnt model by studying its classification accuracy, both by data-splitting of the agreement data and by cross-validation.
•Testing the learnt model: The model learnt on the agreement data is applied to the disagreement data to determine which scenarios agree with the human decisions and which with the DSS decisions. The two tests are complementary because each scenario is tested against one of the two decision sources.
•Identification of field errors and DSS improvement areas: Disagreement between the learnt model and the field decisions indicates potentially misdiagnosed scenarios that need further investigation. Disagreement between the learnt model and the tester decisions flags scenarios that must be analyzed for required changes/updates to the tester's diagnostic algorithm.
•Boosted classifier outputs: The disagreements between the learnt model and the human/DSS decisions, isolated using the confusion matrices of each classifier, are fused using set intersection to obtain boosted classifier outputs with higher confidence.
•Subject matter expert review and feedback: These boosted classifier outputs are further reviewed by the subject matter experts, who recommend enhancements. In the context of the case study presented, these can be the addition of new features to the diagnostic tester (for example, charging/repair time, which influences customer satisfaction) or a revision of the service procedure followed by the technician. Feedback from the subject matter expert is expected to continuously reduce the number of disagreement scenarios and hence make the system robust.
•Incorporation of non-quantifiable factors: Some non-quantifiable factors can be quantified by adding features to the diagnostic tester. For example, including a charging-time estimate helps explain decisions made in the field based on customer satisfaction.
To summarize, using past data and the pattern classification approach, we have identified potential areas of improvement in both the DSS and the human expert knowledge. In addition, we have significantly reduced the number of scenarios to be reviewed by the subject matter expert, thereby making the HIL-DSS more efficient. For the case study presented, we limited the number of scenarios to be reviewed by the subject matter expert from 5417 to 190 (= 73 + 110 + 7; a 96.49% reduction) in the brand B1 dataset and from 24,312 to 756 (= 192 + 527 + 37; a 96.89% reduction) in the brand B2 dataset (Table 4). However, our approach has certain limitations: 1) there is an inherent assumption that the data is of good quality and has not been manipulated or skewed, which needs to be validated in the presence of noise; 2) the approach does not work for data with missing values; such scenarios are simply ignored; 3) some subjectivity is involved due to the SME, and this cannot be removed; and 4) the approach only handles binary decisions (as opposed to nominal decisions). We plan to address these limitations in future work.
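The agreement/disagreement split and the boosted fusion step described above can be sketched in a few lines. This is a toy illustration under stated assumptions: the records, field names, and the two threshold rules standing in for the trained decision tree and SVM are invented here; the paper's actual features and models are not reproduced.

```python
# Toy sketch of the methodology: set-intersection split of claims into
# agreement/disagreement data, then fusing two classifiers' flags.
# Records, fields, and threshold "models" are made-up illustrations.

def split_agreement(records):
    """Split claims by comparing the tester (DSS) decision with the
    technician (human) decision on each record."""
    agree = [r for r in records if r["dss"] == r["human"]]
    disagree = [r for r in records if r["dss"] != r["human"]]
    return agree, disagree

def flag_disagreements(model, records, source):
    """IDs of records where the learnt model disagrees with the given
    decision source ('dss' or 'human')."""
    return {r["id"] for r in records if model(r) != r[source]}

# Stand-ins for classifiers learnt on the agreement data; a real run
# would fit a decision tree and an SVM on the diagnostic measurements.
tree_model = lambda r: "fail" if r["voltage"] < 11.8 else "pass"
svm_model = lambda r: "fail" if r["voltage"] < 11.9 else "pass"

records = [
    {"id": 1, "voltage": 11.5, "dss": "fail", "human": "fail"},
    {"id": 2, "voltage": 12.6, "dss": "pass", "human": "fail"},
    {"id": 3, "voltage": 11.7, "dss": "fail", "human": "pass"},
]
agree, disagree = split_agreement(records)

# Boosted output: only scenarios that BOTH classifiers flag against the
# field (human) decision are escalated to the subject matter expert.
boosted = (flag_disagreements(tree_model, disagree, "human")
           & flag_disagreements(svm_model, disagree, "human"))
```

Requiring both classifiers to agree before escalation is what shrinks the SME review set: a scenario flagged by only one model stays out of `boosted`, mirroring how the set intersection of the two confusion-matrix-isolated disagreement sets yields the higher-confidence output described above.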