Download ISI English Article No. 12018

English Title
Testing the accuracy of employee-reported data: An inexpensive alternative approach to traditional methods
Article Code | Publication Year | English Article Length
12018 | 2008 | 11-page PDF
Source

Publisher: Elsevier - ScienceDirect

Journal: European Journal of Operational Research, Volume 189, Issue 3, 16 September 2008, Pages 583–593

Keywords
Digital analysis, Benford’s law, Plastics industry, Operations management, Fraud detection
Article Preview

Abstract

Although Information Technology (IT) solutions improve the collection and validation of operational data, operations managers must often rely on self-reported data from workers to make decisions. The problem with such data is that it is subject to intentional manipulation, reducing its suitability for decision-making. Digital analysis, a method of identifying manipulated data, addresses this problem at low cost. In this paper, we demonstrate how this method can be used in real-world companies to validate self-reported data from line workers. The results of our study suggest that, within limited contexts, digital analysis estimates the accuracy of employee-reported data in operations management. These findings can lead to improved operating performance by giving practitioners a tool to exclude inaccurate information.

Introduction

In 2001, we conducted a quality audit for a manufacturing firm to discover the source of an uncharacteristic increase in the number of customer returns. The firm employed Statistical Process Control (SPC) to track the performance of its production process. After reviewing the SPC charts, we found no patterns indicating the level of defects observed in the returned goods, suggesting a product design flaw or a specification error that had been transmitted into the equipment setup and charts. However, further investigation revealed that an experienced line worker (machine operator) had fabricated product weight data on the SPC chart, a fact discovered only after 10 days' production had been shipped, rejected, and returned by the customer. The operator had failed to perform his required weight checks on production runs and instead recorded random fraudulent weights within the SPC limits in a pattern that appeared genuine, so SPC did not detect the product defects. This single incident cost the company an estimated $300,000 (US), of which only $40,000 was recoverable.

This issue presented an interesting problem to managers. Since managers rely on employee-reported data to make decisions, how can they estimate the accuracy of that information without reinstituting the traditional quality-control inspection and sampling procedures they had worked hard to replace? Additionally, since the company's managers prided themselves on trusting employees, how could they ensure data accuracy without instilling a sense of distrust among machine operators? After all, the problem occurred with only one of the 21 operators employed at this facility. If the managers in this example had had an inexpensive tool to validate the data reported by the dishonest operator, the problem could have been identified earlier.
The purpose of this study is to provide operations managers with a tool to validate self-reported data from line workers where the reports are the only source of information, or where secondary sources are difficult or expensive to obtain. The problem with employee-reported data is that it gives individuals the opportunity to manipulate the results, reducing its suitability for decision-making. Managers who lack a secondary source for verifying the accuracy of employee-reported data may find decreased performance in activities and processes that rely on this information. Hence, we examined how other disciplines address this issue and found that financial auditors commonly employ a method called digital analysis to identify suspect data. One purpose of this study is to apply this method in two companies and industries to evaluate its ability to detect fraudulent data in the context of operations. A second purpose is to extend the use of digital analysis to data types not considered in previous studies, i.e., to distributions previously considered inappropriate for digital analysis.
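The paper does not reproduce its test procedure at this point, but the core of digital analysis as auditors use it is to compare observed leading-digit frequencies against Benford's law. The following is a minimal sketch of that idea, not the authors' exact implementation; the function names are ours.

```python
import math
from collections import Counter


def benford_expected(d: int) -> float:
    """Expected proportion of leading digit d (1-9) under Benford's law."""
    return math.log10(1 + 1 / d)


def first_digit(x) -> int:
    """Leading significant digit of a nonzero number."""
    s = f"{abs(x):.10e}"  # scientific notation, e.g. '3.0000000000e+02'
    return int(s[0])


def benford_chi_square(values) -> float:
    """Chi-square statistic comparing observed leading digits to Benford's law.

    A large statistic flags the data as *suspect* (compare against 15.51,
    the 5% critical value for 8 degrees of freedom); it is not proof of fraud.
    """
    values = [v for v in values if v != 0]
    n = len(values)
    counts = Counter(first_digit(v) for v in values)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * benford_expected(d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2
```

Genuine process data that spans several orders of magnitude tends to conform to Benford's law, whereas numbers invented inside a narrow band (such as weights fabricated within SPC limits) typically do not; this is why the study pairs the test with careful qualification of which data types are appropriate for it.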

Conclusion

This study demonstrates how to conduct digital analysis and suggests potential benefits from its use in operations management. We qualified two types of manufacturing-line measures as appropriate for digital analysis and, in doing so, found that the technique could be extended to data types previously considered inappropriate; we found no previous study applying digital analysis in this manner. In addition, we used an experiment to demonstrate how to detect systematic problems and provide cost benefits to an adopting firm. A significant benefit of this application is that it is relatively easy and inexpensive to use, making it a preferable method even when other verification options are available. It can also be performed without the knowledge or assistance of the line workers who generate the data, something other techniques find difficult to accomplish. In the example given in the introduction, digital analysis might have caught the fraudulent reports early, preventing the loss of customer goodwill and the scrapping of ten days' production. However, following the lead of others, we propose its use as a tool for identifying suspect data, not as final evidence of fraud or non-systematic error. Lastly, we propose that digital analysis be extended to data containing some degree of nominal values; we suggest the minimum criterion is that at least one digit's place must contain random values appropriate for digital analysis.
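The proposed extension, restricting the test to a digit place that is actually free to vary, can be illustrated with a second-digit version of the same idea, useful when the leading digit is nominal (e.g. all reported weights cluster near a target value). This is our own hypothetical sketch, not a procedure taken from the paper; 16.92 is the standard 5% chi-square critical value for 9 degrees of freedom.

```python
import math
from collections import Counter


def benford_second_digit_expected(d: int) -> float:
    """Expected proportion of second significant digit d (0-9) under Benford's
    law: P(d2 = d) = sum over first digits d1 of log10(1 + 1/(10*d1 + d))."""
    return sum(math.log10(1 + 1 / (10 * d1 + d)) for d1 in range(1, 10))


def second_digit(x) -> int:
    """Second significant digit of a nonzero number."""
    s = f"{abs(x):.10e}"  # scientific notation, e.g. '5.0310000000e+02'
    return int(s[2])      # the digit right after the decimal point


def second_digit_chi_square(values) -> float:
    """Chi-square statistic for the second-digit test (9 degrees of freedom).

    Applicable when the leading digit carries a nominal value but later digit
    places vary; compare against 16.92, the 5% critical value.
    """
    values = [v for v in values if v != 0]
    n = len(values)
    counts = Counter(second_digit(v) for v in values)
    return sum(
        (counts.get(d, 0) - n * benford_second_digit_expected(d)) ** 2
        / (n * benford_second_digit_expected(d))
        for d in range(10)
    )
```

The second-digit distribution is much flatter than the first-digit one (about 0.120 for 0 down to 0.085 for 9), so larger samples are needed for the same discriminating power, but the test still flags data whose varying digit place is filled in by habit rather than by the underlying process.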