A stepwise benchmarking method for inefficient DMUs based on proximity-based target selection
|Article code||Publication year||English article||Persian translation||Word count|
|1312||2009||10-page PDF||Available to order||Not calculated|
Publisher : Elsevier - Science Direct
Journal : Expert Systems with Applications, Volume 36, Issue 9, November 2009, Pages 11595–11604
DEA is a useful nonparametric method for measuring the relative efficiency of a DMU and yielding a reference target for an inefficient DMU. However, it is very difficult for an inefficient DMU to become efficient by benchmarking a target DMU whose input use differs greatly from its own. Identifying benchmarks based on similarity of input endowments makes it easier for an inefficient DMU to imitate its target DMUs. In practice, however, it is rare to find a target DMU that is both the most efficient and similar in input endowments. It is therefore necessary to provide an optimal path to the most efficient DMU on the frontier through repeated applications of a proximity-based target selection process. We propose a dynamic stepwise benchmarking method that lets inefficient DMUs improve their efficiency gradually. An empirical study compares the performance of the proposed method with prior methods on a dataset collected from Canadian bank branches. The comparison shows that the proposed method is a practical way to obtain gradual improvement for inefficient DMUs while ensuring that they eventually reach the frontier.
Data Envelopment Analysis (DEA) is a mathematical-programming-based technique that constructs an efficient frontier to estimate the relative efficiency of each decision making unit (DMU) in the problem set (Charnes, Cooper, & Rhodes, 1978). It is built around the idea of evaluating a DMU by how well it creates outputs from the inputs it consumes. A DMU is said to be relatively, or Pareto, efficient if no other DMU or combination of DMUs can improve one of its outputs without worsening any of its other outputs or increasing at least one of its input levels. DEA can thus determine whether a DMU is relatively efficient and then yield a reference target for an inefficient one. However, it is very difficult for an inefficient DMU to become efficient when it must benchmark a target DMU with very different input use. In practice, many DMUs compete with other DMUs that have similar input endowments; a small or medium-sized company, for example, sets its competitive targets within the group of small and medium-sized companies rather than among the major players. Gonzales and Alvarez (2001) likewise suggest that when a firm learns it is inefficient, a reasonable target-selection strategy is to select and benchmark the efficient firm most similar to it in input use. In this study we call this strategy "proximity-based target selection", since proximity is measured in terms of input use. This paper focuses on how to choose practical benchmarking targets based on the similarity of input use among DMUs. The simplest proximity-based strategy is to choose, among the DMUs on the frontier (i.e. the efficient DMUs), the one closest in input use. However, even this selected target may still differ in its input use and be hard to imitate.
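The relative-efficiency notion described above can be sketched as the standard input-oriented CCR envelopment linear program: minimize the radial contraction factor theta subject to a convex-cone combination of the observed DMUs dominating the evaluated unit. The sketch below uses SciPy's `linprog`; the four-branch dataset is purely hypothetical and is not the paper's Canadian bank data.

```python
# Minimal input-oriented CCR sketch: min theta s.t. X @ lam <= theta * x_k,
# Y @ lam >= y_k, lam >= 0.  Hypothetical data, not the paper's dataset.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Efficiency score of DMU k.  X is (m inputs x n DMUs), Y is (s outputs x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    A_in = np.hstack([-X[:, [k]], X])                # X @ lam - theta * x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])        # -Y @ lam <= -y_k
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, k]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two inputs, one output, four hypothetical branches.
X = np.array([[2.0, 4.0, 6.0, 3.0],
              [3.0, 1.0, 5.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
scores = [round(ccr_efficiency(X, Y, k), 3) for k in range(4)]
# Branches 0 and 1 lie on the frontier (score 1.0); branches 2 and 3 are inefficient.
```

An inefficient unit's optimal `lam` weights identify its reference set on the frontier, which is exactly the "target DMU" the introduction discusses.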
In practice it is rare to find a target DMU that is both the most efficient and similar in input endowments. It is therefore necessary to develop a method that helps inefficient DMUs improve their efficiency gradually over time and eventually benchmark the most efficient DMU on the frontier. To achieve this gradual improvement, an optimal path to the most efficient DMU on the frontier must be provided through repeated applications of a proximity-based target selection process. To make this idea operational, we propose a stepwise benchmarking procedure for inefficient DMUs. To find DMUs with similar input use, we use a Self-Organizing Map (SOM), which provides neighborhood information by clustering DMUs according to input use. Because this mapping tends to preserve the topological relationships of the input data, neighboring DMUs with similar input use can be read directly off the SOM output map. The gradual approach treats the closest neighbors on the SOM output map as the candidate set of next benchmarking targets. To find an optimal path to the frontier, a Reinforcement Learning (RL) algorithm is adopted: through RL, each inefficient DMU can learn an optimal path that reaches the frontier. This paper is organized as follows. Section 2 reviews previous studies on target selection and efficiency improvement. Section 3 gives an overview of Data Envelopment Analysis, Self-Organizing Maps and Reinforcement Learning, and Section 4 defines the problem. Section 5 presents the proposed method, and Section 6 describes the empirical study. Finally, Section 7 summarizes the work and outlines future research.
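The stepwise idea above, moving only between neighboring DMUs until the frontier is reached, can be sketched with tabular Q-learning. In this sketch the neighbor graph stands in for the SOM output map, and the reward (efficiency gain minus a small step cost) is a hypothetical choice; the paper's exact SOM construction and reward design are not reproduced here.

```python
# Stepwise-benchmarking sketch: states are DMUs, actions move to a SOM
# neighbour, and an episode ends at a frontier DMU.  All data hypothetical.
import random

eff = {"A": 0.45, "B": 0.60, "C": 0.75, "D": 1.00}   # DEA scores (hypothetical)
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": []}
frontier = {"D"}

Q = {(s, a): 0.0 for s in neighbours for a in neighbours[s]}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(500):                                  # training episodes
    s = "A"
    while s not in frontier:
        acts = neighbours[s]
        a = random.choice(acts) if random.random() < eps else \
            max(acts, key=lambda x: Q[(s, x)])
        r = eff[a] - eff[s] - 0.05                    # gain minus step cost
        nxt = [Q[(a, b)] for b in neighbours[a]]
        Q[(s, a)] += alpha * (r + gamma * (max(nxt) if nxt else 0.0) - Q[(s, a)])
        s = a

# Greedy rollout: the learned stepwise path from the inefficient DMU "A".
path, s = ["A"], "A"
while s not in frontier:
    s = max(neighbours[s], key=lambda x: Q[(s, x)])
    path.append(s)
```

Each intermediate state on the greedy path plays the role of a proximity-based benchmark: the DMU imitates a similar, slightly more efficient neighbor at each step instead of jumping straight to the frontier.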
Conclusion (English)
The main purpose of this study is to develop a stepwise benchmarking method for inefficient DMUs based on proximity-based target selection. Simple proximity-based target selection makes it easier for an inefficient DMU to imitate its targets, but it does not guarantee that an inefficient DMU will reach the most efficient DMU on the frontier, because a target DMU with input endowments similar to those of an inefficient DMU is rarely also the most efficient. We therefore propose a method that helps inefficient DMUs improve their efficiency gradually through a proximity-based target selection strategy. DEA determines the efficiency score of each DMU, a Self-Organizing Map (SOM) finds DMUs with similar input use, and a Reinforcement Learning algorithm finds the optimal path to the frontier. To evaluate the proposed methodology, we conduct an empirical study on a dataset of Canadian bank branches. Comparison experiments with a basic DEA model and a layer model show that the proposed method is a practical way to learn an optimal path to the frontier. As for further research, the dataset used in this study comes from a single large organization, so caution is needed in generalizing the evaluation results to other organizations. Another direction for future research is to combine a Multi-Criteria Decision Making (MCDM) method with Reinforcement Learning to solve the same problem. MCDM is a well-known approach to handling multiple conflicting criteria, but a simple MCDM approach, unlike Reinforcement Learning, does not consider interactions among decision outcomes when sequences of benchmarking decisions are made over time. The combined model should therefore yield different and interesting results compared to a simple MCDM approach or our model.