Estimating school efficiency: A comparison of methods using simulated data
|Article code||Publication year||English article||Persian translation||Word count|
|12991||2001||13-page PDF||Available to order||8,414 words|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Economics of Education Review, Volume 20, Issue 5, October 2001, Pages 417–429
Developing measures of school performance is crucial for performance-based school reform efforts. One approach to developing such measures is to apply econometric and linear programming techniques that have been developed to measure productive efficiency. This study uses simulated data to assess the adequacy of two such methods, Data Envelopment Analysis (DEA) and Corrected Ordinary Least Squares (COLS), for the purposes of performance-based school reform. Our results suggest that in complex data sets typical of education contexts, simple versions of DEA and COLS do not provide adequate measures of efficiency. In data sets simulated to contain both measurement error and endogeneity, rank correlations between efficiency estimates and true efficiency values range from 0.104 to 0.240. In none of these data sets was either DEA or COLS able to place more than 31% of schools in their true performance quintile.
Performance-based school reform has received much attention in recent years. Key elements of this reform movement include setting standards of student, teacher and school performance, granting autonomy to local actors in the educational process, and establishing rewards for high performance and remedies for low performance. These elements are prominently featured in the 1994 reauthorization of the federal Title I program as well as several state level reform initiatives.1 These reforms have been advanced as a remedy for several perceived problems with existing public education systems. Prominent among these perceived problems are the lack of incentives and the lack of knowledge about how to improve student performance. Some have argued that given current systems for determining compensation, professional advancement and school funding, the incentives of school officials are insufficiently linked to student performance (Hanushek, 1994 and Levin, 1997). Performance-based school reform attempts to provide stronger incentives for improving student performance by developing measures of achievement and tying financial and other rewards to those measures. Some also believe that we know very little about how to manage classrooms, schools and districts in ways that consistently result in higher levels of student achievement. By granting local actors the autonomy to experiment with new approaches and providing the means to assess the impact of local experiments on student performance, performance-based school reform is seen as a way to learn how to meet the ever-increasing demands placed on our public education systems (Hanushek, 1994). Developing valid and reliable measures of school performance is crucial both for efforts to establish incentives and to assess management practices. There is a growing consensus that measures of school performance should be based on the performance of students in the school. 
However, there is also recognition that any measure of school performance that is based on the performance of students needs to account for the differences in resources available to and service delivery environments faced by different schools. One approach to developing measures of school performance is to apply the conceptions of productive efficiency, and techniques for measuring it, that have been developed in the fields of economics and operations research. Several such techniques or methods have been developed, and several have been applied to estimate the efficiency of educational organizations. These include econometric approaches that utilize ordinary least squares regression and stochastic frontier estimation as well as a group of linear programming approaches falling under the rubric of Data Envelopment Analysis (DEA). Bessent and Bessent (1980), Bessent, Bessent, Kennington and Reagan (1982) and Bessent, Bessent, Charnes, Cooper and Thorogood (1983) have applied the basic formulation of DEA developed by Charnes, Cooper and Rhodes (1978) to schools in Houston. Färe, Grosskopf and Weber (1989) have applied a version of DEA that allows for variable returns to scale to school districts in Missouri. More recently, Ray (1991), McCarty and Yaisawarng (1993), Ruggiero, Duncombe and Miner (1995) and Kirjavainen and Loikkanen (1998) have applied DEA-based approaches that attempt to control for the different environmental factors faced by educational organizations. Johnes and Johnes (1995) have used DEA to investigate the technical efficiency of university departments of economics. Barrow (1991), Deller and Rudnicki (1993) and Cooper and Cohn (1997) have applied the stochastic frontier estimation methods developed by Aigner, Lovell and Schmidt (1977) to estimate the efficiency of districts, schools and classes. 
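The simplest member of the DEA family is easy to state: under constant returns to scale with a single input and a single output, a unit's efficiency reduces to its output-to-input ratio relative to the best observed ratio. The following is a minimal illustrative sketch with hypothetical school data, not the authors' implementation; the Charnes, Cooper and Rhodes formulation handles multiple inputs and outputs by solving a linear program for each unit.

```python
# Minimal constant-returns-to-scale DEA for the single-input,
# single-output case. Each unit's score is its output/input ratio
# divided by the best observed ratio, so the best unit scores 1.0.

def dea_crs_single(inputs, outputs):
    """Return CRS efficiency scores in (0, 1] for one input and one output."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical schools: the second gets the most output per unit input.
scores = dea_crs_single([10.0, 8.0, 12.0], [50.0, 56.0, 48.0])
print([round(s, 3) for s in scores])  # → [0.714, 1.0, 0.571]
```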
Stiefel, Schwartz and Rubenstein (1999) review the various methods available for measuring efficiency and explain how they can be implemented in programs designed to improve school performance measurement. The availability of these methods for estimating school efficiency raises two questions. The first is whether or not the methods provide accurate estimates of efficiency. The second is which method to use, given that multiple methods may perform differently. Studies that have applied different methods to the same data have found that they provide different results (Banker, Conrad & Strauss, 1985 and Nelson & Waldman, 1986). The problem is that without knowing the true efficiency of the organizations studied, there is no way to determine which measures provide better estimates. Studies that use simulated data with specified, and thus known, technological relationships and levels of efficiency can help to answer these questions. A limited number of such studies have been conducted. However, no attempt has been made to use the results of such simulation studies to assess how appropriate existing efficiency measures are for the purposes of performance-based school reform. This paper is intended to fill this gap in the literature. Section 2 identifies the specific set of challenges that the educational production process poses for methods of estimating school efficiency. Section 3 reviews existing studies that have used simulated data to evaluate methods of estimating organizational efficiency, and determines what these studies imply for the estimation of school efficiency. Section 4 describes a simulation study that we conducted. Section 5 presents an analysis of how well two methods, the Charnes et al. (1978) version of DEA and Corrected Ordinary Least Squares, did in estimating the known efficiencies of the simulated schools.
Section 6 offers concluding remarks concerning the current state-of-the-art in measuring school performance and the implications this has for performance-based school reform efforts.
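The evaluation procedure the paper describes (simulate schools with known efficiencies, estimate efficiency, then compare the estimated and true rankings) can be illustrated with a stripped-down one-input COLS example. The parameters below are hypothetical; the paper's simulations use three inputs and multiple outputs.

```python
import math
import random
import statistics

random.seed(0)

# Simulate a log-linear frontier with known inefficiency u >= 0 and
# symmetric measurement error v (hypothetical parameter values).
n = 200
a, b = 1.0, 0.6
x = [random.uniform(1.0, 3.0) for _ in range(n)]          # log input
u = [abs(random.gauss(0, 0.3)) for _ in range(n)]          # inefficiency
v = [random.gauss(0, 0.3) for _ in range(n)]               # measurement error
y = [a + b * xi - ui + vi for xi, ui, vi in zip(x, u, v)]  # log output

# OLS fit of y on x (closed form for a single regressor).
mx, my = statistics.mean(x), statistics.mean(y)
b_hat = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
a_hat = my - b_hat * mx

# COLS: shift the fitted line up so it envelops the data, then read
# efficiency off the shifted residuals.
resid = [yi - (a_hat + b_hat * xi) for xi, yi in zip(x, y)]
shift = max(resid)
eff_hat = [math.exp(r - shift) for r in resid]
eff_true = [math.exp(-ui) for ui in u]

def spearman(p, q):
    """Spearman rank correlation (no tie handling; fine for continuous data)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rp, rq = ranks(p), ranks(q)
    mp, mq = statistics.mean(rp), statistics.mean(rq)
    num = sum((pi - mp) * (qi - mq) for pi, qi in zip(rp, rq))
    den = math.sqrt(sum((pi - mp) ** 2 for pi in rp) *
                    sum((qi - mq) ** 2 for qi in rq))
    return num / den

# How well does the estimated ranking track the true one?
print(round(spearman(eff_hat, eff_true), 3))
```

With measurement error as large as the inefficiency, the rank correlation falls well short of 1, which is the core difficulty the paper documents.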
English Conclusion
The results of our simulations confirm the primary findings of Gong and Sickles (1992) and Banker et al. (1993), and suggest that the findings of these researchers can be generalized to cases with multiple outputs. First, the presence of correlation between inputs and inefficiency diminishes the performance of COLS estimates of production frontiers. Second, the performance of both DEA and COLS is negatively affected by the presence of measurement error. However, when the assumptions about the distribution of measurement error and inefficiency made by COLS are close to the actual distributions, the performance of COLS is less affected by measurement error than the performance of DEA. We also found that for data sets characterized by the presence of endogeneity and measurement error, the bias of DEA efficiency estimates tends to be positive while the bias of COLS efficiency estimates tends to be negative. We further found that in such cases, averaging DEA and COLS estimates can provide less biased and more accurate measures of efficiency. However, averaged estimates do not appear to provide more useful school rankings. Most importantly, our results suggest that in complex data sets typically used in educational research, i.e. data sets characterized by substantial measurement error and endogeneity, simple versions of DEA and COLS do not provide adequate measures of efficiency. It would be difficult to defend implementing performance-based financing or management programs with estimates of school performance whose rank correlation with true performance is no higher than 0.24, and where no more than 31% of schools are placed in the correct performance quintile. However, our results need not be interpreted with unequivocal gloom. Not only must our findings be properly qualified, but they also suggest strategies for developing more adequate measures of efficiency.
Both DEA and COLS performed poorly in our simulations because of an inability to separate inefficiency from measurement error. The random errors used in our study were in fact quite large. In all cases, the random error term had a higher correlation with the level of output than the efficiency term and each of the three inputs. Whether educational data is actually characterized by this much measurement error is unknown. If actual amounts of measurement error are smaller, DEA and COLS might perform better. Efforts are being made to reduce the amount of measurement error characteristic of current educational production analyses. The Title I reauthorization provided substantial amounts of funding to state educational agencies to develop testing programs that are aligned with explicit curricular goals, that test higher level thinking skills and that can be used for purposes of evaluating school performance. States, such as Kentucky, are leading the way in the development of such assessment systems. In addition, several city school districts, including Chicago and New York City, have developed school-based budgeting systems. These systems provide more reliable school level resource data than has ever before been available. In addition to reducing measurement error, it might be possible to modify existing methods of estimating efficiency so as to minimize the effect of measurement error and/or endogeneity. For instance, the fact that the performance of COLS is diminished by correlation between inputs and inefficiency is not surprising. This type of correlation violates the assumptions that are required if ordinary least squares is to provide unbiased coefficient estimates. Bias in these coefficient estimates is the source of the poor performance of COLS in estimating efficiency. 
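The endogeneity mechanism can be seen in a few lines: when inefficiency rises with the input, the error term is correlated with the regressor, the OLS slope absorbs part of the inefficiency, and any COLS scores built on those residuals inherit the bias. The coefficients below are hypothetical, chosen only to make the bias visible.

```python
import random
import statistics

random.seed(1)

def ols_slope(x, y):
    """Closed-form OLS slope for a single regressor."""
    mx, my = statistics.mean(x), statistics.mean(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

n = 5000
b = 0.6  # true frontier slope (hypothetical)
x = [random.uniform(1.0, 3.0) for _ in range(n)]

# Exogenous case: inefficiency is independent of the input.
u = [abs(random.gauss(0, 0.3)) for _ in range(n)]
y = [1.0 + b * xi - ui for xi, ui in zip(x, u)]
print(round(ols_slope(x, y), 2))  # close to the true 0.6

# Endogenous case: better-resourced units are also less efficient
# (u rises with x by 0.3 per unit), so the slope estimate is pulled
# down by roughly that amount, toward 0.3.
u_end = [abs(random.gauss(0, 0.3)) + 0.3 * (xi - 1.0) for xi in x]
y_end = [1.0 + b * xi - ui for xi, ui in zip(x, u_end)]
print(round(ols_slope(x, y_end), 2))
```

Because COLS measures inefficiency as distance from the fitted frontier, a biased slope systematically misattributes inefficiency across high- and low-input schools, which is the source of the poor COLS performance described above.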
There are, however, well-known simultaneous equation methods, such as two-stage least squares, that provide unbiased coefficient estimates in cases where the assumptions of ordinary least squares are violated. If such methods could be used to estimate production frontiers, then efficiency estimates that perform better than those we have examined might be developed. Of course, one might doubt whether such methods could provide improved measures of efficiency. The results we found for COLS depend on the fact that the assumptions made about the mathematical form of the production function, the relative importance of different outputs and the distributions of inefficiency and random error were well matched with the actual forms and distributions. In practical circumstances, we can never know if these assumptions match reality. Thus, it might be more fruitful to search for ways to reduce the impact of measurement error on the efficiency estimates provided by DEA. Finally, it may be possible to augment quantitative measures of efficiency with qualitative forms of evaluation to develop more reliable measures of school performance. Such qualitative forms of evaluation might involve site visits and audits by professional peers. It is not, however, immediately obvious how information from these different types of evaluation methods can be usefully combined. Efficiency estimates might be used to identify schools with potential problems, and therefore worthy of on-site investigation. Goldstein (1997), who cautions against relying solely on student test data to evaluate schools, suggests that this might be an appropriate use of student performance analyses. However, given the large errors in rankings found in this study, efficiency measures might not be adequate for even this limited purpose.11 Perhaps a more fruitful use of qualitative investigations would be to develop more accurate measures of important inputs and outputs in the production processes.
Of course, conducting such analyses at each school might be prohibitively expensive, thereby limiting the usefulness of any improved measures for making system-wide comparisons. Research is needed to determine exactly how information acquired through site visits, peer reviews and other evaluative methods can be combined with existing data and methods to develop more reliable and valid measures of school performance. Given the data that are currently available, however, our results suggest that the methods for measuring the efficiency of educational organizations that have been used most frequently are inadequate for use in implementing performance-based management systems. This is a discouraging result, and raises questions about the feasibility of some performance-based school reforms.