Download ISI English Article No. 1342
Article Title (Persian Translation)

Learning performance evaluation with a benchmarking concept for English writing courses

English Title
A learning performance evaluation with benchmarking concept for English writing courses
Article code: 1342 | Publication year: 2011 | Length: 8 pages (English PDF)
Source

Publisher: Elsevier - Science Direct (ScienceDirect)

Journal: Expert Systems with Applications, Volume 38, Issue 12, November–December 2011, Pages 14542–14549

Keywords
Data envelopment analysis (DEA) - English writing - Facet reference sets - Learning performance benchmarking
Article Preview

English Abstract

This paper adopts data envelopment analysis (DEA), a robust and reliable evaluation method widely applied in various fields, to explore the key indicators contributing to the learning performance of freshman English writing courses in a university in Taiwan from the academic year 2004 to 2006. The results of the DEA model applied to learning performance change our original viewpoint and reveal that some decision-making units (DMUs) with higher actual values of inputs and outputs have lower efficiency, because the relative efficiency of each DMU is measured by its distance to the efficiency frontier. DMUs may refer to different facet reference sets according to whether their actual values are located in lower or higher ranges. As a managerial strategy in the educational field, the paper encourages inefficient DMUs to always compare themselves with the efficient DMUs in their range and to improve little by little. The results of the DEA model also indicate clearly which input and output items to improve and by what percentage. The paper further demonstrates that the benchmarking characteristics of the DEA model can automatically segment all the DMUs into different levels based on the indicators fed into the performance evaluation mechanism. The efficient DMUs on the frontier curve can be considered the boundaries of this classification, which are systematically defined by the DEA model according to the statistical distribution.
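The methodology section is not reproduced in this excerpt, so as a point of reference the standard input-oriented CCR envelopment model from the DEA literature is sketched below (the paper's exact orientation and model variant are not stated here). It shows how the efficiency of the evaluated DMU, indexed o, is measured as a radial distance to the frontier spanned by all n DMUs, each described by m inputs x and s outputs y:

```latex
% Input-oriented CCR envelopment model for the evaluated DMU o
% x_{ij}: input i of DMU j,  y_{rj}: output r of DMU j
\begin{align*}
\theta_o^{*} = \min_{\theta,\ \lambda}\ & \theta \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j\, x_{ij} \le \theta\, x_{io}, \qquad i = 1,\dots,m,\\
& \sum_{j=1}^{n} \lambda_j\, y_{rj} \ge y_{ro}, \qquad r = 1,\dots,s,\\
& \lambda_j \ge 0, \qquad j = 1,\dots,n.
\end{align*}
```

A DMU is efficient when θo* = 1; for an inefficient DMU, the peers with λj > 0 in the optimal solution form its facet reference set, and 1 − θo* is the proportional input reduction needed to reach the frontier. Adding the convexity constraint Σj λj = 1 gives the BCC model, whose score is the pure technical efficiency discussed in the conclusion.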

English Introduction

According to the Ministry of Education in Taiwan (2009a), the number of students in higher education institutions (HEIs) increased by about 23%, from 915,921 to 1,123,755, between 1999 and 2009. During the past decade, higher education has become accessible to more people with the encouragement of the Ministry of Education. As a result, the number of colleges and universities has increased rapidly. However, the government budget has not increased as much as the number of academic institutions. Many of these institutions struggle to obtain public financial support and need to find additional sources of funding. Moreover, even though higher education is now universal, Taiwan has the lowest birth rate in the world: 0.83 according to the Ministry of Interior (2010). The birth rate has dropped dramatically during the past four years, certainly due to the global economic downturn. If this trend continues, the number of students in Taiwan will inevitably decrease in the coming years. The percentage of children under 15 years of age has already dropped from 20.80% in 2001 to 16.95% in 2008 (Ministry of Education, 2009b), and the number of elementary school children has dropped from 1,927,179 in 1999–2000 to 1,677,303 in 2008–2009 (Ministry of Education, 2001 and 2009a).

In order to distribute limited resources more fairly and close less competitive institutions, the Higher Education Evaluation and Accreditation Council of Taiwan has initiated a system of performance evaluation based on self-evaluation reports accompanied by peer reviews or site visits (Ministry of Education, 2009a). Higher education institutions need to maintain a certain level of quality and become as efficient and attractive as possible in order to avoid low student enrollment, high graduate unemployment, credential inflation, and even closure. The performance evaluation by the Ministry of Education is now a universal practice undertaken every four years in Taiwan. Every academic institution also performs self-evaluations concerning teachers' research results and students' learning performance.

This paper adopts data envelopment analysis (DEA), a robust and reliable evaluation method widely applied in various fields, to explore the key indicators contributing to the learning performance of freshman English writing courses in a university in Taiwan from the academic year 2004 to 2006. Taiwan's export-based economy makes English an indispensable communication tool and a valuable skill for students who expect to enter the job market. The paper aims not only to calculate the quantitative efficiency of the DMUs (the evaluated units), but also to indicate in which aspects the inefficient DMUs have room for improvement. Through the benchmarking characteristics of DEA, the inefficient DMUs receive a more objective indication and can endeavor to make progress step by step. According to McGowan and Graham (2009), the four factors contributing most to improved teaching are active/practical learning (real-world experiences and in-class discussions), teacher/student interactions (knowing students personally), clear expectations/learning outcomes, and faculty preparation. By comparison, in our paper these four factors are classified into two inputs and two outputs: the preparation of teaching contents and teaching skills for the inputs, and fair grading and students' learning performance for the outputs.
Active/practical learning may correspond to our teaching skills, and faculty preparation to our preparation of teaching contents. However, contrary to McGowan and Graham (2009), the paper shows that fair grading and students' learning performance are the major factors contributing to students' learning performance and teachers' improved teaching.

Dickinson (1990) conducted a study in an American public school using Mitzel's (1982) criterion for classifying the rating questions. Students were asked about instructors' knowledge, presentation, overall evaluation, presentation of clear objectives, and about students' estimate of the amount learned. His paper determined the relationship between learning and the students' ratings of the instruction by using Pearson correlation coefficients. The findings pointed out the problems with using pupils' ratings of teachers, especially if the ratings were to be used as part of a teacher's evaluation for merit pay, promotion, or tenure and if the amount learned was used as the criterion of effective teaching: according to Chacko (1983), Arreola (1986), and Dickinson (1990), students generally gave their teachers high marks even in the face of low learning. Marsh and Roche (1997) showed that student evaluation could result in improved teaching effectiveness and that end-of-term feedback is more effective than midterm feedback. The validity of student evaluation is demonstrated by validation of student ratings, student achievement, teacher self-evaluation, and observation by trained evaluators. But not all scholars acknowledge the value of evaluating faculty performance. Ryan, Anderson, and Birchler (1980) showed that 38% of professors admitted to making their courses easier in response to student evaluation. Boland and Sims (1988) criticized the process as subjective, inconsistent, punitive, and sporadic. They showed that evaluation criteria were inconsistently written and that performance expectations were inexplicit or poorly communicated. Haskell (1997) analyzed the impact of student evaluation of faculty performance on academic freedom and argued that the pressure to respond to students' needs could cause problems regarding retention, promotion, tenure, and salary increases. Students' ratings of teachers are part of the performance evaluation. Even though some scholars do not consider these ratings to be very objective, they provide useful information and give students an opportunity to express their feelings and what they think about their teachers and their courses. They should only be used as a reference, and with great care and fairness. Moreover, the questions asked of the students can be improved in order to make the results more accurate and objective. When the ratings are unfair and do not reflect reality, teachers feel bitter and want to give up. When the ratings are accurate and fair, they can help teachers better understand their students and their needs, leading to improved teaching methods.

Data envelopment analysis (DEA) has been used in various studies to analyze students' learning performance. Ahn, Arnold, Charnes, and Cooper (1989) assessed the efficiency of US universities during 1981–1985, and Glass, McKillop, and O'Rourke (1998) the efficiency of UK universities during 1989–1992.
Other studies have measured efficiency at the departmental level: Madden, Savage, and Kemp (1997) measured the performance of economics departments in Australian universities; Johnes and Johnes (1993) the performance of economics departments in the UK during 1984–1988; and Colbert, Levary, and Shaner (2000) the performance of MBA programs in the US. Fu and Huang (2009) conducted a survey of college graduates in 2003 and used an output-oriented BCC type of DEA model to provide information for students and school administrators. Lin (2009) developed an evaluation approach for measuring and ranking the efficiency of tutors in some higher education institutions (HEIs) in Taiwan. As many of these institutions have established committees to evaluate the performance of tutors and reward the outstanding tutors each academic year, Lin (2009) proposed an IDEA (imprecise data envelopment analysis) model based on the BCC model to help determine the final ranking of outstanding DMUs, that is, the evaluated tutors. In the field of Teaching English as a Second Language (TESL), Barcelos and Kalaja (2003) demonstrated that teachers' and students' beliefs about second language acquisition are experiential, dynamic, socially constructed, and changeable. Hsu (2010) proposed a web-based interactive speaking improvement system for English as a Second Language learners that uses fuzzy matching. According to this study, the system could effectively help students speak English more correctly, and it could be adapted to improve the speaking ability of learners of foreign languages other than English. Clinton and Kohlmeyer (2005) investigated the effect of group quizzes on accounting students' performance and motivation to learn. They found that students in the group condition had significantly different affective reactions than those in the non-group condition. Moreover, the overall performance of the instructor was rated higher in the group-quiz sections, and the students expressed greater motivation to learn and increased enthusiasm, even though they did not show any significantly different performance results.

The remainder of the paper is organized as follows: the methodology and choice of key indicators section explains the DEA method and the important input and output items discussed in the paper; the next section presents the numerical results and recommendations; the final section draws the conclusions and implications.
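Because the methodology section is not reproduced on this page, the following is only an illustrative sketch, not the authors' code or data: it computes input-oriented CCR efficiency scores with SciPy's linear-programming solver for a handful of DMUs described by the two inputs (preparation of teaching contents, teaching skills) and two outputs (fair grading, students' learning performance) named above. The numeric values and the SciPy-based formulation are assumptions made for illustration.

```python
# Minimal input-oriented CCR (constant returns-to-scale) DEA sketch.
# All input/output values are invented for illustration, NOT the paper's data.
import numpy as np
from scipy.optimize import linprog

# Rows = DMUs; columns = the two assumed inputs:
# (preparation of teaching contents, teaching skills)
X = np.array([[3.8, 4.1],
              [4.5, 4.6],
              [3.2, 3.0],
              [4.0, 3.7]])
# Columns = the two assumed outputs: (fair grading, students' learning performance)
Y = np.array([[4.0, 3.9],
              [4.3, 4.4],
              [3.9, 3.6],
              [3.5, 3.4]])
n, m = X.shape          # number of DMUs, number of inputs
s = Y.shape[1]          # number of outputs

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o and its reference set."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    theta, lambdas = res.x[0], res.x[1:]
    reference_set = {j: round(w, 3) for j, w in enumerate(lambdas) if w > 1e-6}
    return theta, reference_set

for o in range(n):
    theta, peers = ccr_efficiency(o)
    print(f"DMU {o}: CCR score = {theta:.3f}, reference set = {peers}")
```

A score of 1 places a DMU on the efficiency frontier; for an inefficient DMU, the nonzero λ weights name the efficient peers it is benchmarked against, and 1 − θ corresponds to the proportional improvement the paper refers to.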

English Conclusion

The paper uses DEA, a reliable and robust evaluation method, to explore the key indicators contributing to students' learning performance in freshman English writing courses in a university in Taiwan. The paper aims at providing DMUs with objective and impartial measuring indices under limited teaching and learning resources. We observe that 10% of the DMUs have an overall technical efficiency value (CCR score) of 1 and do not need any improvement in the inputs or the outputs because they have reached their optimal state, meaning that both teachers and students feel at ease and are motivated to work. This state requires a good atmosphere in the class as well as good teaching preparation and skills on the part of the teachers. Students have to show their efforts during the training and accept criticism. However, if they are over-criticized by their teachers, they will probably lose their motivation.

Thirty-nine inefficient DMUs have pure technical efficiency values (BCC scores) smaller than 1. Among them, six have pure technical efficiency values greater than their scale efficiency values: students are unable to assimilate the entire contents of the course. The other 33 inefficient DMUs have pure technical efficiency values smaller than their scale efficiency values: some students think that their teacher does not have enough professional knowledge and experience to teach an English writing course.

The results of the DEA model applied to learning performance change our original viewpoint and reveal that some DMUs with higher actual values of inputs and outputs have lower efficiency, because the relative efficiency of each DMU is measured by its distance to the efficiency frontier. DMUs may refer to different facet reference sets according to whether their actual values are located in lower or higher ranges. As a managerial strategy in the educational field, the paper encourages the inefficient DMUs to always compare themselves with the efficient DMUs in their range and to improve little by little. The results of the DEA model also indicate clearly which input and output items to improve and by what percentage. The paper demonstrates that the benchmarking characteristics of the DEA model can automatically segment all the DMUs into different levels based on their actual values of the input and output items. Moreover, the efficient DMUs on the frontier curve can be considered the boundaries of this classification, which are systematically defined by the DEA model according to the statistical distribution of all the DMUs and may change according to the indicators fed into the performance evaluation mechanism. This concrete classification and evaluation approach to learning performance can be applied to the field of foreign language learning as well as to other branches of learning, or even to corporate employee training.
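The decomposition the conclusion relies on is standard in DEA: scale efficiency (SE) is the ratio of the overall technical efficiency (CCR score) to the pure technical efficiency (BCC score), so CCR = BCC × SE. The short sketch below, using invented scores rather than the paper's results, shows how comparing the BCC score with SE separates DMUs whose inefficiency is mainly scale-related from those dominated by pure technical inefficiency, which is the distinction drawn for the 6 and 33 inefficient DMUs above (the paper attaches its own course-level interpretation to each case).

```python
# Standard DEA decomposition: overall TE (CCR) = pure TE (BCC) x scale efficiency (SE).
# The scores below are invented for illustration; they are not the paper's results.
dmu_scores = {
    "DMU_01": (1.00, 1.00),   # (CCR score, BCC score) - efficient
    "DMU_02": (0.72, 0.95),   # BCC close to 1 -> inefficiency mostly scale-related
    "DMU_03": (0.68, 0.70),   # BCC far below 1 -> pure technical inefficiency dominates
}

for name, (ccr, bcc) in dmu_scores.items():
    se = ccr / bcc  # scale efficiency
    if ccr >= 1.0:
        diagnosis = "efficient (on the frontier)"
    elif bcc > se:
        diagnosis = "pure TE > SE: scale effects are the main source of inefficiency"
    else:
        diagnosis = "pure TE < SE: pure technical (managerial) inefficiency dominates"
    print(f"{name}: CCR={ccr:.2f}, BCC={bcc:.2f}, SE={se:.2f} -> {diagnosis}")
```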