Rater effects in creativity assessment: a mixed methods investigation
| Article code | Publication year | English article | Persian translation | Word count |
|---|---|---|---|---|
| 32163 | 2015 | 13-page PDF | Order | 11,079 words |
Publisher : Elsevier - Science Direct
Journal : Thinking Skills and Creativity, Volume 15, March 2015, Pages 13–25
Rater effects in assessment are defined as the idiosyncrasies in rater behaviors and cognitive processes. They comprise two aspects: the analysis of raw ratings and rater cognition. This study employed mixed methods research to examine these two aspects of rater effects in creativity assessment, which relies on raters’ personal judgment. Quantitative data were collected from 2160 raw ratings made by 45 raters in three groups and were analyzed using generalizability theory. Qualitative data were collected from raters’ explanations of their rating rationales and their answers to questions about the rating process, as well as from 12 in-depth interviews; both were analyzed using framing analysis. The results indicated that the dependability coefficients were low for all three rater groups, which was further explained by variations and inconsistencies in raters’ rating procedures, their use of rating scales, and their beliefs about creativity.
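The dependability coefficient reported above comes from generalizability theory: variance components for persons, raters, and residual error are estimated from the ratings, and the absolute-decision (Φ) coefficient is the person variance divided by person variance plus error attributable to raters. As a rough illustration of how such a coefficient is computed (this is a generic sketch for a fully crossed persons × raters design with synthetic data, not the authors' actual analysis or data), one could estimate the components from ANOVA mean squares:

```python
import numpy as np

def g_study(X):
    """Estimate variance components and the dependability (phi)
    coefficient for a fully crossed persons x raters design
    with one score per cell, via ANOVA expected mean squares."""
    n_p, n_r = X.shape
    grand = X.mean()
    p_means = X.mean(axis=1)          # each person's mean over raters
    r_means = X.mean(axis=0)          # each rater's mean over persons
    ss_p = n_r * ((p_means - grand) ** 2).sum()
    ss_r = n_p * ((r_means - grand) ** 2).sum()
    ss_res = ((X - grand) ** 2).sum() - ss_p - ss_r
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))
    # Solve expected mean squares; negative estimates are set to zero.
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_r, 0.0)
    var_r = max((ms_r - ms_res) / n_p, 0.0)
    # Absolute-decision dependability: rater and residual variance
    # both count as error, averaged over the n_r raters used.
    phi = var_p / (var_p + (var_r + var_res) / n_r)
    return var_p, var_r, var_res, phi

# Synthetic example: 48 products rated by 45 raters (numbers here are
# illustrative, not the study's design).
rng = np.random.default_rng(0)
true_person = rng.normal(0.0, 1.0, (48, 1))   # product quality
true_rater = rng.normal(0.0, 0.5, (1, 45))    # rater severity
scores = true_person + true_rater + rng.normal(0.0, 0.8, (48, 45))
var_p, var_r, var_res, phi = g_study(scores)
```

A low Φ, as the study found, signals that rater-related variance is large relative to true person (product) variance, so absolute decisions about creativity would not be dependable.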
Using human judges to score individual works or behaviors is not an uncommon measurement process in the social sciences. Requiring teachers to score responses to constructed-response items in standardized tests is one such instance (Crisp, 2012). Other examples include counseling psychologists measuring high school students’ degree of pathology and intensity of violence; graduate students in a social work program assigning scores to evaluate children's behaviors at home; and principals observing classroom teaching and evaluating teachers’ performance. In creativity studies, researchers likewise rely heavily on raters’ judgments of the products generated by participants, including ideas produced in divergent thinking tests, creative solutions to real-world problems, and artifacts of creative writing and art (Author, 2014b). Research on creativity raters in recent years (e.g., Kaufman and Baer, 2012, Kaufman et al., 2005, Kaufman et al., 2009, Kaufman et al., 2004 and Kaufman et al., 2008) has mostly focused on how raters with different levels of expertise influence assessment results. However, this line of research does not shed light on the issue of rater effects (Hung, Chen, & Chen, 2012). The present research aims to fill this gap by employing mixed methods to examine rater effects in assessing the creativity of two science tasks. Examining this issue is crucial because raters and their judgments are an indispensable part of the assessment. In addition, examining rater effects reveals raters’ behaviors and cognitive processes during the assessment, which could inform future rater training and thereby help improve the assessment procedure.