A metric for evaluating the publication of OM research in journals
| Article code | Publication year | English article | Persian translation | Word count |
| --- | --- | --- | --- | --- |
| 11815 | 2012 | 20-page PDF | Available on order | 17,544 words |
Publisher: Elsevier - Science Direct
Journal: Journal of Operations Management, Volume 30, Issues 1–2, January 2012, Pages 24–43
This paper investigates impact factor as a metric for ranking the quality of journal outlets for operations management (OM) research. We review all prior studies that assessed journal outlets for OM research and compare all previous OM journal quality rankings to rankings based on impact factors. We find that rankings based on impact factors that use data from different time periods are highly correlated and provide similar rankings of journals using either two-year or five-year assessment periods, either with or without self-citations. However, some individual journals have large rank changes using different impact factor specifications. We also find that OM journal rankings based on impact factors are only moderately correlated with journal quality rankings previously determined using other methods, and the agreement among these other methods in ranking the quality of OM journals is relatively modest. Thus, impact factor rankings alone are not a replacement for the assessment methods used in previous studies, but rather they evaluate OM journals from another perspective.
This paper investigates impact factor (Garfield, 2006) as a metric for ranking the quality of journal outlets for operations management (OM) research and compares journal rankings based on impact factors to those previously determined using other methods. Recent editorials from the Journal of Operations Management (Boyer and Swink, 2008; Boyer and Swink, 2009) and Operations Research (Simchi-Levi, 2009) highlight the increasing use of impact factors in assessing the scholarly contributions of journals where OM research is published. In addition, Nisonger (2004) discusses the increasing use of impact factors for selecting and deselecting journals from a library's collection, and Cameron (2005, p. 105) notes that impact factors are increasingly being used as a "performance measure by which scientists and faculty are ranked, promoted, and funded." Some editors compare their journal's impact factor to others within a discipline (Boyer and Swink, 2008; Boyer and Swink, 2009), and when its impact factor slips, concern may be expressed (Simchi-Levi, 2009). Simchi-Levi (2009, p. 2) expresses an ambivalent view towards using impact factor to assess journals: "Clearly, the value of the impact factor as a single measure of quality is fairly limited. Nevertheless, it is used by libraries, funding agencies, and deans of schools and university administrators as a factor in promotion and tenure decision, and therefore cannot be ignored." Given this increasing use of impact factors, the need to investigate the appropriateness of this metric for assessing the quality of journals where OM research is published is clear. Specifically, we review all 14 prior studies that identified, rated, and/or ranked journals where OM research is published, and compare the assessments of quality in those studies with the results from using impact factors.
As Thomson Reuters has expanded the coverage of these journals in its Journal Citation Reports® (JCR), it has become easier to determine and compare impact factors. Therefore, if impact factors provide comprehensive measures of quality, their use could substantially reduce the need for the time-consuming task of manually extracting citation data and/or developing a survey instrument and fielding it to assess journal quality. The next section provides background on citation analysis and impact factor. Section 3 reviews prior studies that identified, rated, and/or ranked OM journals. Section 4 presents our hypotheses, while Section 5 describes the methods we used to select OM journals, collect impact factor and citation data, determine impact factors, and test the hypotheses. Section 6 describes our results, Section 7 discusses our findings, and Section 8 presents concluding remarks.
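The two-year and five-year impact factors discussed above follow a simple arithmetic definition: citations received in a given year to articles published in the preceding N years, divided by the number of citable items published in those years. A minimal sketch of that calculation follows; the journal data here is invented for illustration and is not taken from the paper or from JCR:

```python
def impact_factor(citations_received, items_published, year, window=2):
    """Impact factor for `year`: citations received in `year` to articles
    published in the preceding `window` years, divided by the number of
    citable items published in those years."""
    cited_years = range(year - window, year)
    total_citations = sum(citations_received[year].get(y, 0) for y in cited_years)
    total_items = sum(items_published[y] for y in cited_years)
    return total_citations / total_items

# Hypothetical journal: citations received in 2011, broken down by the
# publication year of the cited article.
citations_received = {2011: {2006: 30, 2007: 30, 2008: 40, 2009: 120, 2010: 80}}
items_published = {2006: 40, 2007: 40, 2008: 45, 2009: 50, 2010: 50}

print(impact_factor(citations_received, items_published, 2011))            # two-year: 200/100 = 2.0
print(impact_factor(citations_received, items_published, 2011, window=5))  # five-year: 300/225 ≈ 1.33
```

Excluding self-citations, as some JCR variants do, would simply subtract a journal's citations to itself from the numerator before dividing.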
Conclusion
Our results raise questions, without easy answers, about how impact factors should figure in assessing the quality of journals that publish OM research. At most universities, faculty evaluations for promotion or tenure include review by faculty outside our field, and, as noted at the beginning of this paper, there is a perceived and growing tendency to use impact factors as a measure of journal quality in such reviews. Our results show that impact factors, particularly those based on a five-year window, can provide valuable insights about OM journals, but they do not replicate OM journal assessments conducted using other methods. This raises the question of how to handle the growing use of impact factors in faculty evaluations when impact-factor rankings of OM journals differ from other assessments. The implications for faculty development and promotion are significant: our findings suggest impact factors provide useful information about journals, but a balanced approach that also considers other factors, such as an individual's academic progress and focus, is appropriate for OM. As Table 10 shows, agreement among different OM journal ranking studies is relatively modest, regardless of the ranking method used. Indeed, many previous OM journal ranking studies (Gorman and Kanet, 2005; Olson, 2005; Theoharakis et al., 2007; Holsapple and Lee-Post, 2010) conclude that there is no single measure or sole method for judging the quality of OM journals, and we concur with this useful advice. Our overall conclusion is that impact factors are useful metrics for ranking OM journals, but they do not replace other methods used to rank journal quality.
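The "relatively modest" agreement between ranking studies is typically quantified with a rank correlation coefficient such as Spearman's rho. A small illustration with invented rankings of five journals (the rank data is hypothetical, not from the paper's Table 10):

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two rankings with no ties:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical ranks assigned to the same five journals by two studies.
study_1 = [1, 2, 3, 4, 5]
study_2 = [2, 1, 3, 5, 4]   # mostly agrees, with two adjacent swaps

print(spearman_rho(study_1, study_2))  # 0.8: high but not perfect agreement
```

A rho near 1 would indicate that two studies rank the journals almost identically; values in the middle of the range correspond to the moderate agreement the paper reports.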
Based on our comparison of impact factor results with those from other ranking approaches, impact factors are not a replacement for survey-based, citation-based, or author-based methods of assessing the perceived quality of OM journals. Rather, impact factors address journal quality from another perspective and can usefully be applied alongside these other methods to rank OM journals and to develop journal target lists for promotion and tenure committees. Given their ease of use and ever-increasing presence in academia, impact factors are likely to shape and influence the future perception of OM journal quality. Like it or not, the impact factor is here to stay, and we need to learn to use it wisely.