Knowledge dissemination in operations management: Published perceptions versus academic reality
Publisher : Elsevier - Science Direct
Journal : Omega, Volume 39, Issue 4, August 2011, Pages 435–446
Abstract

The channels for knowledge generation and dissemination in the business disciplines are many: presenting research at conferences, writing books, distributing working papers, offering insights in society newsletters, giving invited talks, publishing studies in academic journals, and many other venues, including even blogs and perhaps Facebook®. But the most important venue is probably published research in “top-level” academic journals. In the discipline of Operations Management, many studies and lists have been published that attempt to determine which of these journals are supposedly the “top,” according to citation analyses, the opinions of recognized experts, author affiliations, bibliometric studies, or other approaches. These lists may then, in turn, be used in different degrees to evaluate research. However, what really counts is what academic institutions actually use for guidance in evaluating faculty research. Based on a new source of ranking data from AACSB-accredited schools, we compare published journal-ranking studies against that of academe to determine the degree to which the studies reflect academic “reality.” We present rankings of OM journals based on this new source of data and on an aggregate of the stream of published studies, and evaluate their consistency.
The channels available for disseminating academic knowledge are extensive, ranging from informal blogs to formal presentations to published books, monographs, and journal articles. In the field of business, the most important, arguably, are published articles in “top” journals (rather than grants, as might be the case in medicine or engineering), since these are frequently the most important basis for promotion and tenure (P&T) decisions. But beyond P&T and annual evaluations, such top publications are also often the basis for:

• research awards by universities, scholarly societies, governmental academies, and journal publishers;
• grants from federal, state, and private agencies such as the National Science Foundation;
• nominations for high-profile chairs, professorships, research grants, and fellowships;
• offers to join, or perhaps take joint professorships at, prestigious universities such as Yale, Harvard, and MIT;
• candidacy for high-visibility governmental positions, such as advisory positions to the President or a Governor, secretarial positions in state or federal organizations such as the Treasury, membership on the Federal Reserve or the Council of Economic Advisors, etc.; and
• even, at least partially, the requisite, ubiquitous business school/program rankings.

Clearly, the role of top publications in each of these sets of decisions will differ, but their influence is often substantial. For example, annual evaluations may include aspects of teaching; service to the department (and school, university, discipline, and perhaps even local community); research in progress; and publications, while P&T may be based much more heavily on publications. Yet universities are aware of the legal complications that can arise when a faculty member receives outstanding annual evaluations only to be turned down for P&T some years later.
Hence, for schools that wish to place an emphasis on top journal publications for P&T, the role of such publications must be heavily weighted in the annual evaluations as well. It is also clear that there are more stakeholders in top journal publications than just the university. Attaining a reputation as the “national expert” in a particular field or topic can lead to such accolades as being appointed Assistant Secretary of the Treasury, Economic Czar of the State, or Chair of the Federal Bankruptcy Committee for the XYZ Corporation. Such eminent appointments are a great boon to a university, enhancing its reputation, bringing in donations and endowments, increasing its student applications, and conferring many other benefits. Hence, the university also has an interest in its faculty publishing in highly recognized journals, and may thus emphasize particular journals or fields over others. Understandably, then, everyone from university presidents to deans to assistant professors is interested in knowing which journals the university considers to be the “top” journals, and especially the top journals in each professor’s own field or discipline. To determine these top journals, and the rankings of other journals in a field, numerous authors have used approaches such as citation impact scores, surveys of recognized scholars, bibliometric analyses, author-affiliation indexes, and other techniques. These studies commonly identify the same general set of journals, but their rankings may differ considerably. In the discipline of interest here, operations management, Holsapple and Lee-Post detail many of these approaches and identify examples of their use. However, with such a range of rankings derived by a variety of methods, it is not clear whose ranking to use, or even which method to rely upon. All of the methods have some justification, but also weaknesses and limitations, again well described in Holsapple and Lee-Post and in Lewis.
For that matter, it is not only the methods that confound the studies but a range of other factors as well, such as the geographic region (e.g., US versus Europe), the time period considered (e.g., 4 years versus 25 years), the date of publication, the selection of scholars, and the set of journals under consideration. In this study, we analyze all the ranking studies of operations management journals since 1990, and then compare them to a new source of data: the official in-house journal lists that AACSB-accredited business schools use to help evaluate faculty publications. This source of data has not been used previously in OM journal-ranking studies. (Although we asked schools for journal lists “that are used for evaluating publications,” we cannot state how, or even whether, they were actually used. Our research informs those evaluating research, but does not report any criteria for that evaluation, such as “three top-level journal articles” or “six solid publications.”) Certainly not all schools formally and explicitly use in-house lists to evaluate research; however, schools often do create formal in-house lists for evaluation. Van Fleet et al. [5, p. 839] suggest that, “These rankings are designed to reduce difficulties in evaluating quality and to help faculty members identify target journals.” Additionally, Vokurka [6, p. 345] argues that, “The rankings of journals are important to academics because promotion and tenure decisions are based to a large extent on publication achievements. These decisions are based primarily on the journals in which research is published.” Our focus is on identifying the most credible ranking studies, in the sense of conforming to the reality of academic guidelines provided by schools that maintain journal rating lists, and then commenting on these studies and their various approaches.
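The core comparison described above — how closely a published ranking agrees with the ordering implied by school lists — is, in essence, a rank-correlation question. The sketch below illustrates one standard way to quantify such agreement, Spearman's rho; the journal abbreviations and rank values are purely hypothetical and are not the paper's data, and the paper does not specify that this exact statistic was used.

```python
# Hypothetical illustration: Spearman rank correlation between two
# journal orderings (rank 1 = best). Assumes no tied ranks and that
# both rankings cover the same set of journals.
def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two dicts mapping journal -> rank."""
    journals = sorted(rank_a)  # common key set assumed
    n = len(journals)
    # Sum of squared rank differences across journals.
    d2 = sum((rank_a[j] - rank_b[j]) ** 2 for j in journals)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Illustrative (made-up) ranks from a published study vs. school lists.
study_rank  = {"JOM": 1, "MSOM": 2, "POM": 3, "IJOPM": 4, "JSCM": 5}
school_rank = {"JOM": 1, "MSOM": 3, "POM": 2, "IJOPM": 4, "JSCM": 5}
print(spearman_rho(study_rank, school_rank))  # 0.9
```

A rho near 1 would indicate that the two sources order the journals almost identically; values well below 1 would point to the kinds of discrepancies the study sets out to explain.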
Conclusion
This paper relied on two sources of data to evaluate academic journals in operations management, arguably the most important channel for scholarly knowledge generation and dissemination. Using published ranking studies, we categorized the studies by type, with most falling into the category of perception studies, a few being citation rankings, and two others: one that used an author–affiliation index and another that used publication counts, a behavioral measure. A new source of data, the in-house target journal lists of AACSB-accredited schools, was introduced in this study. These lists are used by universities for evaluation, promotion, tenure, and other research-oriented appraisal activities, as well as by others with an interest in identifying experts in particular business areas. We have shown that both sources tend to be relatively consistent and reliable.

Using the school lists and published studies, we identified 71 journals in which operations management academics tend to publish. Starting with this list, we then separated, based on a variety of measures, the OM-dedicated journals from those representing either reference disciplines for the field (primarily engineering, economics, and operations research) or the many broader, interdisciplinary journals, resulting in a final list of 30 OM-dedicated journals. Then, based on the in-house AACSB data, we derived a weighted average mean percentile score reflecting how universities actually perceive the journals dedicated to the operations management field, and calculated the rankings of the 30 OM-dedicated journals. We similarly derived one overall ranking list for the same journals based on the published studies and compared the two lists, explaining the discrepancies and differences. We concluded that much of the difference between the two lists is accounted for by inertia effects due to time lags in both human perception and changes in journal policies.
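The mean-percentile aggregation described above can be sketched in a few lines. The exact weighting scheme the authors used is not reproduced here, so this minimal illustration assumes each school's in-house list assigns a journal a percentile score (100 = top of that school's list), weights all schools equally, and averages across the schools that list each journal; all school and journal scores are invented for illustration.

```python
# Hypothetical sketch: mean percentile score per journal across
# schools' in-house lists, then a ranking by that score.
from collections import defaultdict

# school -> {journal: percentile score within that school's list}
# (all values below are made up for illustration)
school_lists = {
    "School A": {"JOM": 95, "POM": 85, "IJOPM": 70},
    "School B": {"JOM": 90, "POM": 90},
    "School C": {"POM": 80, "IJOPM": 60},
}

totals = defaultdict(float)
counts = defaultdict(int)
for scores in school_lists.values():
    for journal, pct in scores.items():
        totals[journal] += pct
        counts[journal] += 1

# Average only over the schools that actually list each journal.
mean_pct = {j: totals[j] / counts[j] for j in totals}
ranking = sorted(mean_pct, key=mean_pct.get, reverse=True)
print(ranking)  # ['JOM', 'POM', 'IJOPM']
```

One design question such a scheme must settle, which the equal-weight assumption above glosses over, is how to treat journals that appear on only a few school lists, since a high score from a single school can then dominate the average.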
As a result, the most up-to-date data should be relied upon more heavily, and, of course, the reality of what universities are actually using for their research-oriented decisions. We suggest that updates of our AACSB data will be important for monitoring changes in the stature and respect of the journals in the future. Our results should be of interest to university and business school administrators and faculty, as well as external stakeholders with an interest in identifying experts in areas such as quality, supply chains, scheduling, inventory management, process design, manufacturing management, and service management. To date, the great majority of published journal rankings for the operations management field have confounded these stakeholders by including inappropriate journals, either from reference disciplines (e.g., engineering, economics, operations research, information systems, statistics) or from interdisciplinary journals covering marketing, finance, behavior, ethics, and other business, and sometimes non-business, fields. An ethicist who is well published in an interdisciplinary journal such as Business Horizons, or a statistician who is well published in a reference-discipline journal such as the Journal of Heuristics, may not have any competence in (or, probably, interest in) operations management or one of its sub-fields such as scheduling. Here we have identified 30 OM-dedicated journals, some representing the entire discipline and others focused on individual sub-areas, and ranked them in terms of how AACSB-accredited universities actually perceive their quality. This list should be used in the future to identify the journals relevant to the operations management field, whether for comparing regional perceptions of quality, analyzing coverage of individual topics, or any other particular interest of researchers in the field.
And, of course, the list and rankings should be updated on a regular basis to stay current with the perceptions in the field.