Using an online survey, we asked safety researchers around the globe how they perceived the quality of a list of 35 representative safety journals. We found that the journal rated most highly by expert opinion was the Journal of Loss Prevention in the Process Industries. However, taking both the respondents’ results and the citation-based results into consideration, the Journal of Hazardous Materials is the most influential journal, followed by Reliability Engineering and System Safety, Risk Analysis, Accident Analysis and Prevention, and Safety Science.
Many academic journals exist, and it can be quite difficult to gauge the relative quality of any one journal compared with others in the same research field. This is certainly also the case in the field of safety research. Many safety-related journals are available, and publishing in one journal may be regarded as more important by peers, or may have a higher research impact, than publishing in another. To help authors and readers of state-of-the-art safety research and recent safety studies decide which journal to publish in, or simply to read, a variety of journal quality assessment methods have been developed. The most well-known method is undoubtedly the so-called ‘journal impact factor’ (or ISI impact factor) published by Thomson Scientific. The ISI impact factor is a quantitative instrument for evaluating scientific journals: it is the average number of citations received in a given year by the articles a journal published during the two preceding years. The more often articles from a certain journal are cited, the higher its impact factor. It is common knowledge in the scientific research community that journals with high impact factors are perceived as more important than those with lower or no impact factors. Moreover, this performance measure is also regularly employed by universities, public and private research foundations, and various institutions to assess researchers, research projects, research proposals (and their teams), and so on. Hence, publishing in scientific journals with high impact factors is important, among other things, for esteem and promotion in the academic world, as well as in some industrial settings, and it is also essential for decision-makers deciding on research funding. The importance of the latter factor is reflected in the fact that decisions affecting hundreds of millions of euros for research purposes worldwide depend at least partially on impact factor assessments.
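As an illustration, the two-year impact factor amounts to a simple ratio. The sketch below uses entirely hypothetical publication and citation counts; it only mirrors the definition given above, not any journal’s actual figures.

```python
# Minimal sketch of a two-year impact factor calculation.
# All numbers below are hypothetical, for illustration only.

def impact_factor(citations_received, citable_items_published):
    """Impact factor for year Y: citations received in Y to articles
    published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in those two years."""
    return citations_received / citable_items_published

# Suppose a journal published 120 + 130 = 250 citable items in the two
# preceding years, and those items were cited 400 times this year:
print(impact_factor(400, 250))  # 1.6
```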
Therefore, it is an interesting exercise to compare the impact factor ranking with rankings from other measurement methods and to evaluate whether the impact factor is an adequate proxy for journal quality.
Several studies have been performed concerning the use and design of impact factors, their improvement, and their conceptual modeling. The reader is referred, for example, to Yue and Wilson (2004), Moed (2005), Frandsen et al. (2006), Kodrzycki and Yu (2006), and Egghe et al. (2007).
Overall, it should be noted that assessing and ranking journals is a difficult task, since journal quality comprises different domains. Roughly speaking, either the number of citations (as mentioned above) or expert perceptions are used to rank journals. This paper investigates and compares these two types of ranking. Such an exercise is interesting, since authors’ expert opinions may differ from the ‘generally accepted’ citation-based assessments used in decisions evaluating researchers, authors, projects, and so on. After all, such assessments may want to take into account not only objective, output-related measures such as the volume and intensity of citations, but also subjective, opinion-related factors (Rousseau, 2008). In this way, a more accurate picture of the true quality of a safety-related journal is acquired. For example, some journals are more industry-oriented and therefore do not display high impact factors but are very highly regarded by the readership of safety journals, whereas other journals may display high impact factors but are hardly read and/or appreciated by safety experts.
This article uses a survey to identify researchers’ perceptions of the quality of safety journals. Spearman’s rank correlation test is used to compare the resulting rankings. We also controlled for a potential bias caused by the relative representation of the respondents’ regions of origin (Europe, North America, and the Rest of the World). Furthermore, our paper investigates the level of correlation between the expert-opinion rankings and the ISI impact factor rankings.
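To make the comparison concrete, the sketch below computes Spearman’s rank correlation coefficient between two rankings using the classic formula for untied ranks, ρ = 1 − 6·Σd²/(n(n²−1)). The journal ranks are hypothetical placeholders, not the paper’s data.

```python
# Minimal sketch of Spearman's rank correlation between two rankings.
# Assumes no tied ranks, so the closed-form expression applies:
#   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))

def spearman_rho(ranks_a, ranks_b):
    """Spearman correlation between two equal-length rank lists (no ties)."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical ranks of five journals under the two measurement methods:
expert_ranks = [1, 2, 3, 4, 5]   # expert-opinion ranking
impact_ranks = [2, 1, 3, 5, 4]   # impact-factor ranking
print(spearman_rho(expert_ranks, impact_ranks))  # 0.8
```

A value near +1 indicates that the two methods order the journals similarly; a value near 0 indicates no association between the orderings.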
We investigated safety journals’ quality using two different measurement methods: one based on the perceptions of safety researchers, and one based on the number of citations to journals (reflected by the journal’s impact factor). Although the ranking obtained from safety experts (indicating the ‘perceived quality’ of the journals) was positively correlated with the ranking based on 2009 impact factors (indicating the ‘objective quality’ of the journals), tests show that the two rankings differ significantly. The expert-based ranking was also positively correlated with the 5-year impact factor ranking, and these two rankings did not differ significantly (at the 95% confidence level).
We showed that the results are not biased by the respondents from any one of the world segments, since the evaluations of the listed journals are statistically indistinguishable in all three cases where clustered world-segment rankings (Europe + North America / Europe + ROW / North America + ROW) are compared with single-segment rankings (Europe / North America / ROW). More detailed analysis suggests, however, that the valuation of some individual journals may differ between continents.
Finally, we compiled a top five of the most influential journals, using as quality criteria both the quality as perceived by safety researchers and the objective quality measured by numbers of citations.