Download English ISI article No. 15534
Persian translation of the article title

بیبلومتری و فناوری نانو: یک فرا آنالیز (Bibliometry and nanotechnology: A meta-analysis)

English title
Bibliometry and nanotechnology: A meta-analysis
Article code: 15534 | Publication year: 2011 | Pages: 9 (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Technological Forecasting and Social Change, Volume 78, Issue 7, September 2011, Pages 1174–1182

Persian translation of keywords
استناد - مقایسه کشور - ضریب تاثیر - متریک - فناوری نانو (Citation - Country comparison - Impact factor - Metric - Nanotechnology)
English keywords
Citation, Country comparison, Impact factor, Metric, Nanotechnology
Article preview

English abstract

As in other fields of science, bibliometry has become the primary method of gauging progress in nanotechnology. In the United States in the late 1990s, a period when policy makers were preparing the groundwork for what would become the National Nanotechnology Initiative (NNI), bibliometry largely replaced expert interviews, then the standard method of assessing nanotechnology. However, such analyses of this sector have tended not to account for productivity. We hope to correct this oversight by integrating economic input and output measurements: calculating academic publications divided by the number of researchers, and accounting for government investment in nanotechnology. When nanotechnology journal publication is measured in these ways, the U.S. is not the leader, as has been widely assumed. Rather, it lags behind Germany, the United Kingdom, and France.
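
Expressed as formulas (the notation below is ours, introduced only for illustration; it is not taken from the paper), the two normalizations described in the abstract amount to:

```latex
% Our own shorthand: N_pub is a country's count of nanotechnology journal
% articles, N_res its number of researchers, and I_gov its government
% investment in nanotechnology over the same period.
\[
  P_{\text{researcher}} = \frac{N_{\text{pub}}}{N_{\text{res}}},
  \qquad
  P_{\text{investment}} = \frac{N_{\text{pub}}}{I_{\text{gov}}}
\]
```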

English introduction

Bibliometric analyses of science, technology, and engineering have mushroomed in recent years. Researchers typically use this method to trace the quantity of output, usually defined as academic journal articles or patents, then compare and rank the state of science and engineering in different nations. Over the years, this has become a widely accepted benchmark.

But quantifying the practical value or economic productivity of knowledge produced through the systematic study of nature is extremely difficult. Developing the science of the assessment of science has been a protracted and troubled affair, as Benoît Godin notes. The question of science productivity began to be seriously considered following the emergence of professional disciplines of physical science around the mid-nineteenth century. Productivity was then defined by statisticians using simple quantitative metrics based initially on the total number of scientists in a given nation and subsequently on the total number of papers produced by individual scientists. The problem became much more complicated in the 1920s and 1930s, when governments became interested in developing means of measuring the contribution of knowledge to economic growth. Following the Second World War, the issue sharpened thanks to the popularization of the idea that basic science was the essential ingredient in radical technological innovation, and, hence, economic development, and the decision of the U.S. federal government to sponsor large-scale programs of basic science [1], [2] and [3]. Government efforts to measure and account for these programs encouraged contractors to develop a linear innovative structure based on segregated organizational units of research, development, and manufacturing. Defining the productivity of non-mission, undirected basic research was especially contentious, provoking fierce debates and conflicting findings in the 1960s [4].

Over the years, however, the methodology of the science of the assessment of science productivity remained essentially unchanged. It continued to be based on quantity of outputs, typically academic journal articles or patents. In 1973, Congress mandated the National Science Board to publish Science and Engineering Indicators, which became an authoritative index of the state of science and engineering productivity in the U.S. [5]. By the 1990s and 2000s, bibliometric analysis of science, technology, and engineering activities was becoming the “customary” indicator of research output in a number of countries [6].

The question of productivity is especially pressing in the case of nanotechnology. Its proponents have framed this interdisciplinary field as a novel and especially fecund form of applied science, one some famously suggested might be capable of triggering a new industrial revolution [7]. Nanotechnology boosters emerged in the U.S. in the early 1990s, a period when science policy culture increasingly emphasized federal government-backed R&D as the primary means of closing the gap with America's economic competitors [8] and [9]. It is no coincidence that nanotechnology discourse in policy circles has been most prevalent in the U.S., where the belief in basic science as an economic driver has been strongest. Nevertheless, similar assumptions took root elsewhere, as R&D budgets swelled in a number of other countries over the last three decades. And although research in nanoscale science, engineering, and technology was performed abroad in the 1990s, these activities assumed greater prominence after the NNI was introduced in early 2000 [10] and [11]. As nanotechnology's prestige as a cutting-edge utilitarian frontier field grew in science policy communities and expectations for an economic dividend mounted, so, too, did bibliometry assume increased importance.

But the science of assessment itself has attracted as much scrutiny as the productivity claims of the basic science community [12]. Critics note that the emphasis on quantity of publications can foster a herd mentality, encouraging trends that sometimes yield poor science. Some critics trace the problem to the current incentive regime in the sciences, where, unlike in some other fields, output is not directly proportional to the effort invested. For example, this system does not value ‘failed’ but useful negative data [13]. Productivity claims for nanotechnology are even more problematic than for other areas of science and engineering, both because of the high expectations associated with the field and because of the tendency of its proponents to subsume existing physical science disciplines under its rubric. As a number of scholars have noted, nanotechnology advocates presented old arguments for the economic utility of science in a new form [14], [15] and [16].

Accordingly, it is imperative to carefully review the ways bibliometry has been used to assess nanotechnology. Perhaps surprisingly, previous bibliometric studies have tended not to account for productivity in nanotechnology publication. We hope to correct this oversight via two indicators: academic publications divided by the number of researchers, and academic publications divided by the resources invested in nanotechnology. We believe the resulting assessment of relative national efficiency provides a more accurate measure than the current metric of academic publication, which obscures the meaning of resource efficiency and tends to promote only quantitative increase.
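
As a rough illustration of how such an efficiency comparison could be computed, here is a minimal sketch in Python. It is not the authors' code, and all inputs are placeholder values, not data from the study.

```python
# Minimal sketch (not the authors' code) of the two productivity indicators
# described above: publications per researcher and publications per unit of
# government nanotechnology investment. All inputs are placeholder values.

from dataclasses import dataclass


@dataclass
class CountryRecord:
    name: str
    nano_publications: int      # nanotechnology journal articles in the period studied
    researchers: int            # number of researchers (e.g., full-time equivalents)
    gov_investment_musd: float  # government nanotechnology investment, millions of USD


def publications_per_researcher(rec: CountryRecord) -> float:
    """First indicator: publication output normalized by the research workforce."""
    return rec.nano_publications / rec.researchers


def publications_per_million_usd(rec: CountryRecord) -> float:
    """Second indicator: publication output normalized by government investment."""
    return rec.nano_publications / rec.gov_investment_musd


# Hypothetical countries with made-up figures, purely to show the calculation;
# these are NOT figures from the study.
countries = [
    CountryRecord("Country A", nano_publications=8000, researchers=120_000, gov_investment_musd=1500.0),
    CountryRecord("Country B", nano_publications=3000, researchers=30_000, gov_investment_musd=400.0),
]

for rec in sorted(countries, key=publications_per_researcher, reverse=True):
    print(
        f"{rec.name}: {publications_per_researcher(rec):.3f} papers per researcher, "
        f"{publications_per_million_usd(rec):.2f} papers per $1M invested"
    )
```

Ranking countries by either ratio, rather than by raw publication counts, is the kind of relative-efficiency comparison the authors argue for.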

English conclusion

When productivity per researcher and investment is considered, then, the U.S. is not the leader in nanotechnology publication. It trailed its European counterparts in all studies, behind Japan and Korea in data based on Kostoff et al. [24] and [76] and Zhou and Leydesdorff [77], and behind Korea in Leydesdorff and Wagner [25]. When productivity per unit of research money is used as the benchmark, the U.S. still did not rival the UK, instead approximating German productivity. China outperformed all nations by this measure, a discrepancy that can be explained in terms of relative national purchasing power and cost of labor, which relate to the asymmetrical pace of industrial development and trade and monetary policies. When productivity is defined this way, there is not much that U.S. science and technology policymakers can do by themselves to bridge the gap, because the remedial tools are in the hands of Congress, the Federal Reserve, and top officials in the executive branch.

However, there are other ways to define science productivity. We found that countries with relatively coordinated science and technology policies, like Japan and the UK, consider bibliometrically defined science output as only one of several indicators of the productivity of those policies, not a goal of them. Accordingly, they do not frame science productivity solely in terms of quantitative growth and consider quality as well. Of course, this raises new difficulties. Although there is a general linkage between basic research and the commercial development of certain science-based technologies, it is difficult to correlate national efforts in basic science with national economic productivity. History demonstrates that many great ideas informing technological innovation and industrial products, like penicillin, photovoltaic power, spintronics, or lithium manganese oxide energy storage, were developed by individuals in countries that did not commercially exploit this knowledge, or were not the first to do so, or were not as successful in doing so as other countries, for a host of reasons. Exploring these reasons lies well outside the scope of our study. But our work highlights the need for comprehensive comparative research that relates national science and technology policy with national industrial policy and illuminates nanotechnology's relative place in these policies as a field developed and promoted primarily by state entities.

We feel bibliometric measurement and its bias towards volume of publication reinforce a competitive, potentially harmful dynamic among researchers in the field of nanotechnology and in the physical sciences generally. Preoccupation with output in academic and policy discourse has helped overshadow the relationship with input as this pertains to productivity. Ongoing investment in increased publication output tends to be uncritically welcomed, but the resulting ‘gold rush’ mentality may result in lower quality and unmet expectations. Yet the bibliometric method does have a place. Useful in some circumstances, it should not be the only factor that informs policy. Like all other metrics, it can be used inappropriately, as Garfield, inventor of the Science Citation Index, cautioned more than a decade ago [36]. We do not argue that our productivity indicator should replace the quantitative publication measure. Instead, we hope our research can play a role in fostering debate about what indicators should be traced and how they should be integrated for evaluation.