Effective scientist collaboration and productivity patterns through new institutions: university research centers and scientific and technical human capital
|Article code||Publication year||English article||Persian translation||Word count|
|4836||2010||12-page PDF||available to order||not calculated|
Publisher: Elsevier - Science Direct
Journal: Research Policy, Volume 39, Issue 5, June 2010, Pages 613–624
This paper analyzes the effect of university research centers on the productivity and collaboration patterns of university faculty. University research centers are an important subject for policy analysis insofar as they have become the predominant policy response to scientific and technical demands that have not been met by extant institutions, including academic departments, private firms, and government laboratories. Specifically, these centers aim to organize researchers from across the disciplines and sectors who, collectively as a research unit, possess the scientific and technical capacity relevant to the scientific and technical goals of the sponsoring agencies. In this paper, we measure the productivity and collaboration patterns of university researchers affiliated with a relatively large-scale and “mature” university research center to discern the effects, if any, of the center mechanism on individual scientists and engineers. Based on an analysis of longitudinal bibliometric data, the results from this case study demonstrate affiliation with the center to be effective at enhancing overall productivity as well as at facilitating cross-discipline, cross-sector, and inter-institutional productivity and collaborations.
University research centers and comparable arrangements constitute a key mechanism for the strategic use of science and technology for solving problems (Stokols et al., 2008). Policy scholars’ interest in university research centers began after the establishment in the 1980s of the large-scale (in terms of budget and length of funding cycle) centers programs sponsored by the National Science Foundation, most notably the Engineering Research Centers (ERC) program. The original program was authorized by the US Congress in 1985, with an initial budget of $10 million (Bozeman and Boardman, 2004).1 The creation of the ERC program was an explicit policy response to the perceived economic competitiveness crisis with Japan (Suh, 1986) and was one of a number of mechanisms employed during those years to help bridge the divides between university research, education, and industrial innovation.2 Today, the ERC program is still considered in such a strategic light, having recently been modified in response to current concerns over US competitiveness (Lal et al., 2007). Accordingly, assessments of university research centers and their effects, including but not limited to ERCs, have focused predominantly on the benefits afforded industry partners, including the conduct of applied and commercially relevant research (Gray et al., 2001) and access to upstream modes of knowledge and to students for hire upon graduation (Feller et al., 2002). Few studies have addressed the publication patterns of center-affiliated university faculty. This is surprising for a number of reasons. 
First, the predominant mode of knowledge dissemination for university faculty is publishing, and aggregate statistics on scientific output generally are considered valuable for assessing the rate and quality of scientific production (van Raan, 1996), including for assessment of R&D organizations (Geisler, 1994).3 Changes in the publication patterns of scientists, particularly ones triggered (whether deliberately or not) by new institutions, are of great interest to science policy makers (National Academy of Science, 2007 and Stokols et al., 2008). Second, the primary operationalization of research collaboration in science and technology policy analysis and research evaluation is co-authorship (e.g., Katz and Martin, 1997). As university research centers are policy tools for fostering collaborative networks that create cross-disciplinary and cross-sector synergies to further a field of research and development (Boardman and Corley, 2008), one would expect bibliometric studies of university research centers and their scientists, especially of center scientists’ publications that are co-authored across institutional, disciplinary, and sectoral boundaries. Perhaps one reason there has been so little study of the publishing patterns of university research centers and their scientists is that the manner in which centers may affect individual publishing activities is not sufficiently clear. On one hand, many researchers choose to affiliate with centers to increase their publishing productivity (among other motivations, see Landry and Amara, 1998). In affiliating with a center, researchers may augment their “scientific and technical human capital” (Bozeman et al., 2001) and, with it, their respective abilities to conduct research of different types and publish the results.
On the other hand, many centers are focused on modes of knowledge production that may not be as conducive to publishing as to other forms of dissemination, such as informal knowledge exchange (Ponomariov and Boardman, 2008) and patenting (Dietz and Bozeman, 2005). Moreover, the problem of “additionality” (Georghiou and Roessner, 2000) is omnipresent in the evaluation of policy mechanisms like university research centers. An essential but thorny evaluation question is precisely the extent to which changes in publication patterns may be attributed to the operations of university research centers, versus alternative explanations. The purpose of this paper is to assess the effect of affiliating with a “mature” university research center – the Mid-America Earthquake (MAE) Center, an ERC established in 1997 and headquartered at the University of Illinois at Urbana-Champaign – on the publication patterns of the faculty affiliated with this center. The MAE Center provides an excellent opportunity for bibliometric analysis given that it has successfully reached the end of its funding cycle with the NSF (ten years, with a review and renewal at 5 years) and therefore has had the maximum time (at least under the auspices of the ERC program) to have an effect on the university faculty working there.
Another reason the MAE Center provides a particularly good case for developing a better understanding of how the center mechanism may affect the publishing patterns of university faculty is that the MAE Center is part of the ERC program, which many consider to be the flagship university research centers program in the US and abroad.4 Thus, as a singular case study, the MAE Center is of significant “instrumental” value (Stake, 1995 and Yin, 2003) insofar as the knowledge produced by an examination of how this center has altered the publication patterns and rates of its affiliated faculty can inform policy and management decision making for centers and centers programs more broadly. There are currently thousands of research centers on American campuses, and centers and centers programs have become the hallmark of national- and regional-level science and technology policies in most developed countries. While a single case like the MAE Center will not allow for broad conclusions regarding the general effects of the center mechanism, it can be instrumental in developing policies and management strategies for centers and centers programs insofar as so little is known about how centers alter the knowledge production patterns and rates of university faculty. In this paper, we assess changes in the publishing of university faculty once they affiliate with the MAE Center, using longitudinal data from before and after the faculty joined the center (the analysis is based on scientists’ complete publication histories). We combine bibliometric and survey data5 to assess publishing patterns in a number of ways that speak directly to the primary goals of centers like the MAE Center and instrumentally to centers programs like the ERC program: cross-discipline, cross-sector, and inter-institutional research collaborations.
We operationalize collaborations as publications co-authored by university faculty with researchers at the same university, with researchers in industry, and with researchers at other universities, as well as the number of collaborators of each type.6 In addition to the collaboration goals of centers, we use the MAE Center case to assess the effect of center affiliation on the productivity of university faculty. Therefore, the longitudinal analysis also includes overall yearly publication rates. This additional focus is important for addressing the extent to which center affiliation detracts from or enhances traditional academic behaviors and outputs, which has been an ongoing debate regarding not just university research centers with industry-related missions but also regarding other policies and institutions aimed at facilitating university–industry interactions (see Slaughter and Rhoades, 1996). While this case study is not general enough to resolve the debate, it constitutes one of the first direct empirical tests of the claim that centers detract from traditional modes of dissemination by university faculty. The perspective that guides our analysis is the scientific and technical human capital perspective (Bozeman et al., 2001), which emphasizes individual-level research capacity and how it may be affected by professional linkages and network ties, including but not limited to linkages and ties made by way of affiliation with a university research center.
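The operationalization described above can be sketched in code. The record layout, field names, and affiliation labels below are hypothetical illustrations for clarity, not the authors' actual bibliometric schema.

```python
# Hypothetical sketch: classify a focal faculty member's co-authors on one
# publication by collaboration type (same-university, industry, other
# university), mirroring the paper's co-authorship-based operationalization.
from collections import Counter


def classify_coauthors(publication, focal_affiliation):
    """Count a publication's co-authors by collaboration type.

    `publication["authors"]` is assumed to list the co-authors of the focal
    faculty member (the focal author excluded), each with an affiliation
    string and a sector label.
    """
    counts = Counter()
    for author in publication["authors"]:
        if author["affiliation"] == focal_affiliation:
            counts["same_university"] += 1
        elif author["sector"] == "industry":
            counts["industry"] += 1
        else:
            counts["other_university"] += 1
    return counts


# Illustrative publication record for a UIUC-based focal author.
pub = {
    "year": 2003,
    "authors": [
        {"affiliation": "UIUC", "sector": "academic"},
        {"affiliation": "Georgia Tech", "sector": "academic"},
        {"affiliation": "ABS Consulting", "sector": "industry"},
    ],
}
counts = classify_coauthors(pub, "UIUC")
# counts now holds one co-author of each collaboration type
```

Aggregating such per-publication counts by year, before and after a researcher joins a center, yields the longitudinal collaboration measures the study relies on.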
Given the general purpose of government centers programs to facilitate collaboration (Boardman and Corley, 2008) and to develop research capacity that is different from that developed in traditional academic departments (Bozeman and Boardman, 2004 and Ikenberry and Friedman, 1972), and given that we focus on collaboration patterns and publication rates that are not standard practice for academics (e.g., papers co-authored with industry researchers), the scientific and technical human capital approach is appropriate for the current analysis.7 The next section provides a brief case description of the institutional composition of the MAE Center over time, demonstrating that, since the ERC's establishment in 1997, its participant base has been consistently composed of researchers with divergent disciplinary backgrounds working at numerous universities or in private companies. Such a description is prerequisite to presenting the scientific and technical human capital (S&T human capital) framework we propose in the subsequent section, which includes hypotheses regarding the impact of affiliation with the MAE Center on individual publishing patterns and rates. The following section describes the data and methods and includes discussion of how our research design accounts for the key threats to internal validity. After presentation of the empirical results, which show affiliation with the MAE Center to influence both the patterns (e.g., university–industry, interdisciplinary) and rates of publishing by university faculty, we conclude with a discussion of the policy and management implications of these findings, as well as directions for future research.
Conclusion
The results presented in this study suggest that affiliation with a university research center affects the behavior of affiliated faculty in ways consistent with the common emphases and goals in such center programs: increased productivity, collaboration (including with industry and with colleagues from other institutions), and interdisciplinarity. Though co-authorship and university–industry co-authorships are by now widespread and well-known phenomena (NSB 2008, Vol. 1, NSF-SRS 2004), the findings presented above are among the first to attribute these trends – albeit just for the specific case of the MAE Center – to institutional arrangements like university research centers. Though access to opportunities and resources provided by the center has positively affected publication productivity overall, the strongest impact seems to be in the collaborative behaviors underlying this publication activity. This set of effects is important to emphasize, as it is precisely the combination of these effects that enhances the claim that university research centers represent mechanisms to influence the behavior of scientists towards ends deemed desirable by the sponsoring mission agencies. While these results are “positive” in terms of the direction of effect of MAE Center affiliation on faculty collaboration and productivity, it is important to note that we report these results agnostically. Whether the outcomes we find here constitute benefits (e.g., increased co-authorship) or costs (e.g., decreased research autonomy, “free rider” co-authorship) remains an open debate, one we are unable to engage with the data and design used for this study. The mechanism for such effects was articulated through the lens of S&T human capital.
Specifically, the overarching expectation of the study was that the effects of affiliation would be discernible in faculty affiliates’ respective scientific activities, with the particular configuration of S&T human capital provided by the center being reflected in these activities. The findings support such a mechanism and suggest that the S&T human capital concept is not merely descriptive, but additionally is relevant for designing and evaluating policy mechanisms aimed at influencing scientists’ behaviors and production. However, we acknowledge that the S&T human capital perspective draws heavily on alternate approaches to understanding research collaboration and scientific production (e.g., the resource-based view), and that multiple perspectives and levels of analysis must be considered when explaining the impacts of institutional arrangements like the MAE Center on university faculty behaviors and outputs. To the extent that implications for broader policy can be drawn from a single case study, we feel the demonstration that the behaviors and production of scientists can be changed within relatively short spans of time is instructive. Such a finding is notable in the context of the scientific community, which is notorious for its adherence to strong traditional norms of self-regulation and unrestrained intellectual freedom. While such norms and traditions are indeed rigid and resistant to change, the scientific community seems to respond to appropriately designed sets of incentives and constraints. University research centers, at least those similar to the MAE Center, may be such a set.
While providing discernible incentives to scientists by way of enhancing their S&T human capital – e.g., access to grant funding, equipment and instrumentation, collaborators, graduate students, partnership opportunities with other institutions and with industry, and the ability to work on large and complex projects beyond the reach of individual investigators (Boardman and Bozeman, 2007 and Landry and Amara, 1998) – centers like the MAE Center appear to structure these incentives so as to steer scientists towards using center resources and opportunities in ways consistent with center and programmatic priorities. Future research on the effects of the center mechanism on the conduct of scientists will benefit from better data on sets of centers, either for the same centers program or across programs for the same agency. Such research would be greatly facilitated by uniform requirements from the sponsoring agencies for collecting appropriately structured bibliometric data. Moreover, while bibliometric data have numerous advantages (e.g., they are standardized and therefore reliable), future research could benefit by pairing bibliometric approaches with other methods and data, such as qualitative and survey-based research, to further elaborate the mechanism suggested by the MAE Center case here. While there has been extensive case study of singular centers, there has not been multi-case comparison using both bibliometric and qualitative approaches. Future research could also benefit from new research questions. An important issue that needs to be addressed relates to how durable and lasting center effects are. Though centers and centers programs attempt to facilitate lasting changes in how scientific and engineering education and research are conducted in the long term, the demonstration of relatively short-term impacts (e.g., in this study, in external evaluations) tells very little about the sustainability of center impacts.
Perhaps it is too early for this sort of assessment. But probably not. ERCs, like the MAE Center, date back to the mid-1980s, and there are other centers programs that date earlier than ERCs. The challenge, of course, is that for many of these historic centers that are no longer under the auspices of their founding centers programs, data are scarce, which reinforces our recommendation that centers programs approach data collection and storage more systematically. While we do not advocate the use of standardized performance measures and metrics across all centers, even within the same centers program, currently there is a dearth of information for systematically assessing the sustained impact (or lack thereof) of the predominant science and technology policy mechanism in the US and in many countries in Europe and Asia.