Assessing the end-use relevance of public sector research organisations
|Article code||Publication year||English article length||Persian translation|
|8220||2004||15-page PDF||Available to order|
Publisher: Elsevier - Science Direct
Journal: Research Policy, Volume 33, Issue 1, January 2004, Pages 73–87
Measuring the effective impact of research and its relevance to society is a difficult undertaking but one that the public sector is keen to embrace. Identifying end-users of research and capturing their views of research relevance are challenging tasks and not something that has been extensively reported. The evaluation of end-use relevance demands a shift in organisational mindset and performance indicators away from readily quantifiable outputs towards a consideration of more qualitative end-user outcomes that are less amenable to measurement, requiring both a greater tolerance of ambiguity and a willingness to learn from the evaluation process.
Policy makers are increasingly under pressure to make sure that taxpayers’ money is spent well and produces useful and relevant research that represents good “value for money” (NAO Comptroller and Auditor General, 2000, 2001; HM Treasury, 2002). This is not solely a UK concern and is being addressed on the international science policy scene (Natural Resources and Environment, 2001a, 2001b; Spaapen and Wamelink, 1999). However, it is perhaps a particular consideration in the UK where our reputation for excellent science and poor application gives an added impetus to ensuring that research is relevant and contributes both to the UK’s economic competitiveness and the quality of life of its citizens. This paper reports on some of the methodological issues raised by a study of end-use relevance conducted in Scotland on behalf of the Scottish Executive Environment and Rural Affairs Department (SEERAD). In autumn 2001 the Agricultural and Biological Research Group (ABRG) within SEERAD began a major research organisation assessment exercise of seven Scottish agricultural and biological research organisations1 using a system of peer review by Visiting Groups. The research organisation assessment exercise covered the period 1996–2001 and included an assessment of each organisation’s quality of science and knowledge transfer and exploitation as well as the end-use relevance assessment. The remit of the end-use relevance assessment, reported here, was to provide the Visiting Group with a briefing on the end-user interactions at the institute level, investigating the impacts and benefits of the research programmes in seven of the ABRG supported organisations.
The study focused on a wide range of end-users and clients and the relevance to their needs of the research undertaken by the research organisations (ROs), reflecting SEERAD’s requirements to promote engagement with as wide a range of end-users as possible. Although the outcomes of this evaluation for each RO were confidential to SEERAD and are not reported here, the study raised a number of methodological issues pertinent to the wider assessment of end-use relevance and the societal impact of research and may offer some lessons for future development in performance measurement. Section 2 of this paper considers good practice in end-user relevance assessment through a short literature review; Section 3 outlines the assessment goals and Section 4 describes the research methodology in more detail. Section 5 reflects on the research methodology and outcomes, offering some insights on issues such as sampling processes, concerns about confidentiality, evaluation timescales, and the application of policy learning in the public sector. The final section draws some conclusions from these reflections that we hope will be useful in future evaluations of public sector research, particularly with respect to end-use relevance.
Conclusion
The Royal Netherlands Academy of Arts and Sciences (2002) makes the case for a single, widely accepted methodology for the evaluation of societal impact (in applied health research). Our experience in assessing the end-use relevance of public sector research organisations in Scotland leaves us less convinced that it is possible, or desirable, to produce a standardised approach that yields an “off the shelf” toolkit for end-use relevance assessment. We believe that our methodology, which combines a range of qualitative and quantitative evaluation tools, including interviews, focus groups and a questionnaire survey in conjunction with desk research, documentary analysis and a stakeholder analysis, does provide an effective insight into both an institute’s end-use strategy and the perspective of the end-users on the institute’s performance. It is not, however, a “one size fits all” approach and has to be guided by the institute’s research mission and tailored to the individual institute’s circumstances. This flexible approach can lead to criticisms on the grounds of consistency, reproducibility and robustness. Nevertheless, the overall approach was regarded as helpful by the client and it did allow us to draw meaningful conclusions for each institute; to discriminate between the different end-user engagement strategies of the different ROs; and to evaluate the different end-users’ experiences of these strategies. The evaluation of research relevance is undoubtedly a challenging endeavour and we would caution against raising unrealistic expectations amongst end-users. Having started down the path of end-user engagement, how you meet and manage the future expectations of your users is a crucial issue. Although the process critiqued in this paper provides a useful baseline evaluation, its application within the research institutes will be the true test of its worth.
Only by embedding end-user relevance in their strategic research planning and by using the assessment process as a learning tool will the institutes and their end-users gain the real benefit of such evaluations. This requires ownership of the process by those being evaluated rather than regarding it as a peripheral activity required by a funder once every 5 years.