Learning Ahead of Time: How Evaluation of Foresight May Add to Increased Trust, Organizational Learning and Future-Oriented Policy and Strategy
|Article code||Publication year||English article length||Persian translation|
|4083||2012||7-page PDF||Available to order|
The English article contains approximately 6,100 words.
Publisher: Elsevier - Science Direct
Journal : Futures, Volume 44, Issue 5, June 2012, Pages 487–493
Evaluation of futures research (foresight) consists of three elements: the quality, success, and impact of a study. Futures research ought to be methodologically and professionally sound, should to a certain extent be accurate, and should have a degree of impact on strategic decision-making and policy-making. However, in the case of futures studies, one does not automatically lead to the other: quality of method does not ensure success, just as quality and success do not guarantee impact. This article explores the new paths for understanding the evaluation of futures studies provided by the various articles in this special issue, and sets out an agenda for next steps in the evaluation of futures research. More structural and systematic evaluation can result in an increased level of trust in futures research, which may in turn lead to more future-oriented strategy, policy and decision-making. Evaluation should therefore be seen as more than a burden of accountability, important as accountability is: it is an investment in the credibility and impact of the profession. It may set in motion a cycle of mutual learning that will not only improve the capacity of futures researchers but will also enhance the capacity and likelihood of decision-makers to apply insights from futures research.
Many people know two stories about foresight. Most know the Shell scenarios that 'predicted' the oil crisis, or at least that is what people recall. And most know the story of IBM foreseeing a market for only a handful of mainframe computers, or similar stories. In this special issue we have attempted to deepen our understanding of the evaluation of futures studies beyond blunt accuracy, rigid methodology, or plain use. Not because we do not value accurate foresight or rigorous methods, but because practical experience and theoretical insight teach us that there is more to it than that. The papers in this special issue set out to explore that issue and arrive at empirical and conceptual findings that offer new insights into how the evaluation of futures studies works and what that means for the profession. In this concluding article, we explore these arguments further and formulate some common denominators that can lead to next steps for the complicated issue of evaluating futures studies. We begin with a discussion of the various papers. After that, we distill some of the recurring themes and dilemmas of the papers and discuss their consequences. We conclude the special issue by setting out two basic strategies for the further development of the field with regard to the evaluation of futures studies.
English Conclusion
All in all, this special issue was not about the evaluation of isolated futures studies, but about improving the profession as a whole. Moreover, the value of improving the profession may reach far beyond the profession itself. The various papers argue that increased trust in and credibility of the profession as a whole can help decision processes in organizations become more future-oriented. More systematic and appropriate evaluation of foresight may therefore lead to more future-oriented policies, strategies and decisions. Paradoxically, that may be the ultimate 'interference' between the measured and the measurement: improving the evaluation of foresight studies will ultimately improve their outcomes and thereby invalidate the measurement itself. We consider such invalidity highly favorable. Let us strive for it.