Is diversity in Delphi advisory and discussion groups useful? Evidence from a French forecasting exercise on the future of nuclear energy
| Article code | Publication year | English article |
|---|---|---|
| 1031 | 2011 | 12-page PDF |
Publisher: Elsevier - Science Direct
Journal: Technological Forecasting and Social Change, Volume 78, Issue 9, November 2011, Pages 1642–1653
This paper further enhances the analytical power of Delphi methodology by identifying the advantages, disadvantages and challenges presented by increasing diversity among panel groups. Using Delphi survey data on the future of nuclear energy in France, we analyze the origins of the variety of judgments within and between two panels: one of experts and one of laypeople. We investigate the determinants of the stability of those opinions both in one given round and over several rounds of opinion-formation. We reach an apparently paradoxical conclusion: that non-expert judgment is less stable, but not necessarily less accurate, than that of experts, judgments on the part of experts sometimes being clouded by self-interest. Apart from highlighting some factors underlying the controversy over nuclear power, our paper calls for greater participatory democracy in Delphi panels, but also demonstrates the limits of such an extension.
The Delphi method appears to be a tool of modern foresight and forecasting activities favored by many countries. The technique is seen to be an efficient procedure either to “obtain the most reliable consensus of opinion of a group of experts… by a series of intensive questionnaires interspersed with controlled opinion feedback” [1, p. 458] or to “identify dissent or non-convergence” [2, p. 3]. In this procedure panelists are asked to give an initial opinion on a given topic, and are then given access to ideas expressed by others (the status of respondents not being provided), after which they are able to revise their original opinion in the light of the feedback they receive. This iteration process is repeated until a minimum of stability in panelists' responses is reached. Such consensus – measured by a reduction in the variance of judgments over a number of rounds – is commonly observed in the literature on the Delphi method, either when panelists are experts, or when student panels (non-expert panels) are analyzed, these two types of panels accounting for most of the existing studies. It might be noted that the issue of the impact of panel composition on Delphi performance has seldom been investigated, even though [3, p. 372] conclude that “the validity of the technique will depend on the nature of the panelists and the task they have”, calling for “more experiments to examine how expertise interacts with aspects of the Delphi technique and how it relates to accuracy improvement over rounds”. Finally, while many empirical studies have made use of the Delphi methodology for foresight or international comparison, this paper aims to add to the existing literature by addressing the methodological issues associated with this tool, dealing with the principal criticisms expressed in the literature regarding the selection process of experts, the composition of panelists, and their joint impact on the analytical power of the Delphi technique.
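The feedback-and-revision loop described above can be sketched numerically. Below is a minimal toy simulation, assuming panelists give numeric forecasts (e.g. the year a technology will be deployed), that the controlled feedback is the group median, and that stability is detected as a small round-to-round change in variance; the update weight and tolerance are illustrative assumptions, not values from the paper.

```python
import statistics

def delphi_rounds(initial, weight=0.3, tol=0.05, max_rounds=10):
    """Toy Delphi iteration: each round every panelist sees the
    group median (the 'controlled opinion feedback') and moves a
    fraction `weight` of the way toward it; iteration stops once
    the variance of judgments changes by less than `tol`."""
    opinions = list(initial)
    history = [opinions[:]]
    prev_var = statistics.pvariance(opinions)
    for _ in range(max_rounds):
        med = statistics.median(opinions)      # feedback shown to panelists
        opinions = [o + weight * (med - o) for o in opinions]
        history.append(opinions[:])
        var = statistics.pvariance(opinions)
        if abs(prev_var - var) < tol:          # stability criterion reached
            break
        prev_var = var
    return history

# Hypothetical first-round forecasts of a deployment year:
rounds = delphi_rounds([2015, 2020, 2030, 2050, 2100])
```

Under this linear update the variance of judgments falls by a factor of (1 - weight)^2 each round, mirroring the reduction in dispersion that the literature commonly reads as consensus.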
This approach resonates with problems observed in contemporary techno-economic and political spheres. Recent examples of rising public skepticism towards scientific and technological discoveries abound: genetically-modified organisms (GMOs), mad cow disease in Western Europe, climate change, the use of bisphenol A in baby bottles, and so forth. The multiplication of such controversies challenges the idea of Sound Science, that is, the appropriateness of decisions founded exclusively upon the knowledge of scientific experts. The role of participatory democracy in activities involving prediction is increasing, and it calls into question the relevance of panels composed exclusively of experts, leading to calls for the inclusion of non-experts in forecasting panels. But the resort to participatory democracy should not be motivated by ethical considerations alone; it should also be a means of ensuring analytical precision. One might therefore wonder whether the inclusion of non-experts among Delphi panelists might contribute to a greater appropriateness of decisions based on Delphi results and a greater readiness on the part of the general public to accept those decisions. The present study is directed to an empirical analysis of the advantages and disadvantages of diversity among Delphi panelists. Our ultimate goal is to test whether diversity of opinions might lead to greater robustness, and might thereby facilitate political decision-making. Indeed, understanding whether or not different groups of people (experts vs laypersons) rely on divergent rationalities, and whether they are nonetheless able to reach a consensus, could be of assistance in composing reliable panels of inquiry when technological forecasting and political decisions are at stake. We will test our research hypotheses on the nuclear sector. Several factors account for our choice.
First, during recent years, and as in the controversies over GMOs or climate change, the use of nuclear power for civilian purposes has frequently been the subject of important public controversies, sharpened in the wake of major nuclear accidents (Chernobyl in 1986; Fukushima in 2011) and extending beyond scientific concerns. The adoption of more radical opinions, both among experts and laypersons, can be anticipated. Second, knowledge in this domain is very complex and constantly evolving, creating a clear divide between laypersons and experts, or even among experts. Third, decisions in the domain of civil nuclear energy have a major public impact, and this is likely to influence energy policy at the State level.

The paper is organized as follows. We first present a survey of the impact of the composition of panels on the forecasting process and its performance, so that we might construct some research hypotheses and develop our analytical model. We point out that discrepancies in the judgments made by experts and laypersons can generate diversity. Subsequent sections are devoted to the empirical testing of the working hypotheses elaborated in the first part. We conduct an empirical analysis using data collected during a technological foresight survey on the future of nuclear energy in France. This survey uses the Delphi method and involves a panel that includes both experts and non-experts. We investigate the diversity of judgments within the two populations and over repeated rounds in order 1) to test for differences and 2) to investigate their origins and stability. Finally, we consider the relative merits of including non-experts in Delphi panels, and provide some practical recommendations to Delphi users.
Conclusion
The main aim of this paper was to investigate whether the inclusion of non-experts in forecasting activities necessarily gives rise to less objective, less stable and less accurate assessments. Specifically, we analyzed the consequences and challenges raised by forecasts and decisions built on the results of Delphi surveys run on heterogeneous panels including both experts and non-experts, taking nuclear energy as a specific illustrative case. Our analysis shows that the level of expertise in the sector under analysis dramatically influences an agent's opinion on a controversial topic such as nuclear power. This finding may be interpreted as evidence of diverging initial viewpoints between experts and laypeople over the value of nuclear technologies for the future of Society. Hence our case study, together with an original definition of experts and laypeople, confirms that using Delphi analyses with heterogeneous panels could be worthwhile as a means of increasing the variety of first-round opinions. This variety should be encouraged because the original judgments expressed by experts may be less accurate and sound than traditionally assumed in the literature, and by experts themselves. Indeed, we find that some experts may be subject to self-serving bias during their evaluation process. Our results thus exhibit the virtues of diversity among panelists: it generates a variety of opinions in the first round, which in turn can be a token of scientific robustness, in contrast to more homogeneous but sometimes biased expert opinion. Overall, in line with Callon et al., one may conclude that the composition of assessment panels and discussion groups has to include both experts and laypersons, even if this might imply a slow-down in the pace of the decision-making process. Increased variety in discussion groups and in foresight panels constitutes a prerequisite for the quality of the results.
It allows the overall opinion of the group or panel to move towards the needs and expectations of Society, to avoid possible bias and, last but not least, to legitimate its advice vis-à-vis Society (although this has not been tested in the present paper). The next question is how one might reconcile the heterogeneous first-round opinions formed by heterogeneous panelists, for the purpose of facilitating public decision-making. Indeed, for the Delphi technique to be useful for policy makers one should still consider an appropriate way of reconciling diverging first-round judgments, so as to allow for the emergence of a consensus. Are disagreements between experts and laypersons unalterable, or is it possible to achieve consensus through a process of change over successive rounds of judgments? In our specific case, panelists were only allowed to change their judgments on the forecast time schedule for the development of the technology. We found that while laypersons have original ideas (in the first round of the poll), their opinions are less stable than those expressed by experts over subsequent rounds. Indeed, the original dispersion tends to shrink towards the experts' original views (the most stable ones) through iteration, reflecting laypersons coming to agree with expert (potentially biased) views. Thus the original divergence vanishes after several rounds, which reduces the usefulness of introducing first-round diversity. Such diversity nevertheless retains its worth, at least for ethical reasons, laypersons (and the general public) having the opportunity to express their views on controversial topics. One has, however, to keep in mind that we reach this conclusion for a particular type of forecast, namely the time schedule for the practical implementation of different nuclear-based technologies. Good forecasts in this case are mainly rooted in experiential and highly specialized technological knowledge.
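The convergence pattern described here (dispersed lay opinions drifting toward the experts' more stable ones over successive rounds) can be illustrated with a toy two-subgroup simulation. All parameters below, the update weights, initial forecasts and round count, are hypothetical choices made only to reproduce the qualitative pattern; none are estimated from the survey data.

```python
import statistics

def mixed_panel(experts, laypersons, w_exp=0.05, w_lay=0.6, n_rounds=5):
    """Two-subgroup toy Delphi: both groups see the pooled median
    as feedback, but experts revise weakly (w_exp) while laypersons
    revise strongly (w_lay), i.e. lay opinions are less stable."""
    exp, lay = list(experts), list(laypersons)
    for _ in range(n_rounds):
        med = statistics.median(exp + lay)  # pooled feedback each round
        exp = [o + w_exp * (med - o) for o in exp]
        lay = [o + w_lay * (med - o) for o in lay]
    return exp, lay

# Hypothetical first-round forecasts of a deployment year:
exp0, lay0 = [2025, 2030, 2035], [2010, 2015, 2060, 2080]
exp_final, lay_final = mixed_panel(exp0, lay0)
```

After a few rounds the laypersons' widely dispersed forecasts have collapsed onto the experts' initial median, while the experts themselves have barely moved: the first-round diversity vanishes, exactly the dynamic discussed above.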
One may therefore wonder whether such a conclusion holds for opinions that involve personal beliefs and values to a greater extent (and technological knowledge to a lesser extent), as is the case for the importance awarded to a given technology, or more generally when societal choices are at stake. In those cases, laypersons may be less willing to legitimate experts' judgments and less ready to revise their original opinions (and converge towards the experts' ones). In this type of forecasting exercise, the inclusion of laypeople in panels would thus be useful in providing experts with the opportunity to encounter wider arguments. Should the panelists not reach consensus, diversity in the panel composition would nevertheless be of interest in exhibiting fundamental dissent among citizens. A possible direction for further research would therefore be to test this intuition and explore whether the likelihood of consensus depends on the nature and scope of the forecasts participants have to make (selection of technological priorities for a country vs practical implementation of a given technology). There are additional caveats one should bear in mind which might explain our results. First, while our sample includes less-expert participants, the participants are still from the energy sector and hence probably supportive of (and consensual on) it. Participants who are even more “lay” (e.g. with a scientific background but from a non-energy sector) might be less willing to conform to the consensus view. Second, we ran our econometric analysis on a very limited sample of experts, which might explain the limited dispersion of their original opinions and their reduced propensity to change their minds as compared to laypersons. Third, panelists received only basic feedback in the present case study. They did not exchange the rationales for their respective predictions, nor were they informed about the level of expertise of the other panelists.
All this could impact the degree of opinion change in panelists over rounds. In such a context, does Delphi need recasting to take greater account of non-expert opinion? If so, then how? The question remains open, but we suggest below some directions for further research, considering the limits of the present paper. A renewal of the Delphi method could consist in highlighting the heterogeneity of judgments by giving all panelists feedback on the median opinion of experts on the one hand, and of laypersons on the other, thus stressing the existence of subgroups within the panel. In that case, “conflict can improve the perceived quality of the decision […] but it may weaken the ability of the group to work together in the future”. Indeed, if made aware of the other group's position, a given group might be tempted to strengthen its original judgment in order to reduce the likelihood of the opposite judgment becoming dominant. As stressed by Wright and Rowe, the other group's judgment might be rejected not only because members of the first group perceive a significant divergence of values between themselves and the second group, but also because they know the second group might suffer from self-serving bias. Thus, as soon as a panelist can see that another panelist holds a divergent judgment because he/she belongs to a self-interested group, the likelihood of the former changing his/her mind decreases. Since we show that experts' opinions might also be biased, convergence towards experts' judgments does not seem automatic and may require additional iterations, without any guarantee of success, when success is measured in terms of consensus (which is not always the case).
Finally, there could be a danger in using Delphi with mixed panels (of experts and non-experts) together with new feedback processes, since doing so would probably reduce the likelihood of reaching (potential) consensus and, at the same time, increase the difficulty for policy-makers seeking to take decisions based on diverging Delphi outcomes. On the other hand, as mentioned before, identifying dissent may also be valuable in alerting policy makers to the need for better communication and additional argumentation on the technological choices they have made for the country, or to the need for a totally new policy. To conclude: much remains to be done before there is a definitive answer as to how the Delphi method might be recast to improve its capacity to account for heterogeneous opinions and promote participatory democracy.