Strategic environmental assessment quality assurance: evaluating and improving the consistency of judgments in assessment panels
| Article code | Publication year | English article length |
|---|---|---|
| 5665 | 2004 | 23 pages (PDF) |
Publisher: Elsevier - Science Direct
Journal: Environmental Impact Assessment Review, Volume 24, Issue 1, January 2004, Pages 3–25
English Abstract
Assessment panels and expert judgment are playing increasing roles in the practice of strategic environmental assessment (SEA). Thus, the quality of an SEA decision rests considerably on the quality of the judgments of the assessment panel. However, there exists very little guidance in the SEA literature for practitioners concerning the treatment and integration of expert judgment into SEA decision-making processes. Consequently, the performance of SEAs based on expert judgment is often less than satisfactory, and quality improvements are required in the SEA process. Based on the lessons learned from strategic- and project-level impact assessment practices, this paper outlines a number of principles concerning the use of assessment panels in SEA decision-making, and attempts to provide some guidance for SEA practitioners in this regard. Particular attention is given to the notion and value of consistency in assessment panel judgments.
English Introduction
Strategic environmental assessment (SEA) is gaining widespread recognition as a tool for supporting the sustainable development of the environment through policy, plan and program decision-making processes. In recent years, we have witnessed a growing body of literature addressing SEA principles (e.g. Noble, 2000, Partidario, 2000 and Therivel, 1993), methodology (e.g. Noble and Storey, 2001, Brown and Therivel, 2000 and Verheem and Tonk, 2000), and performance criteria (e.g. Fischer, 2002, IAIA, 2002 and Nitz and Brown, 2000). However, as Bonde and Cherp (2000) suggest, SEAs remain less than satisfactory and improvements in the quality of SEA decisions are required (see, e.g. Hazell and Benevides, 2000 and Curran et al., 1998).

The challenge is to consider how practitioners can ensure the quality of strategic decisions given the relatively broad-brush nature of SEA, and the increasing emphasis placed on the role of assessment panels, or groups of informed individuals, selected to assign impact assessment judgments based on experience and expertise. The argument presented here is that the limitations to improved SEA decision-making are largely due to the way in which assessment judgments are analyzed, treated, and applied in the SEA decision process. As more complex evaluation and decision-making methods and techniques are used in SEA, increasing reliance is placed on the judgments and expertise of assessment panels; however, there is very little guidance available to practitioners concerning their use and treatment.

Based on the lessons learned from recent expert-based strategic- and project-level impact assessment case studies, and drawing particularly upon the results of an expert-based SEA of Canadian energy policy (Noble, 2002), this paper attempts to provide some guidance to practitioners on the way in which assessment judgments are solicited, evaluated, and integrated into SEA decision-making processes. The author reports on the results of the SEA case study in detail elsewhere (see Noble, 2002). What follows is a discussion of quality assurance in SEA decision-making, including guidelines for soliciting and analyzing expert judgment, and a detailed discussion of the notion and value of assessment consistency, an issue that has received little, if any, attention in the SEA literature.
English Conclusion
One measure of the quality of SEA performance is the quality of the assessment judgments. As SEA practice continues, and more complex evaluation and decision-making techniques are used, the assessment panel is playing an increasingly important role. The problem relating to SEA quality assurance and decision consistency, however, is that there exists very little guidance for SEA practitioners concerning the use and treatment of expert judgment in strategic assessment processes. This paper set out to provide some direction in this regard. Based on the above discussion, and on the lessons learned from the author's SEA application (Noble, 2002) and other similar expert-based assessments at the plan, program and project level, several guidelines and considerations for the use of assessment panels in SEA practice emerge:

■ First, there is no best method for selecting an assessment panel. Panel size and composition rest on the SEA objectives, the socio-political assessment context, data and information requirements, available time and resources, and what will be credible, and are thus dependent on setting and situation. The knowledge and experience of the assessment panel should reflect the technicality and complexity of the assessment issue.

■ Second, the value of ‘expert’ judgment is often overrated in assessment panel decision-making at the strategic level. SEA practitioners must learn to stringently evaluate the ‘expert’ decision and to consider when the input of local parties and affected interests might be more valuable in leading to SEA decisions.

■ Third, making accurate impact predictions is difficult at the project level, and this difficulty is only exacerbated as one moves to the strategic levels of decision-making. The accuracy of impact predictions, although important, is not a sufficient measure of SEA quality performance.

■ Fourth, consensus is neither a necessary nor a sufficient condition for SEA decision-making, and should not be an indicator of SEA quality. The same lack of knowledge that required the use of an assessment panel likely means that panellists will disagree. SEA decision-making should highlight dynamism and pluralism, rather than hide them, so that the final SEA decision-maker has sufficient information to make an informed choice.

■ Finally, the quality of impact assessment judgments in SEA rests significantly on the consistency of the assessment panellists (one common way of quantifying such consistency is sketched after this section). At the strategic level, particularly when dealing with broad-brush policy issues, consistency is not an accurate reflection of expertise, but rather of informed decision-making. SEA analysts should not assume that expert judgment is more credible than that of the non-expert. What matters for the quality of the SEA is that impact assessment judgments and the final SEA decision are not based on information that is inconsistent and contradictory. The relationships in any SEA system should generate, to the greatest extent possible, a consistent SEA decision structure.

In conclusion, the quality of SEA decisions rests significantly on the quality of assessment judgments. Thus, it is important that the role and treatment of expert judgment receive more attention in SEA practice and in the impact assessment literature in general. There is no ‘best’-practice framework for SEA based on the use of assessment panels; however, several lessons can be learned from recent practices.
This paper attempted to highlight these practices to provide some basic principles and guidelines to SEA practitioners and, ultimately, to contribute to quality improvements in the SEA process.
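As a concrete illustration of the consistency point raised in the conclusion, the following minimal sketch shows one common way judgment consistency is quantified in practice: Saaty's consistency ratio for a pairwise-comparison matrix, as used in the Analytic Hierarchy Process. The paper excerpted here does not prescribe this particular measure, and the judgment matrix below is hypothetical; the sketch is offered only as an example of what a quantitative consistency check on a panellist's judgments can look like.

```python
# Illustrative sketch only: Saaty's consistency ratio (CR) is one standard
# measure of how internally consistent a set of pairwise-comparison judgments
# is. It is not necessarily the measure used in the paper discussed above.
import numpy as np

# Random-index values for reciprocal matrices of order 1..10 (Saaty, 1980).
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(matrix: np.ndarray) -> float:
    """Return Saaty's consistency ratio for a reciprocal pairwise-comparison matrix."""
    n = matrix.shape[0]
    lambda_max = max(np.linalg.eigvals(matrix).real)  # principal eigenvalue
    consistency_index = (lambda_max - n) / (n - 1)
    return consistency_index / RANDOM_INDEX[n]

# Hypothetical judgments from one panellist comparing three policy alternatives:
# entry [i, j] records how strongly alternative i is preferred over alternative j.
judgments = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

print(f"Consistency ratio: {consistency_ratio(judgments):.3f}")  # ~0.003 here
```

By convention, a consistency ratio below about 0.10 is treated as acceptably consistent; a higher value signals contradictory judgments (for example, preferring A over B and B over C, yet C over A) and would prompt the analyst to revisit the panellist's responses before integrating them into the SEA decision.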