Download English ISI Article No. 5305

English Title
Start at the end: empowerment evaluation product planning
Article code: 5305
Publication year: 2004
Length: 11-page PDF
Source

Publisher: Elsevier - Science Direct

Journal: Evaluation and Program Planning, Volume 27, Issue 3, August 2004, Pages 275–285

English Keywords
Community-based organizations, Empowerment evaluation, Evaluation planning, Evaluation reports, Organizational learning, Participatory evaluation

English Abstract

Framing relatively detailed reports and other products before conducting the evaluation process helps novices who are learning to participate in organizational self-assessment. This article describes steps and provides templates to guide the practice of evaluation product planning as part of participative and empowerment evaluation processes with community-based organizations (CBOs). The advantages of this approach, drawn from practice facilitating participative evaluation at multiple CBOs, are reduced resistance to evaluation and clear expectations among the various stakeholders in the evaluation process. This contributes to fuller participation and teamwork, and to more thorough evaluation products.

English Introduction

Early evaluation product planning is one step in a process that aims to evaluate a particular program while building the organization's capacity to evaluate other programs and policies. The process has multiple outcomes: information for program monitoring and continuous quality improvement, information about results to guide value judgments about the worth of the program, development of knowledge and skills among the stakeholders who use this information, and other outcomes identified by the stakeholders. The stakeholders include program personnel and managers, board members, consumers, funders, and others with a critical interest in the program.

The foundation for this process lies in the principles of empowerment evaluation, which aims to ‘foster improvement and self-determination’ within organizations (Fetterman, Kaftarian, & Wandersman, 1996, p. 4). The principles derive from the long-established practice of participatory action research (PAR) (Wadsworth, 1997; Whitmore, 1990; Whyte, 1991), which integrates research into social change processes in ways that help people learn from their own experiences and share them with others. Typically, when empowerment evaluation is used, the goal is to conduct evaluation while also building the organization's capacity for future evaluation and continuous learning (Garvin, 1993; Stevenson et al., 2002; Stockdill et al., 2002; Torres & Preskill, 2001). In human services, empowerment evaluation essentially positions the people who provide and receive services as the participants who make critical decisions about the standards of success, program and organizational practices, lessons learned, and what to share with others. The empowerment evaluation process aims to be democratic, collaborative, and developmental (i.e. goals fit the developmental stage of the program).

Key issues in effective empowerment evaluation center on who participates, to what extent, and in what ways. Inclusion of the diverse stakeholders involved in a program increases the likelihood that the evaluation will have meaning and be used. Michael Patton (1997) emphasizes the importance of the stakeholders' engagement in a process of reaction and reflection as the evaluation yields information. Within an organization that practices systematic and strategic decision making, the communal process of using the evaluation leads to firmer commitment by stakeholders as the organization adapts for the sake of improvement, and it reduces resistance to change. Empowerment evaluation thus relies on the core processes of inclusion, strategic decision making, and reflection.

The principles and ideology of empowerment evaluation are easy to embrace, but the practice of any form of evaluation tends to be more difficult. Personnel at community-based grassroots organizations rarely have advanced formal training in management or evaluation. In my experience, the impetus for initiating evaluation capacity building has come from the funders. They retain evaluation coaches to work with stakeholder groups using a logical process (see, e.g., Dugan, 1996; Early Head Start National Resource Center, 2002). After organizing the various stakeholders, the coaches help the group to specify the evaluation design and detail the procedures to conduct the evaluation.
Together, they develop an evaluation plan that (1) identifies the purpose and design of the evaluation, (2) builds a logic model, (3) establishes a measurement plan and information system for data collection and analysis, and (4) develops a process for using the evaluation, including report production. A work plan specifies who will make evaluative decisions about such matters as intended outcomes, measurement procedures, data interpretation, and report preparation, and when the various tasks will occur.

A typical evaluation coaching process proceeds as follows. The community-based organization (CBO) forms a stakeholder team (board, consumer, staff, funder's representative) to oversee the evaluation and meet with the coach. Particular individuals, usually staff, are designated as the primary participants. The teams meet regularly, bi-weekly at first and then monthly, to coordinate the overall approach. The consultation begins with an extended orientation to evaluation. Between meetings, the team completes evaluation worksheets, implements evaluation activities, and corresponds with the evaluation coaches by phone, fax, and email. Initial coaching efforts focus on demystifying the evaluation process, teaching evaluation basics, and developing an overall plan to support program development, implementation, and evaluation, with the importance of both process and outcome data emphasized. CBO program personnel begin to generate information and use it for program improvement soon after the program starts. Much of the work centers on decision making prompted by such tools as planning forms, logic models, measurement plan grids, and the design of tools for information gathering and reporting. As the evaluation proceeds and data are collected and analyzed, the CBO team meets to interpret the findings. Quarterly, semi-annual, and annual reports are developed. Often, organizations are part of a group of CBOs that receive funding from a common source, and interagency groups are convened quarterly to share lessons learned about evaluation and, eventually, outcomes.

At the community level, few grassroots organizations have developed an organizational culture of accountability and evaluation, although personnel may have participated in training or gathered information on the topic. When they are encouraged to evaluate by their external funders and given coaches to help them do so, many groups eventually begin to resist the evaluation process, finding it time-consuming and a distraction from the heart of their work, which is client service (see, e.g., Schones, Murphy-Berman, & Chambers, 2000). Shorr (1998) observed that such resistance stems from community groups' fears, including that (1) funders will overlook some critical interventions because they cannot be easily measured, (2) program managers will be held accountable for outcomes that are influenced by environmental contexts beyond their control, (3) program evaluation cannot capture the greater whole that is hard to measure, and (4) pursuit of measurable outcomes may cause the most vulnerable or hardest-to-reach populations to be ignored in favor of easy targets. As resistance begins, the evaluation planning is often delegated to one or two individuals within the organization, who oversee record keeping, collection of data based on chosen measures, compilation of information, and report development. Gathering stakeholders to interpret information and reflect on lessons learned, a critical step in making evaluation useful and accepted, can be a challenge.
Various parties begin to question the cost of the evaluation capacity building process relative to its benefits. When the time comes for data to be compiled and reports to be produced and released, organizations often discover that the data are incomplete or inappropriate for the purposes of the program. What seemed appropriate in the planning process turns out not to be quite relevant. For example, the reports may be filled with process descriptions of how the program is evolving, and the most important points are sometimes embedded in lines and lines of text.

Different stakeholders need different information. In CBOs, staff are likely to prefer long narrative reports that document the lessons learned from the process of program development and client service; they are loath to omit any details. Program managers need information that can help them work with staff to make quality improvements; they tend to prefer summary information that describes patterns across clients and services. Consumers, when they are included in evaluation (which is seldom, in spite of encouragement for organizations to include them), prefer brief, bottom-line reports in plain language that describe the kind of people who were served and how they were helped. Executives want positive evaluative information that can be used for marketing and fundraising. Board members are likely to have preferences in common with consumers, except that their bottom line is the number served relative to costs. External funders have similar interests, with the additional caveat that they want assurance that lives were not only touched but actually changed, along with succinct information about how those lives were changed.

Often, at the release of the first evaluation reports, one or more, if not all, stakeholders are disappointed. The report is too long, the data insufficient, and the actual outcomes relative to intended outcomes questionable. Sometimes the group realizes that the intended results they chose, and the measures for them, were not articulated or assessed in a feasible or desirable way.

The experiences summarized thus far, from this evaluator's perspective, yield the following lessons learned and suggest the need for early product planning:

1. Participants in CBOs who are learning participative or empowerment evaluation need concrete images of what the process will produce (otherwise they question the benefit of the evaluation process relative to its costs);

2. Early consensus among multiple stakeholders about the specific format and content of evaluation products builds teamwork and reduces tension or conflict; and

3. Clear expectations regarding products facilitate smoother decision making as the evaluation planning and process evolve; for example, selection of measures is easier when the intended statements about program results are clearly known.

These observations led to the development of an approach to evaluation product planning that is merged with the evaluation planning and implementation process.

English Conclusion

Specific evaluation product planning from the start may seem an obvious or even essential activity, but it is rarely practiced or described in the literature on evaluation processes. Professional evaluators are likely to possess implicit knowledge or assumptions about what the evaluation products will be or look like. Novice evaluators, those engaged in participative or empowerment processes on behalf of their organizations, may have limited or negative experience with evaluation products. Without a concrete vision of how the evaluation process can ultimately help their cause, they may question the value of the evaluation tasks they are asked to practice. Evaluation product planning provides a concrete focus for the hard work of the evaluation process. My experience with this tool indicates that it helps minimize differences among stakeholders, promotes adoption of feasible evaluation plans, clarifies standards for evaluative judgments, and expedites report production.

Stakeholder differences are epitomized by the opening example of Francis' passion for client narrative and the funders' insistence on quantifiable reports. When stakeholders collaborate in evaluation planning, the exchange between such disparate views can be seen and felt as an interpersonal conflict, which makes for tense group dynamics. By focusing on the report template, the discourse becomes more objective and less personalized, centered on the actual items in the report. Stakeholders can anticipate what the report will say and how it may be interpreted. Since they have a common interest in the image of the program, their shared concerns become the focus, rather than their differences.

Empowerment evaluation coaches often deal with feasibility issues when coaching grassroots organizations. Many want to change the world and aim to achieve grand results. What they wish they could do and what they feasibly can do are often quite different. In an evaluation planning process without a template, they are at risk of choosing objectives and measures that may not produce information in the most relevant form. The template helps them recognize the concrete challenges of collecting and compiling information.

Perhaps the most critical function of template planning is to enable the stakeholders to practice making evaluative judgments. By specifying the standards they will use and visualizing how their conclusions might appear, they are forced to become fairly specific about what processes and criteria they will use to make decisions about the worth of the program. For example, if the funder expects a program to reach a certain number of participants with a certain amount of service each, and the provider expects to serve fewer participants with more in-depth service each, they have an opportunity for pre-program dialogue about these expectations. Presumably, they can come to an understanding and agree on practice guidelines so that a mutually agreed standard can be achieved.

Organizational personnel who have used the template tend to love it simply because they know, up front, what the report expectations will be. They like writing sections early, such as the overall outline and the section that describes the program. In grassroots CBOs, some personnel have never produced a report other than a program activity summary that lists numbers served and money spent. Often they have never read a program evaluation report. This may change as the accountability climate for CBOs changes, but it will probably be a long time before personnel move beyond dreading report-writing time. The template eases the last-minute panic that can occur as deadlines approach.

I have encouraged CBO personnel to share their organizational reports with one another, so they can learn from one another's products. In some cases, CBO personnel have told me that they have adapted the product planning process for programs other than the ones for which they were coached, which I regard as a sure indicator that evaluation capacity is being built. For community-based practitioners to routinely ‘Start at the End’ signifies that evaluation is woven into the organizational culture.