Download English ISI Article No. 1336
Persian Translation of the Article Title

e-Government Benchmarking: A Comparison of Frameworks for Computing and Ranking the e-Government Index

English Title
Benchmarking e-Government: A comparison of frameworks for computing e-Government index and ranking
Article Code: 1336
Publication Year: 2011
English Article Length: 9 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Government Information Quarterly, Volume 28, Issue 3, July 2011, Pages 354–362

Keywords

Benchmarking, e-Government index, e-Government ranking

English Abstract

Countries are often benchmarked and ranked according to economic, human, and technological development. Benchmarking and ranking tools, such as the United Nations' e-Government index (UNDPEPA, 2002), are used by decision makers when devising information and communication policies and allocating resources to implement those policies. Despite their widespread use, current benchmarking and ranking tools have limitations. For instance, they do not differentiate between static websites and highly integrated, interactive portals. In this paper, the strengths and limitations of six frameworks for computing e-Government indices are assessed using both hypothetical data and data collected from 582 e-Government websites sponsored by 53 African countries. The frameworks compared include West's (2007a) foundational work and several variations designed to address its limitations. The alternative frameworks respond, in part, to the need for continuous assessment and reconsideration of generally recognized and regularly used frameworks.

English Introduction

International organizations, such as the United Nations and the World Bank, regularly undertake significant studies to produce rankings of countries on a wide range of features, including information and communications technology. The benchmarked facets include healthcare (World Health Organization, 2000), education (Dill & Soo, 2005), press freedom (Reporters Without Borders, 2009), corruption and governance (World Bank, 2009), e-readiness (Hanafizadeh, Hanafizadeh, & Khodabakhshi, 2009), e-responsiveness (Gauld, Gray, & McComb, 2009), peace (Institute for Economics and Peace, Economist Intelligence Unit, 2010), happiness (New Economics Foundation, 2009), sports (e.g., FIFA, 2010), and, of primary importance to this paper, e-Government (United Nations, 2003, 2004, 2005, 2008, 2010; West, 2007a; UNDPEPA, 2002). The rankings draw on various types of indices, such as the human development index (UNDP, 2009; Haq, 1995), the e-readiness index (United Nations, 2005), the global peace index (Institute for Economics and Peace, Economist Intelligence Unit, 2010), and the e-Government index (UNDPEPA, 2002).

Benchmarking indices and indicators are generally quantitative in nature and collectively form a framework for assessment and ranking. Some frameworks are based on measurable characteristics of the entities; others use one or more subjective measures; a few employ a combination of both. Frameworks based on grounded and broadly applicable measures tend to attract fewer criticisms. Those based on subjective measures often result in controversies and complaints, especially from countries or institutions that believe they were not accurately characterized. To maximize the acceptability of results, rankings should be based on well-understood and well-supported frameworks and indices, and on sound computational procedures.

e-Government indices are benchmarking and ranking tools that retrospectively measure the achievements of a class of entities, such as government agencies or countries, in the use of technology. Policymakers and researchers use e-Government benchmarking studies to help monitor the implementation of e-Government services, using the information to shape their e-Government investments (Heeks, 2006; Osimo & Gareis, 2005; UNDPEPA, 2002). The results of benchmarking and ranking studies, particularly global projects conducted by international organizations, attract considerable interest from a variety of observers, including governments (ITU, 2009). e-Government benchmarks are used to assess the progress made by an individual country over a period of time and to compare its growth against that of other nations.

Among the first organizations to propose an e-Government index and rank countries on the basis of their e-Government service delivery was the United Nations Division for Public Economics and Public Administration (UNDPEPA, 2002). The United Nations followed up with revisions and further proposals (United Nations, 2003, 2004, 2005, 2008, 2010; UNDPEPA, 2002). Others have also contributed proposals for benchmarking e-Government (West, 2004, 2007a, 2007b; Bannister, 2007; Ojo et al., 2007) and e-readiness (United Nations, 2008; Bakry, 2003). Despite their wide use, the current procedures for computing e-Government indices have significant limitations.
For instance, they do not differentiate between websites that provide static information and those that are full-service, highly interactive portals. Further, the frameworks tend not to account for the stages of e-Government development, or for whether a nation's websites are proportionate to its level of development. In this paper, we propose a number of procedures for computing e-Government indices, expanding the current frameworks by introducing techniques that account for the stages of development of e-Government services, as suggested by Al-adawi et al. (2005), Affisco and Soliman (2006), and others (United Nations, 2010; UNDPEPA, 2002; Layne & Lee, 2001).

As a foundation for our presentation, we review various classification models of e-Government development, then discuss benchmarking generally and in terms of e-Government. The article continues with an overview of the sample data. We then present and compare six separate frameworks for computing e-Government indices, each accounting for slightly different factors. Finally, we offer some conclusions and recommendations for future work.
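To make the idea of stage-aware scoring concrete, the minimal Python sketch below assigns each assessed website a weight according to a four-stage development model in the spirit of Layne and Lee (2001) and averages those weights into a normalized country index. The stage labels, numeric weights, and averaging rule are illustrative assumptions for this sketch; they are not the specific procedure of any of the paper's six frameworks.

```python
# Illustrative sketch of a stage-weighted e-Government index.
# The four stages loosely follow Layne and Lee's (2001) model; the
# numeric weights and the averaging rule are hypothetical choices
# made for illustration, not the paper's exact computation.

# Hypothetical stage weights: more developed services contribute
# more to the index than static, catalogue-style sites.
STAGE_WEIGHTS = {
    "catalogue": 1,               # static information only
    "transaction": 2,             # forms and executable online services
    "vertical_integration": 3,    # services linked across levels of government
    "horizontal_integration": 4,  # fully integrated one-stop portal
}

def egov_index(site_stages: list[str]) -> float:
    """Average stage weight of a country's websites, normalized to [0, 1].

    `site_stages` holds one stage label per assessed website.
    Returns 0.0 for countries with no assessed web presence.
    """
    if not site_stages:
        return 0.0
    max_weight = max(STAGE_WEIGHTS.values())
    total = sum(STAGE_WEIGHTS[stage] for stage in site_stages)
    return total / (len(site_stages) * max_weight)

# Under this weighting, a country with a few highly developed portals
# outscores one with many static sites, which is the behavior the
# stage-aware frameworks are designed to capture.
print(egov_index(["horizontal_integration", "transaction"]))  # 0.75
print(egov_index(["catalogue"] * 10))                         # 0.25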

English Conclusion

Benchmarking and rankings are commonly used to determine relative standing and to monitor the progress of entities with respect to a characteristic or achievement goal. For policymakers, benchmarking tools such as West's e-Government index serve as information sources, and the relative rankings of countries they produce are given a fair amount of attention and importance. To inform sound policy and decision making and to encourage optimal resource allocation, grounded and broadly applicable ranking frameworks are crucial.

Some current e-Government ranking and index computation procedures, in particular West's (2007b) e-Government index, do not recognize that e-Government websites evolve over time from static catalogs of information to fully integrated portals. In this article, we contrast six frameworks designed to account for websites' levels of e-Government service development. Our results indicate that frameworks assigning weights to websites proportional to their level of e-Government service development (frameworks 2 through 6) present a more accurate picture of e-Government services than frameworks that do otherwise. Under frameworks 2 through 6, countries with websites at a lower level of development, even when more numerous, are not assessed as highly as countries with fewer sites overall but higher levels of e-Government development.

Among the preferred frameworks (2 through 6), we believe that framework 6 is superior because it incorporates the strengths of the other frameworks while overcoming their limitations (see Table 6). This last framework produces relative e-Government index values that more fully reflect the features and functionality of e-Government websites. It also allows for an easier rescaling to values between 0 and 100, which is common practice for most indices. Finally, the highest correlation between the e-Government indices computed from our sample data for African countries and the countries' e-readiness index for 2008 (United Nations, 2008) was achieved using framework 6.
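For readers who want the rescaling and correlation steps made concrete, the short sketch below min-max rescales a set of index values to the conventional 0-100 range and computes a Pearson correlation against a second index. The sample numbers are invented for illustration; they are not the paper's African-country data, and the paper does not specify the exact rescaling or correlation procedure it used.

```python
# Sketch of two post-processing steps mentioned for framework 6:
# rescaling raw index values to 0-100 and correlating the result
# with an external index such as the UN e-readiness index. All
# data below are hypothetical.
from math import sqrt

def rescale_0_100(values: list[float]) -> list[float]:
    """Min-max rescale index values to the conventional 0-100 range."""
    lo, hi = min(values), max(values)
    if hi == lo:  # degenerate case: every country scored alike
        return [0.0 for _ in values]
    return [100.0 * (v - lo) / (hi - lo) for v in values]

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired data: a computed e-Government index and the
# corresponding e-readiness scores for the same four countries.
egov = rescale_0_100([0.12, 0.35, 0.50, 0.75])
ereadiness = [0.20, 0.31, 0.45, 0.66]
print(pearson(egov, ereadiness))
```

A high coefficient here would mirror the kind of agreement the authors report between their framework 6 indices and the 2008 e-readiness index, though the comparison above is purely illustrative.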
The success of any benchmarking study is partly dependent on the availability of relevant data. As long as a country has some governmental presence on the World Wide Web, West's (2007a) mechanisms (framework 1) and others based on this framework (e.g., frameworks 2 through 6 and other Web-based indices) can be applied. These frameworks compute indices from objective measures that can be compiled and computed with ease and in a relatively short time, even by countries or groups with limited resources. We believe this firm objective basis is one of the strongest components of our frameworks.

As for weaknesses, we concede that our analysis does not include every possible framework for benchmarking e-Government service websites and countries; such a task would far exceed the scope of this article. Nor can we claim that the frameworks presented are without weaknesses. First, a number of classifications of the stages of e-Government service development exist; the one chosen for our frameworks might prove to be less effective than others. Second, our specific method of assigning weights to e-Government websites proportional to their levels of e-Government service development is but one of many methods that could be used. It may inappropriately assume that consecutive levels of e-Government service development are equidistant (e.g., that a jump from level 1 to level 2 is equivalent to one from level 3 to level 4). Finally, our method of weighting website features relative to online executable services, while efficacious (at least in the context of framework 6), could be adjusted if a more appropriate approach is discerned.

A further limitation of our work stems from the use of point-in-time snapshot data on e-Government service websites. Longitudinal benchmarking, rather than a one-time look, should provide a better sense of the progress being made by countries in terms of e-Government services (Kaylor et al., 2001). Such a study would also provide a robust dataset that could be used to test the reliability of future benchmarking tools and techniques. Further application and testing of the frameworks is also required in countries other than those in Africa (e.g., EU countries, the U.S., and OECD members).

We are also mindful that our frameworks may not adequately measure the success of an e-Government service website or platform. Benchmarking evaluations should be extended to include other means of access to and delivery of e-Government services, such as digital television, mobile technologies, and telecenters. Other approaches, advocated by researchers such as Kunstelj and Vintar (2004), attempt to assess the impact of e-Government on the economy, on social and democratic processes, and on organizations and their work methods. We fully support these more comprehensive approaches, but remain steadfast in our belief that frameworks based on simple, grounded, and broadly applicable measures, such as those presented in this article, serve well as the basis for building more complex frameworks that account for additional factors such as technology adoption and use.

Given the widespread use of benchmarking results by policymakers, practitioners, and funding agencies, future work should continue our focus on mitigating the various limitations of frameworks used to compute e-Government indices and to produce rankings. Continuous assessment and reconsideration of e-Government benchmarking frameworks is crucial for sustained improvement. The assessment approach and the alternative frameworks presented here fuel such efforts, helping to ensure that benchmarking systems, and the limitations of those efforts, are well understood.