Download ISI English Article No. 1324
Article Title

The Contextual Benchmark Method: Benchmarking e-Government services
Article Code: 1324
Publication Year: 2010
English Article Pages: 7 (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Government Information Quarterly, Volume 27, Issue 3, July 2010, Pages 213–219

Keywords

Benchmarking, CBM, Contextual analysis, e-Government, Electronic services

Abstract

This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services online. Instead, the more relevant question is how well the electronic services offered by a particular organization perform in comparison with those offered by others. Benchmarking is currently a popular means of answering that question. The benchmarking of e-Government services has reached a critical stage where, as we argue, simply measuring the number of electronic services is not enough and a more sophisticated approach is needed. This paper details the development of a Contextual Benchmark Method (CBM). The value of CBM is that it is both benchmark- and context-driven.

Introduction

Government organizations no longer doubt the need to deliver their services online. Instead, the more relevant question is how well the electronic services they offer perform, for instance in comparison with those offered by other (comparable) organizations. Benchmarking is currently a popular means of answering this question (Janssen, Rotthier, & Snijkers, 2004). The Dutch Ministry of Agriculture had the same question and wondered how to set up a solid, practical, and usable method for benchmarking e-Government services. Solid means that the method should have a reliable foundation; practical and usable mean that it should be easy to apply in practice. The Ministry asked us to help set up this method to assist them in answering their question. The primary goal of the present study is to develop a benchmarking method and to illustrate this method by means of a pilot study.

Benchmarking of e-Government services appeared around the beginning of the twenty-first century (Kaylor, Deshazo, & van Eck, 2001). Bannister (2007) indicates that for the last couple of years at least three benchmark reports have been published per year, which suggests that benchmarking e-Government services has received a great deal of attention. The main goal of benchmarking for government organizations is to improve their electronic services (Aarts, van der Heide, van der Kamp, & Potten, 2005). Improving electronic services should ultimately lead to higher customer satisfaction (Dialogic, 2004), as illustrated by Cascadis (2007) (translated from the Dutch): "You can only improve your performance when you know where you are at." Furthermore, Aarts et al. (2005) mention that the willingness of government organizations to cooperate with one another has increased. This trend provides a positive basis for the application of benchmarking as an approach for improving the performance of services.

Janssen et al. (2004) have described the focus of e-Government benchmark studies. By analyzing 18 international studies, they arrived at the following classification terms: information society, e-Government supply, e-Government demand, and e-Government indicators. Kunstelj and Vintar (2004) have also analyzed monitoring, evaluating, and benchmarking studies in the field of e-Government. They arrived at the following classification terms: e-readiness, back-office, front-office (supply and demand), and effects and impacts.

Current e-Government benchmark studies often take a quite simplistic view of government websites and services and draw sweeping conclusions about their performance. One example is benchmarking the percentage of basic public services available online (Kerschot & Poté, 2001; Wauters & Kerschot, 2002), where services are scored by identifying the level of online sophistication per service. A similar benchmarking approach can be found in the IDA benchmarking report by Johansson, Aronsson, and Andersson (2001). Kaylor et al. (2001) and Ronaghan (2002) also benchmarked the level of online sophistication, in municipalities and across countries, respectively. The latter also compared the ICT infrastructure and human capital capacity of 144 UN Member States.
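To make the stage-based scoring used in these supply-side studies concrete, the minimal sketch below computes the kind of aggregate sophistication indicator they report. It assumes the commonly cited four-stage sophistication scale; the service names and stage assignments are hypothetical.

```python
# Minimal sketch of stage-based online sophistication scoring, assuming the
# four-stage scale used in eEurope-style benchmarks. Service names and stage
# assignments below are hypothetical.

STAGES = {
    0: "no online presence",
    1: "information",          # service information is published online
    2: "one-way interaction",  # forms can be downloaded
    3: "two-way interaction",  # forms can be submitted electronically
    4: "full transaction",     # case handling and delivery are fully online
}

# Hypothetical stage scores for one organization's basic public services.
services = {
    "building permit": 2,
    "birth certificate": 3,
    "income tax declaration": 4,
    "car registration": 4,
}

for name, stage in services.items():
    print(f"{name}: stage {stage} ({STAGES[stage]})")

# Average sophistication as a percentage of the maximum stage.
avg_pct = 100 * sum(services.values()) / (4 * len(services))
# Share of services that are fully transactional (stage 4).
fully_online_pct = 100 * sum(1 for s in services.values() if s == 4) / len(services)

print(f"Average online sophistication: {avg_pct:.0f}%")   # 81%
print(f"Fully online services: {fully_online_pct:.0f}%")  # 50%
```

Aggregate percentages like these are easy to compare across organizations, which explains their popularity; they are also exactly the kind of simplistic indicator whose limitations this paper addresses.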
While the studies presented above concentrate on the supply side of e-Government, the benchmarking study of RAND Europe complements them by focusing on the demand side: it gives attention to perceptions and barriers, in addition to the availability and usage of e-Government services (Graafland-Essers & Ettedgui, 2003). However, the measured indicators are still quite simplistic. Other e-Government benchmarks that are performed on a regular basis include the eEurope benchmark by Capgemini, the e-Government leadership reports by Accenture, the Brown University global e-Government survey, and the UNPAN report by the United Nations (Bannister, 2007).

The benchmarking of e-Government services has reached a critical stage where, as we argue, simply measuring the number of electronic services is not enough and a more sophisticated approach is needed. This is mainly due to the limitations of current approaches to benchmarking. The major problems of current benchmark approaches are that they are costly and time-consuming (Bannister, 2007; Anand & Kodali, 2008), that their quality is poor, and that benchmarking is performed as a one-size-fits-all process. In addition, comparisons can become complicated. As Bannister (2007) mentions, there are no rules for a scoring method, nor for ranking scales that measure mental states such as attitude to technology. This means that benchmark outcomes vary depending on the context. Bannister continues his enumeration of problems by asking whether a metric and a technology are time-invariant and what happens when there is no continued availability of data. Finally, Bannister identifies some conceptual issues of benchmarking by posing three questions: what is the purpose of the benchmark exercise, what is to be measured, and what type of benchmark is it?

In this paper, we describe the Contextual Benchmark Method (CBM). CBM is a more useful approach to these problems because it is a contextual approach. The overall requirements set for CBM are that it is:

• Context-driven: for instance, the method needs to be locally based, available on demand, and self-pacing; and
• Benchmark-driven: for instance, well-defined shared procedures, validated techniques and instruments, and reliable data for comparison are used.

Clearly, with CBM we aim to combine the demands of a benchmark with the advantages of research driven by local context; the sketch below illustrates how Bannister's three questions and these two requirements might be recorded for a concrete exercise. The following sections elaborate on the benchmark and contextual analysis concepts, present CBM, and explain how it works. The paper ends with a discussion and some conclusions.
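As a hypothetical illustration, not part of CBM as published, Bannister's three conceptual questions, together with the context-driven requirement, can be read as the minimum metadata a benchmark exercise should record. All field names and values below are assumptions made for illustration.

```python
# Hypothetical record of a benchmark exercise, capturing Bannister's three
# conceptual questions plus context-driven settings. The field names are
# illustrative assumptions, not CBM's published design.
from dataclasses import dataclass, field


@dataclass
class BenchmarkExercise:
    purpose: str            # what is the purpose of the benchmark exercise?
    indicators: list[str]   # what is to be measured?
    benchmark_type: str     # what type of benchmark is it?
    context: dict = field(default_factory=dict)  # local, context-driven settings


exercise = BenchmarkExercise(
    purpose="improve electronic service delivery",
    indicators=["online sophistication", "perceived ease of use", "trust"],
    benchmark_type="cross-organizational, asynchronous",
    context={"country": "NL", "sector": "agriculture", "self_paced": True},
)
print(exercise)
```

Making these choices explicit up front is one way to avoid the one-size-fits-all problem noted above.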

Conclusion

As stated previously, CBM is both benchmark-driven and context-driven. CBM allows organizations to initiate a benchmarking exercise without the immediate need to reach a collaborative agreement with potential benchmark partners. Using CBM, benchmarking can also be done asynchronously: as the database is filled continuously, organizations can conduct a benchmarking exercise whenever they feel the need for it. All in all, CBM gives organizations the opportunity to benchmark themselves continuously.

CBM can produce high-quality results because predetermined indicators are used. The indicators have all been validated in various academic research studies (e.g. Carter & Bélanger, 2005; Parasuraman et al., 2005; Wang et al., 2004). Without CBM, benchmarking exercises often have to be performed hastily, without proper thought about which indicators to use.

CBM's flexibility allows it to be adapted to local needs. When the CBM database is filled, organizations can not only benchmark asynchronously, they can also benchmark particular aspects of electronic service delivery for their own purposes. Furthermore, in order to actually learn, current performance needs to be benchmarked against past performance; the database used in CBM facilitates this benchmarking over time. Another advantage is that, once the database is populated, organizations can use CBM to conduct other analyses within a single organization, for instance trend analysis; a sketch of such use of the database is given below.

CBM is already used at the University of Twente in several courses. The feasibility of using CBM is also being explored with a group of local government organizations, consulting firms, and our research departments. This innovation group would serve as a platform to use, test, and improve CBM. Because local governments in the Netherlands are starting to implement e-Government services and may want to benchmark those services in order to improve them continually, there are ample opportunities to do so. We believe that, by using CBM in different settings, a robust model will be produced for the benchmarking of e-Government services.

In conclusion, we have endeavored to address the continuous demand by government organizations for improvement of online service quality. We also strive for quality development and trust that the development of CBM contributes to this.
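As a closing illustration, the sketch below shows how asynchronous benchmarking and trend analysis against a continuously filled database might look. The schema, data, and function names are our own assumptions; the paper does not specify CBM's implementation at this level of detail.

```python
# Minimal sketch of asynchronous benchmarking against a continuously filled
# indicator database; the schema and data are assumptions, not CBM's design.
import sqlite3
from statistics import mean

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE scores (
    org TEXT, indicator TEXT, year INTEGER, value REAL)""")

# Hypothetical measurements, submitted by organizations at their own pace.
db.executemany("INSERT INTO scores VALUES (?, ?, ?, ?)", [
    ("org_a", "perceived ease of use", 2008, 3.1),
    ("org_a", "perceived ease of use", 2009, 3.6),
    ("org_b", "perceived ease of use", 2009, 4.0),
    ("org_c", "perceived ease of use", 2009, 3.4),
])

def benchmark(org, indicator, year):
    """Compare one organization's score with the peer average for a year."""
    rows = db.execute(
        "SELECT org, value FROM scores WHERE indicator = ? AND year = ?",
        (indicator, year)).fetchall()
    own = next(v for o, v in rows if o == org)
    peers = [v for o, v in rows if o != org]
    return own, mean(peers)

def trend(org, indicator):
    """Benchmark current performance against past performance over time."""
    return db.execute(
        "SELECT year, value FROM scores WHERE org = ? AND indicator = ? "
        "ORDER BY year", (org, indicator)).fetchall()

print(benchmark("org_a", "perceived ease of use", 2009))  # (3.6, 3.7)
print(trend("org_a", "perceived ease of use"))            # [(2008, 3.1), (2009, 3.6)]
```

Because every measurement is stored with its organization, indicator, and year, the same database supports peer comparison, benchmarking over time, and simple trend analysis within a single organization.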