Advancing E-Government performance in the United States through enhanced usability benchmarks
|Article code||Publication year||English article length|
|1306||2009||7-page PDF|
Publisher: Elsevier - Science Direct
Journal: Government Information Quarterly, Volume 26, Issue 1, January 2009, Pages 82–88
Abstract

Several E-Government website usability studies in the United States employ content analysis as their research methodology. They use either dichotomous measures or a generic scale to construct indexes for comparative reviews. Building on those studies, this article suggests a content-analysis methodology utilizing Guttman-type scales wherever possible to refine usability assessments. This methodology advances E-Government performance by providing enhanced usability benchmarks that stimulate the organizational dynamics driving performance improvement.
Introduction

Several E-Government website usability studies in the United States employ content analysis as their research methodology. They use dichotomous measures to record the absence or presence of selected variables (Gant et al., 2002, Stowers, 2002, West, 2003a, West, 2003b and West, 2006). Constructed indexes rank website classes (i.e., cities) for comparative review. The Holzer and Kim (2004) international assessment raises the bar by introducing a scaling system for some variables (although the New York City website is the only U.S. website examined). Their four-point scale is generic: it measures the absence or presence of selected variables, the availability of downloadable items, and online governmental interaction capabilities. Building on existing studies, this article suggests a content-analysis methodology utilizing Guttman-type scales wherever possible to refine usability scrutiny. This methodology advances E-Government performance by providing needed “how to” practitioner guidance (Heeks & Bailur, 2006) for enhancing usability benchmarks. Further, it partially responds to Bertot and Jaeger's (2006) call “to improve E-Government for users… [through] research into… ‘best practice user-centered design’.”

The analysis unfolds in five sections. First, it underscores the importance of E-Government usability as the U.S. endeavors to serve a growing digital majority. Second, it discusses usability dimensions and identifies typical respective variables. Third, the article reviews the theoretical framework for the proposed content-analysis methodology. Fourth, it explains the proposed methodology for benchmarks. And, fifth, the study comments on the limitations and contributions of the research. It concludes by arguing that more robust benchmarks cultivate the organizational dynamics that drive performance improvements.
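The Guttman-type scaling the article proposes assumes that a variable's attributes are cumulative: a website achieving a harder attribute should also exhibit the easier ones. A minimal sketch of this idea follows; the item names and the coefficient-of-reproducibility check are illustrative assumptions, not taken from the article.

```python
# Illustrative sketch of a Guttman-type scale for one usability variable
# (e.g., online service delivery). The item names below are hypothetical
# examples of cumulatively ordered attributes, easiest first.
ITEMS = [
    "form_downloadable",            # easiest attribute
    "form_submittable_online",
    "transaction_completable_online",
    "status_trackable_online",      # hardest attribute
]

def guttman_score(responses):
    """Scale score: number of attributes the website exhibits."""
    return sum(1 for item in ITEMS if responses.get(item, False))

def reproducibility(sites):
    """Coefficient of reproducibility: 1 - errors / total responses.

    An 'error' is any observed response deviating from the ideal
    cumulative pattern implied by the site's scale score (i.e., the
    first `score` items true, the rest false).
    """
    errors, total = 0, 0
    for responses in sites:
        score = guttman_score(responses)
        for rank, item in enumerate(ITEMS):
            ideal = rank < score
            observed = bool(responses.get(item, False))
            if ideal != observed:
                errors += 1
            total += 1
    return 1 - errors / total
```

A coefficient near 1.0 suggests the attributes do form a cumulative scale for the websites studied; lower values signal that the assumed ordering does not hold and the scale needs rework.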
Conclusion
Research limitations compel acknowledgement prior to concluding comments on this article's contributions. Limitations arise from several quarters. First, the literature lacks consensus regarding standardized operational definitions of E-Government usability variables. Future research could narrow definitions, thereby increasing reliability. Second, research concurrence on variable attributes for intensity measurement within scales has not been established because of the developmental state of usability studies. Third, the accessibility accommodations dimension of usability studies seems shallow. It requires more research and development to gauge usability for more segments of the disadvantaged population. Fourth, multiple studies using a particular variable do not mean that it is necessarily a valid measure. Further, repeated use of a variable does not signify that it represents the best means for evaluating usability. Both observations highlight drawbacks to the benchmarking approach. Fifth, benchmarking offers an important perspective for evaluating website usability, but the proposed methodology does not claim to be an exclusive strategy for improving E-Government performance. Dichotomous and generic scale measurements of E-Government websites remain useful for general comparison.

In view of the noted limitations, the proposed methodology for enhancing usability benchmarks must be considered carefully. Nevertheless, this article contributes to E-Government research by arguing that more sophisticated scaling advances website usability assessment. Further, fortified by the use of triangulation to establish common variables, the article proposes enhanced usability benchmarks. This provides decision-makers with needed assistance (Kaylor, Deshazo, & Van Eck, 2001) in determining priorities for improving performance on usability dimensions and variables.
Applying the suggested methodological approach sets the stage for more robust data collection and analysis. Furthermore, flexibility is an enduring benefit of the proposed methodology. Modifications to the number of studies reviewed and the agency types scrutinized may be made readily to discern nuances in usability performance. Such adjustments provide opportunities to alter the variable universe as usability studies evolve and become more sophisticated. More sophisticated benchmarks spur better-informed improvement efforts through content analysis.

Two examples demonstrate the proposed methodology's impact on advancing website usability. First, establishing an overall benchmark more robustly identifies further detail for emulation by other public agencies. Such emulation and performance enhancement do not require any new breakthroughs in website usability; modeling the usability achievement of the overall benchmark makes improvement reachable. Second, public agencies within a study population can benefit from comparative analysis against the more robustly documented benchmarks for each established website usability dimension. Overall usability performance improves as public agencies take successful steps to achieve the benchmark dimensional scores. Once again, this performance requires no new breakthroughs in website usability.

In the methodological sphere, the research invites attention to the benefits of operationalizing website usability variables and their respective attributes in some standardized form. While triangulation affords a sound research strategy for designating common variables, quicker progress could be made through standardization: principal researchers could catapult website usability measurement forward by adopting it. In conclusion, this article makes a case for more sophisticated benchmarking in E-Government website usability analyses.
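The dimensional benchmarking described above can be sketched in a few lines: each agency website receives a score per usability dimension, the benchmark is the best observed score on each dimension, and the gap shows where an agency can emulate the leader. All agency names, dimension names, and scores below are invented for illustration (only "accessibility accommodations" is a dimension named in the text).

```python
# Hypothetical sketch of dimensional benchmarking: the benchmark for
# each usability dimension is the best score any agency achieved, and
# an agency's gap is its shortfall from that benchmark.

def dimensional_benchmarks(scores, dims):
    """Best observed score per dimension across all agencies."""
    return {d: max(agency[d] for agency in scores.values()) for d in dims}

def gaps(scores, dims):
    """Per-agency shortfall from the benchmark on each dimension."""
    bench = dimensional_benchmarks(scores, dims)
    return {name: {d: bench[d] - agency[d] for d in dims}
            for name, agency in scores.items()}

# Example usage with invented data:
DIMS = ["online_services", "navigation", "accessibility_accommodations"]
SCORES = {
    "City A": {"online_services": 4, "navigation": 2,
               "accessibility_accommodations": 3},
    "City B": {"online_services": 2, "navigation": 4,
               "accessibility_accommodations": 1},
}
```

No agency need invent anything new: closing a gap means emulating a usability level another agency in the study population has already demonstrated, which is the sense in which the article says improvement "requires no new breakthroughs."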
Rigorous benchmarking engages, and further informs, the potential for significant improvement in the evolution of E-Government studies and corresponding task performance. This promises to push public performance forward through better E-Government.