Decision support system for usability evaluation of web-based information systems
|Article code||Publication year||English article length|
|5517||2011||9 pages (PDF)|
Publisher: Elsevier - Science Direct
Journal: Expert Systems with Applications, Volume 38, Issue 3, March 2011, Pages 2110–2118
In this study, a decision support system (DSS) for the usability assessment and design of web-based information systems (WIS) is proposed. It employs three machine learning methods (support vector machines, neural networks, and decision trees) and a statistical technique (multiple linear regression) to reveal the underlying relationships between overall WIS usability and its determinative factors. A sensitivity analysis of the predictive models is performed, and a new metric, the criticality index, is devised to rank the determinative factors by importance. The checklist items with the highest and lowest contributions to the usability performance of the WIS are identified by means of the criticality index, and the most important usability problems for the WIS are determined with the help of a pseudo-Pareto analysis. A case study of a student information system at Fatih University is carried out to validate the proposed DSS, which can be used to decide which usability problems to focus on in order to improve the usability and quality of WIS.
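The criticality index is not specified in detail in the abstract, so the following is a minimal sketch of one plausible reading: perturb each determinative factor around a baseline input, measure the change in the model's predicted usability, and normalize the changes into an importance ranking. The stand-in linear predictor and its weights are purely illustrative assumptions, not taken from the paper, which trains SVMs, neural networks, decision trees, and multiple linear regression.

```python
def predict_usability(x):
    # Stand-in predictive model; the weights are illustrative, not from the paper.
    weights = [0.5, 0.3, 0.1, 0.05]
    return sum(w * xi for w, xi in zip(weights, x))

def criticality_index(predict, baseline, delta=0.1):
    """Rank each input factor by how much perturbing it changes the output."""
    base = predict(baseline)
    scores = []
    for i in range(len(baseline)):
        perturbed = list(baseline)
        perturbed[i] += delta
        scores.append(abs(predict(perturbed) - base))
    total = sum(scores) or 1.0
    return [s / total for s in scores]  # normalized so the indices sum to 1

baseline = [3.0, 3.0, 3.0, 3.0]       # illustrative mid-scale checklist scores
ci = criticality_index(predict_usability, baseline)
ranking = sorted(range(len(ci)), key=lambda i: ci[i], reverse=True)
```

With the illustrative weights above, the ranking simply mirrors the weight magnitudes; with a trained black-box model, the same perturbation loop exposes factor importance without inspecting the model internals.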
The web services provided by web-based information systems (WIS) have gained increasing importance in contemporary society. Users of WIS want to find information quickly and conveniently, yet many WIS are still too slow to be usable and fail to satisfy many of their users. Experts from computer science/information science, usability/human–computer interaction, and requirements engineering try to solve web-based information system design problems (Yang & Tang, 2003). For measuring service quality, the ServQual model (Parasuraman, Zeithaml, & Berry, 1988), typically administered with a 5-point semantic-distance scale (or alternatively a 7-point Likert scale), and its modification for web-based information systems (Li, Tan, & Xie, 2002) remain the most widely used approaches. ServQual provides a survey instrument that claims to assess service quality in any type of service organization (Parasuraman et al., 1988). Service quality is determined as the discrepancy between customers' expectations and perceptions, identifying the dimensions that represent the evaluative criteria customers use to assess service quality (Zeithaml, Parasuraman, & Berry, 1990). ServQual is used by a wide range of users, including academicians and practitioners (Mei, Dean, & White, 1999). However, ServQual has also been criticized in some studies (e.g. Babakus & Boller, 1992; Buttle, 1996; Carman, 1990; Cronin & Taylor, 1992, 1994; Teas, 1993) because the development of good-quality websites requires more sophisticated methods for design and assessment, and such development is fundamentally achieved through usability assessment studies (Frokjaer et al., 2000; Hornbaek, 2006; Li et al., 2002; Liu et al., 2003; Nikov et al., 2003; Oztekin et al., 2010; Sauro & Kindlund, 2005).
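The ServQual gap idea described above, service quality as the discrepancy between expectations and perceptions, can be sketched as follows. The dimension names, item groupings, and scores are invented for illustration; actual ServQual instruments use fixed item sets per dimension.

```python
# Illustrative ServQual-style gap computation: service quality per dimension is
# the mean of (perception - expectation) over that dimension's items.
from statistics import mean

# Hypothetical per-item scores on a 7-point scale.
expectations = {"reliability": [7, 6, 7], "responsiveness": [6, 6]}
perceptions  = {"reliability": [5, 5, 6], "responsiveness": [6, 5]}

def gap_scores(exp, perc):
    return {dim: mean(p - e for e, p in zip(exp[dim], perc[dim]))
            for dim in exp}

gaps = gap_scores(expectations, perceptions)
# Negative gaps indicate perceived quality below expectations.
```

A dimension with a strongly negative gap is, under this model, where service quality most falls short of what customers expect.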
To assess WIS quality, an enhanced version of ServQual, the web-based ServQual, with six dimensions measured by 28 checklist questions, was developed by Li et al. (2002). However, neither of these approaches proposes a quantitative model for assessing WIS quality. Building on ServQual, the WebQual approach evaluates user perceptions of WIS quality (Barnes & Vidgen, 2003) and turns qualitative customer assessments into quantitative metrics that support management decision-making. There are also many other usability questionnaires/checklists, such as QUIS (Norman & Shneiderman, 1989), SUMI (Kirakowski & Corbett, 1993), PUTQ (Lin, Choong, & Salvendy, 1997), PSSUQ (Lewis, 2002), and UseLearn (Oztekin et al., 2010). Usability refers to the extent to which a product can be used by specified users to achieve specified goals with efficiency, effectiveness, and satisfaction in a specified context of use (ISO 9241-11, 1998). It also stands for the capability to be used by humans easily and effectively, that is, how easy it is to find, understand, and use the information displayed on a website (Keevil, 1998), and for quality in use (Bevan, 1999). These definitions reveal an emerging need for a comprehensive methodology that measures the usability of web-based information systems by integrating quality- and usability-related measures, since usability and quality are anticipated to affect each other (Bevan, 1995, 1999). WebQual makes a significant attempt to include usability dimensions in the assessment process, but it does not describe its quality and usability measures in detail, and the names of the stated measures can be confusing: for example, one dimension is called usability, when in fact it is a combination of several dimensions from other checklist approaches.
Similarly, the service interaction dimension of WebQual is clearly a mixture of integration of communication (from ServQual) and suitability for individualization (from ISO 9241-10, 1996). Another ServQual-based approach is E-S-Qual, which assesses website quality in terms of profitability for the company (Parasuraman, Zeithaml, & Malhotra, 2005). Considering that most usability and quality assessment approaches have many overlapping checklist items, Oztekin, Nikov, and Zaim (2009) recently proposed a methodology, UWIS, that includes both quality and usability dimensions. UWIS extends ServQual to measure the usability of web-based information systems by incorporating the dialog principles for user interface design from the standard ISO 9241-10 (ISO 9241-10, 1996) and usability heuristics (Nielsen, 1994). Because UWIS is broadly applicable to both non-profit and profit-oriented web-based information systems, its checklist is also used in this study. Discussions on how to measure the quality of information systems have continued for several decades, first in the areas of ergonomics, ease of use, and human–computer interaction, and later in the area of usability. Recently, however, the discussion has returned to which measures of usability are suitable and how the relations between different measures should be understood (Hornbaek, 2006). To increase the meaningfulness and strategic influence of usability data, the entire construct of usability can be presented as a single dependent variable, a usability index, without sacrificing precision (Sauro & Kindlund, 2005). The usability index is a measure of how closely the features of a website match generally accepted usability guidelines (Keevil, 1998).
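The single-dependent-variable idea attributed to Sauro and Kindlund above can be sketched as standardizing each component measure and averaging the standardized scores per user. The component names and per-user values below are illustrative assumptions, not data from the paper.

```python
# Sketch of collapsing several usability measures into one index, in the spirit
# of Sauro and Kindlund (2005): z-score each component, then average per user.
from statistics import mean, stdev

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical per-user component measures (higher = better for each).
components = {
    "effectiveness": [0.9, 0.8, 1.0, 0.7],   # e.g. task completion rate
    "efficiency":    [0.6, 0.5, 0.9, 0.4],   # e.g. inverted task time
    "satisfaction":  [4.0, 3.5, 4.5, 3.0],   # e.g. questionnaire score
}

standardized = {k: zscores(v) for k, v in components.items()}
# Usability index per user: mean of that user's standardized components.
usability_index = [mean(vals) for vals in zip(*standardized.values())]
```

Standardizing first puts measures on incompatible scales (rates, seconds, Likert points) onto a common footing before they are averaged.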
This dependent variable can be explained by the checklist items, namely the independent variables, and hence their cause-and-effect relationship can be revealed. For example, if the button to change the password in a system is not visible enough (i.e. not located at a visible place on the webpage), it would take a user a long time to find it, select it, and change his/her password. This would decrease the efficiency of the system in terms of speed, and hence indirectly decrease the overall usability of the system as well.
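The cause-and-effect reading above can be illustrated with a least-squares fit of the usability index against a single checklist item score. The paper itself fits multiple linear regression and machine learning models over all checklist items at once; the univariate fit and the data below are invented purely for illustration.

```python
# Minimal ordinary-least-squares fit: usability index (dependent) against one
# checklist item score (independent). Data are illustrative, not from the paper.

def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx  # (slope, intercept)

item_scores = [1, 2, 3, 4, 5]           # e.g. visibility of the password button
usability   = [2.1, 2.9, 4.2, 4.8, 6.0]  # overall usability index per respondent

slope, intercept = ols_fit(item_scores, usability)
# A positive slope indicates the item contributes positively to usability.
```

In the multivariate setting the same logic applies per coefficient, which is what allows a checklist item's contribution to overall usability to be quantified.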
Conclusion
A decision support system (DSS) for evaluating the usability of web-based information systems was proposed and implemented in a university student information system with a sample of 179 students. The case study results showed that the DSS supports the identification of critical usability problems and hence the definition of relevant improvement strategies. The most critical usability improvement strategy was found to be including more optional control buttons in the system and clarifying them further. Conversely, the least important checklist item shows that the WIS is usable enough in terms of supplying automated or human e-mails to end-users.

The study is unique in that the described decision support system helps select the most important checklist items by considering their contribution to overall system usability through the criticality index. Additionally, the scarce resources of usability experts (e.g. time and money) can be used efficiently by taking the pseudo-Pareto analysis results into account, since they indicate where to stop in the usability evaluation and improvement process: these limited resources can be allocated only to the critical checklist items that have crucial impacts on usability. Moreover, the decision support system defined in this study was not specifically designed to test student information systems; it is a flexible DSS that can be applied to discover usability-related problems in any system.

Future research in this field can incorporate the partial least squares (PLS) technique, because it provides a clearer explanation of the cause-and-effect relationship between the input variables and the output variable. In addition, it can handle the effects of both observed/measured variables and the corresponding latent (unobserved/unmeasured) variables on overall usability in a step-by-step manner, and hence provide more granularity in the analysis.
Such granularity might be required to explain the process clearly to the top managers who take part in the usability improvement decision process. For example, a usability expert might first determine which of the checklist dimensions in Table 1, rather than the individual items, are more critical than the others as a high-level measure (namely reliability, controllability, responsiveness, and so on), and only then go deeper to determine the exact cause of an unsatisfactory usability score (for example RL1, C3, RES4, etc.). The concern that PLS assumes linear relationships between the dependent and independent variables can be overcome by deploying the nonlinear partial least squares (NLPLS) algorithm, which extends regular PLS with neural networks. This would make the decision support system as powerful as machine learning techniques (e.g. support vector machines, neural networks) that can capture nonlinearity, while simultaneously explaining the cause-and-effect relationships at the desired granularity.
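The pseudo-Pareto analysis mentioned in the conclusion can be sketched as ranking checklist items by criticality index and stopping once a chosen share of the total criticality is covered. The item labels echo those mentioned above (RL1, C3, RES4), but the index values and the 80% threshold are illustrative assumptions.

```python
# Hedged sketch of a pseudo-Pareto cutoff over checklist-item criticality
# indices: improve the top-ranked items until a chosen share of the total
# criticality is covered, then stop. All values below are invented.

def pareto_cutoff(items, threshold=0.8):
    """Return the smallest top-ranked subset covering `threshold` of the total."""
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(items.values())
    selected, cum = [], 0.0
    for name, score in ranked:
        selected.append(name)
        cum += score
        if cum / total >= threshold:
            break
    return selected

criticality = {"RL1": 0.35, "C3": 0.30, "RES4": 0.20, "A2": 0.10, "E5": 0.05}
focus = pareto_cutoff(criticality)  # items to prioritize for improvement
```

This is what lets limited expert time be spent only on the few items that account for most of the usability shortfall.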