I-TRUST: Investigating trust between users and agents in a multi-agent portfolio management system
|Article code||Publication year||English article||Persian translation||Word count|
|21563||2003||13-page PDF||Available on order||Not calculated|
Publisher: Elsevier - Science Direct
Journal: Electronic Commerce Research and Applications, Volume 2, Issue 4, Winter 2003, Pages 302–314
There is considerable research investigating trust among agents in multi-agent systems. However, the issue of trust between agents and users has rarely been reported in the literature. In this paper, we describe our experiences with I-TRUST, where trust is introduced as a relationship between clients and broker agents in terms of the amount of money clients are willing to give to these agents to invest on their behalf. The goals of broker agents are not only to maximize total revenue subject to clients’ risk preferences, but also to reinforce trust with their clients. To achieve this, broker agents must first elicit user models, both explicitly through questionnaires and implicitly through three games. Then, based on the initial user models, a broker agent will learn to invest and later update the user models when necessary. From the three experiments conducted in this study, we found that the control a client has over the autonomous system plays a significant role in trust building.
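The abstract operationalizes trust as the amount of money a client delegates to a broker agent each period. A minimal sketch of that idea follows; the specific update rule (delegation grows after gains and shrinks after losses, with risk-averse clients reacting more strongly to losses) is our own illustrative assumption, not the paper's actual mechanism.

```python
# Trust operationalized as delegated capital, updated per investment
# period. The update rule and class names are illustrative assumptions.

class Client:
    def __init__(self, initial_delegation: float, risk_preference: float):
        self.delegation = initial_delegation      # money given to the broker agent
        self.risk_preference = risk_preference    # 0 = risk-averse, 1 = risk-seeking

    def update_trust(self, period_return: float) -> float:
        """Adjust the delegated amount after one investment period.

        Positive returns raise the delegation (trust grows); negative
        returns shrink it (a form of punishment by the client).
        """
        # Assumed: risk-averse clients react more strongly to losses.
        sensitivity = 1.0 if period_return >= 0 else 2.0 - self.risk_preference
        self.delegation *= max(0.0, 1.0 + sensitivity * period_return)
        return self.delegation

client = Client(initial_delegation=10_000.0, risk_preference=0.3)
client.update_trust(0.05)    # a 5% gain raises the delegation by 5%
client.update_trust(-0.10)   # a 10% loss: this risk-averse client cuts back harder
```

The asymmetry between gains and losses reflects the abstract's point that delegation both rewards and punishes the agent, tying trust directly to observed performance.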
Trust is multi-dimensional: it concerns many different attributes, such as reliability, dependability, security, honesty, and competence, which may have to be addressed according to the environment in which it is specified. It has been studied extensively in multi-agent systems, where it is the ‘attitude an agent has with respect to the dependability/capabilities of some other agent (maybe itself)’. However, dealing with trust between humans and agents in a multi-agent e-commerce system is also important: in order to attract more clients to the services they provide, winning clients’ trust becomes central. As Kini and Choobineh put it, trust in a system involves ‘a belief that is influenced by the individual’s opinion about certain critical system features’. This definition highlights human trust towards agents in electronic commerce, which motivates our study in this paper. In particular, we address the degree of trust a client has in his/her broker agent to invest on his/her behalf: how can a client be sure that the broker agent will make sound judgments based upon his/her risk–return preference? To describe our approach, let us first look at the following scenario.
Conclusion (English)
7.1. Trust models in I-TRUST and MoTEC

I-TRUST differs from the trust model designed for a B2C domain, namely the MoTEC model. The MoTEC model is most suitable for one-shot consumer–vendor behavior, in that the various human factors affecting trust in the system contribute to a relatively short-term trust relationship. MoTEC trust components include pre-purchase knowledge (the vendor’s reputation), interface properties (familiarity and usability of the interface) and information content (for example, ‘transparency of the vendor’s openness with respect to business policies’). This kind of trust-facilitating user interface may affect clients’ trust in the services a vendor provides in the relatively short run. In the long run, however, the most enduring trust relationships come from the quality of the services (in our domain, for example, how reliable and profitable broker agents’ portfolio management strategies are, not how the interface affects clients). Thus, to build relatively long-term trust between clients and vendors, we are concerned with the quality of the services vendors can provide rather than merely with the interface. In our view, it is imperative for Internet vendors to be aware of, and equipped with, suitable techniques to attract users and build a long-term trust relationship with them.

7.2. Trust and reputation

We found that one possible way of promoting trust, in the case where agents have to consider their own interests, is to set up and maintain a reputation database similar to eBay’s (called the Feedback Forum), where both buyers’ and sellers’ past behavior, mostly in the form of aggregate feedback, can be stored. When users wonder whom they should trust more, they can browse the database themselves.
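Such an eBay-style feedback store can be sketched in a few lines; the class name, rating scale (+1/0/−1), and aggregation by simple summation are illustrative assumptions, not details from the paper or from eBay's actual implementation.

```python
# Sketch of a Feedback-Forum-style reputation database: each
# participant accumulates individual ratings, and prospective users
# browse the aggregate score. Names and scoring scheme are assumed.
from collections import defaultdict

class ReputationStore:
    def __init__(self):
        # participant id -> list of individual feedback ratings
        self._feedback = defaultdict(list)

    def record(self, participant: str, rating: int) -> None:
        """Store one piece of feedback: +1 positive, 0 neutral, -1 negative."""
        if rating not in (-1, 0, 1):
            raise ValueError("rating must be -1, 0 or +1")
        self._feedback[participant].append(rating)

    def aggregate(self, participant: str) -> int:
        """Net feedback score that users can browse before delegating."""
        return sum(self._feedback[participant])

store = ReputationStore()
store.record("agent_a", +1)
store.record("agent_a", +1)
store.record("agent_a", -1)
print(store.aggregate("agent_a"))  # net score: 1
```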
From another perspective, this is in essence a ‘word-of-mouth’ explicit reputation system, which has been shown to be effective in fostering trust among customers who are strangers to each other and who virtually exchange both positive and negative experiences with the services or products Internet vendors provide. Since we assumed in this study that each broker agent represents only one client, it does not seem necessary for users to exchange their feedback about the agents. In future work, when we investigate agents competing in their own interest and each agent is allowed to serve more than one client, this reputation-based mechanism might work well; users’ initial trust in agents might then be based on the agents’ reputation.

7.3. Dimensions of human–system trust

In the context of this paper, trust concerns two dimensions: dependability and competence. Another dimension of trust, reliability, is taken as an assumption for the system. However, many other aspects might contribute to trust building; for example, frequency of use and users’ knowledge also have an impact. In addition, we believe that users’ control over the autonomous system is of great importance. Accordingly, in I-TRUST we observed that users’ participation in the entire process (joint decision making with their agents), as well as their autonomy in delegating to their agents, are necessary to accelerate trust building. Furthermore, we noticed that users tend to assess the cost/loss of trusting their agents on a situation-by-situation basis; in particular, trust varies with the situation in which users’ own knowledge, current market information, etc. are assessed. This is similar to the studies by Muir, who proposed that trust is dynamic, and that users’ trust in a system changes with their own experiences of it.

7.4. Trust and punishment

One important issue associated with trust is punishment. In I-TRUST, clients can punish their broker agents by reducing the amount of money delegated to them in the next investment period. From the agents’ perspective, they try to avoid this not only by investing to the best of their expertise, but also by considering their clients’ risk preferences. For example, in a specific investment period an agent might consider buying $5000-worth of stock A after a careful estimation of market information, while its client disagrees and suggests buying only $2000-worth. In this case, the broker agent must sacrifice some of its expertise and follow, to some degree, its client’s advice, and thus invest, say, $2500-worth of stock A. Broker agents therefore try to balance their investing expertise against their client’s feedback. Results of the third experiment show that broker agents struggle to locate an optimal trade-off that retains clients’ trust.
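The expertise-versus-feedback compromise in the $5000/$2000/$2500 example above can be sketched as a simple weighted blend. The fixed weighting parameter is our own illustrative assumption; in the paper, agents learn this trade-off from client feedback rather than fixing it in advance.

```python
# Sketch of the expertise/feedback compromise: the agent proposes one
# investment amount, the client suggests another, and the agent settles
# on a blend. The weighting scheme is an illustrative assumption.

def compromise(agent_amount: float, client_amount: float,
               client_weight: float = 0.8) -> float:
    """Blend the agent's proposed investment with the client's suggestion.

    client_weight in [0, 1]: 1.0 means the agent fully follows the
    client's advice, 0.0 means it relies entirely on its own expertise.
    """
    return client_weight * client_amount + (1 - client_weight) * agent_amount

# With a weight of 5/6 the blend lands on the paper's $2500 example:
# the agent proposed $5000, the client suggested $2000.
print(compromise(5000, 2000, client_weight=5/6))
```

A learning agent would adjust `client_weight` over time, raising it when ignoring the client costs delegated money (the punishment mechanism) and lowering it when its own expertise reliably outperforms the client's suggestions.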