Download English ISI Article No. 7593
Article Title
A game theory based reputation mechanism to incentivize cooperation in wireless ad hoc networks
Article code: 7593
Publication year: 2010
Length: 14 pages (PDF)
Source

Publisher: Elsevier - ScienceDirect

Journal: Ad Hoc Networks, Volume 8, Issue 4, June 2010, Pages 416–429

Keywords
Wireless networks - Reputation mechanisms - Incentive schemes - Cooperation enforcement

Abstract

In wireless ad hoc networks, one way to incentivize nodes to forward other nodes’ packets is through the use of reputation mechanisms, where cooperation is induced by the threat of partial or total network disconnection if a node acts selfishly. The problem is that packet collisions and interference may sometimes make cooperative nodes appear selfish, generating unnecessary and unwanted punishments. Using a simple network model, we first study the performance of some proposed reputation strategies and then present a new mechanism called DARWIN (Distributed and Adaptive Reputation mechanism for Wireless ad hoc Networks), where we try to avoid retaliation situations after a node is falsely perceived as selfish, so that cooperation is restored quickly. Using game theory, we prove that our mechanism is robust to imperfect measurements, is collusion-resistant and can achieve full cooperation among nodes. Simulations are presented to complement our theoretical analysis and to evaluate the performance of our algorithm compared to other proposed reputation strategies.

Introduction

Wireless ad hoc networks consist of a set of self-configuring nodes that do not rely on any infrastructure to communicate with each other. To achieve this, a source communicates with a distant destination through intermediate nodes that act as relays. It is usually assumed that in such networks nodes are willing to cooperate in forwarding packets, but this assumption is not necessarily true when the nodes are not all under the control of a single authority. In these cases, there can be selfish nodes that want to maximize their own welfare without regard to social welfare, where we define a node’s welfare as the benefit of its actions minus the cost of its actions. In such scenarios, cooperation cannot be taken for granted and it is therefore necessary to develop mechanisms that allow cooperation to emerge even in the presence of selfish users.

Incentive mechanisms can be broadly divided into two categories: credit-exchange systems and reputation-based systems. In credit-exchange systems [2], [3], [4], [5], [6], [7] and [8], cooperation is induced by means of payments received every time a node acts as a relay and forwards a packet; such credit can later be used by these nodes to encourage others to cooperate. To guarantee that nodes do not counterfeit payments, some strategies rely on tamper-proof hardware to store credit and enforce checks and balances, but this may hinder their ability to find widespread acceptance; other strategies rely on the presence of an off-line central trusted authority, which may be hard to guarantee in some scenarios. In reputation-based strategies [9], [10], [11], [12], [13], [14], [15], [16] and [17], a node’s behavior is measured by other nodes in the network, and selfish behavior is discouraged by the threat of partial or total network disconnection. The problem is that, due to interference and collisions, it is not always possible to perfectly estimate how a node behaves, so cooperative nodes are sometimes perceived as being selfish and punished accordingly; such scenarios can lead to retaliation situations that may potentially decrease the throughput of cooperative nodes.

The contributions of this paper are twofold. First, we use a simple game-theoretic network model to study the robustness of some previously proposed reputation-based strategies. We show that some strategies are not self-enforcing, meaning that there is an incentive to deviate from the expected behavior, while others punish selfish behavior at the expense of the throughput of cooperative nodes, potentially leading to complete network disconnection due to retaliation. Second, we propose a new strategy called DARWIN (Distributed and Adaptive Reputation mechanism for WIreless ad hoc Networks) that effectively detects and punishes selfish behavior. We derive conditions under which no node can gain from deviating from our strategy, prove that full cooperation can emerge among nodes, and show that our scheme is collusion-resistant. Simulations are also presented to complement the theoretical contributions. Our results show that the throughput achieved with DARWIN is higher than with any of the other strategies studied, and that DARWIN can be implemented with low overhead.

The rest of the paper is organized as follows. Section 2 introduces some concepts from game theory that are used in this paper. In Section 3 we define the network model, which is then used in Section 4 to analyze some of the previously proposed strategies. We introduce our strategy in Section 5, analyze the conditions under which cooperation can emerge, study its performance, and show that it is relatively insensitive to parameter choices; the impact of collusion among nodes is also studied there. Section 6 presents the results of a simulation-based study of DARWIN and how it compares to other reputation-based strategies. Section 7 presents an overview of related work. Finally, Section 8 presents the conclusions.
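To make the retaliation problem concrete, the sketch below simulates a repeated two-node packet-forwarding game with imperfect monitoring. Everything in it (the parameter values, the grim and tolerant strategy names, and the tolerance rule) is an illustrative assumption for this sketch; it is neither DARWIN nor any of the strategies analyzed in the paper. An unforgiving trigger strategy typically collapses into mutual punishment after the first falsely perceived drop, while a strategy that tolerates the drop rate explainable by measurement noise typically keeps cooperation, and hence welfare, high.

```python
import random

# Illustrative parameters; all values are assumptions for this sketch,
# not taken from the paper.
ROUNDS = 20_000
FORWARD_COST = 0.2       # cost a relay pays to forward one packet
DELIVERY_BENEFIT = 1.0   # benefit a source gets when its packet is delivered
FALSE_PERCEPTION = 0.05  # chance a forwarded packet is still *perceived*
                         # as dropped (collisions, interference)


def grim(drops, requests):
    """Unforgiving trigger: stop forwarding after any perceived drop."""
    return drops == 0


def tolerant(drops, requests):
    """Keep forwarding unless perceived drops clearly exceed what
    measurement noise alone would explain, plus a small slack."""
    return drops <= 2 * FALSE_PERCEPTION * requests + 5


def play(strategies, rounds=ROUNDS, seed=1):
    """Repeated two-node packet-forwarding game with imperfect monitoring.

    Each round, both nodes ask each other to relay one packet.  A node's
    welfare is the benefit of its own delivered packets minus the cost of
    the packets it forwards (benefit minus cost, as defined above).
    """
    rng = random.Random(seed)
    welfare = [0.0, 0.0]
    requests = [0, 0]         # packets node k has asked the other to relay
    perceived_drops = [0, 0]  # drops node k has blamed on the other node

    for _ in range(rounds):
        for src, relay in ((0, 1), (1, 0)):
            requests[src] += 1
            # The relay reciprocates: it forwards only while the source's
            # behaviour, as perceived by the relay, still looks cooperative.
            if strategies[relay](perceived_drops[relay], requests[relay]):
                welfare[relay] -= FORWARD_COST
                perceived_as_dropped = rng.random() < FALSE_PERCEPTION
            else:
                perceived_as_dropped = True
            if perceived_as_dropped:
                perceived_drops[src] += 1   # the source blames the relay
            else:
                welfare[src] += DELIVERY_BENEFIT
    return welfare


if __name__ == "__main__":
    print("grim vs grim:        ", play((grim, grim)))
    print("tolerant vs tolerant:", play((tolerant, tolerant)))
```

The slack term in the tolerant rule is what keeps one unlucky early measurement from triggering a punishment spiral; without it, even a forgiving threshold on the raw drop rate can deadlock both nodes in mutual retaliation.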

Conclusion

In this paper we have studied how reputation-based mechanisms can help cooperation emerge among selfish users. We first analyzed the properties of previously proposed schemes and, with the insight gained from that analysis, proposed a new mechanism called DARWIN. We showed that DARWIN is robust to imperfect measurements, is collusion-resistant and is able to achieve full cooperation. We also showed that the algorithm is relatively insensitive to parameter choices. Simulation results complement our theoretical analysis and show that DARWIN can achieve a higher throughput than any of the other strategies we studied; furthermore, we showed that DARWIN can be implemented with low overhead.

It must be noted that the definition of DARWIN assumes that nodes share their perceived dropping probabilities with each other. This assumption is made to facilitate the theoretical analysis by isolating a pair of nodes, but an implementation requires a mechanism to guarantee that the reputation scheme still works even if a node lies. To do that, we must rely on other cooperative nodes to report the actual perceived dropping probability of a node in order to minimize the impact of liars, and the average of the received values is then used in the reputation scheme. In [28] and [29] it is proved that if the connectivity of the network is at least 2f+1, then using linear iterations all nodes can share some initial values and compute an arbitrary function of them even when there are up to f malicious nodes in the network. In principle, such a scheme could be used in our problem. However, studying it in the context of wireless networks is an interesting challenge and a good topic for future research.
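As a rough illustration of that aggregation step, the sketch below combines dropping-probability reports received about one node. The plain average follows the idea stated above; discarding the f largest and f smallest reports before averaging is one common way to bound the influence of up to f lying neighbours, and is an assumption of this sketch rather than the construction of [28] and [29]. The function name aggregate_reports and the example values are hypothetical.

```python
from statistics import mean

def aggregate_reports(reports, f=0):
    """Combine perceived-dropping-probability reports about a single node.

    reports -- values in [0, 1] received from neighbours.
    f       -- assumed upper bound on the number of lying neighbours;
               with f > 0 the f largest and f smallest reports are dropped
               before averaging (trimmed mean), which bounds a liar's
               influence as long as more than 2*f reports are available.
    """
    values = sorted(reports)
    if f:
        if len(values) <= 2 * f:
            raise ValueError("need more than 2*f reports to trim f liars")
        values = values[f:len(values) - f]
    return mean(values)

# Three honest neighbours report about 0.05; one liar reports 0.9.
print(aggregate_reports([0.04, 0.05, 0.06, 0.90]))       # plain average: 0.2625, skewed by the liar
print(aggregate_reports([0.04, 0.05, 0.06, 0.90], f=1))  # trimmed mean: 0.055
```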