Download English ISI Article No. 7595
Article Title

An agent-based approach equipped with game theory: Strategic collaboration among learning agents during a dynamic market change in the California electricity crisis
Article Code: 7595
Publication Year: 2010
Length: 16 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Energy Economics, Volume 32, Issue 5, September 2010, Pages 1009–1024

Keywords

Agent-based approach, Electricity market, Partial reinforcement learning

Abstract

An agent-based approach is a numerical (computer-intensive) method for exploring the complex characteristics and dynamics of microeconomics. Using the agent-based approach, this study investigates the learning speed of traders and their strategic collaboration during a dynamic change in the electricity market. An example of such a market change can be found in the California electricity crisis (2000–2001). This study incorporates the concept of partial reinforcement learning into trading agents and finds that they have two learning components: learning from a dynamic market change and learning from collaboration with other traders. The learning speed of traders slows when a large fluctuation occurs in the power exchange market. The learning speed depends upon the type of traders, their learning capabilities, and the fluctuation of market fundamentals. The degree of collaboration among traders gradually declines during the electricity crisis. The strategic collaboration among traders is examined by a large simulator equipped with multiple learning capabilities.

Introduction

Game theory has long served as a branch of applied mathematics used in the social sciences (in particular, economics). Its important feature is that it attempts to describe mathematically human behavior in a competitive situation where the payoff of one player (Player I) in making a choice depends upon the choice of the other player (Player II). Originally, game theory was developed to analyze types of competition in which one player gains at the other's expense. The theory has since been extended to treat a wide range of interactions among players. More recently, game theory has served as a conceptual and mathematical foundation for exploring the rational side of human decision making (Camerer, 1997). Game theory has also played an important role in computer science, where researchers have used it to model interactive computations in the analysis of complex systems. In particular, game theory provides a theoretical basis for the field of multi-agent systems, in which many different types of software agents direct themselves toward their own purposes (e.g., a reward).

An agent-based approach incorporating the conceptual framework of game theory has been applied to investigate various complex business systems. For example, Samuelson (2005) discussed applications of the agent-based approach to complex business systems. Makowski et al. (2005) assembled seventeen articles, all of which discussed various linkages between the agent-based approach and complex business systems from the perspective of optimization. An important feature of the agent-based approach in business applications is its role in modeling and simulation. The structure of a complex system is modeled with different types of agents equipped with multiple learning capabilities. Their learning processes are usually characterized by adaptive behaviors through which agents determine their decisions by interacting with an environment. The proposed model is expressed numerically and examined by a large simulation study. Incorporating a problem structure into the modeling process, along with a simulation study, provides a new type of numerical capability for handling a large, complex social system in which many components interact. Consequently, the agent-based approach is gradually being recognized as a promising new approach among researchers in the social sciences.

The agent-based approach has recently been applied to investigate dynamic changes in wholesale power trading. For example, Bagnall (2000), Bunn and Oliveira (2001), Jacobs (1997), and Morikiyo and Goto (2004) developed multi-agent adaptive systems that incorporated a dynamic bidding process in power trading. In addition to such previous studies, Sueyoshi and Tadiparthi (2005, 2007, 2008a, 2008b, 2008c) and Sueyoshi (2010) proposed the use of the agent-based approach, incorporating reinforcement learning and related learning algorithms into various agents for power trading. These studies applied the agent-based approach to investigate the dynamic change of bidding strategies in US wholesale power exchange markets. The first study (Sueyoshi and Tadiparthi, 2005) discussed how to incorporate self-learning capabilities into the agent-based approach. The second study (Sueyoshi and Tadiparthi, 2007) extended the first by incorporating two groups of adaptive behaviors into agents.
One of the two groups incorporated multiple learning capabilities into agents; the other incorporated limited learning capability. To further extend the applicability of the proposed approach, the third study (Sueyoshi and Tadiparthi, 2008a) considered various influences of a transmission line limit on the wholesale price of electricity. By incorporating the line limit between power exchange market zones, the third study enhanced the practicality and applicability of the proposed approach. The fourth study (Sueyoshi and Tadiparthi, 2008b) developed an agent-based simulator, referred to as the "Multi-Agent Intelligent Simulator (MAIS)," based upon the adaptive behaviors and algorithms explored in the previous studies. As an important application of MAIS, the fifth study (Sueyoshi and Tadiparthi, 2008c) applied the simulator to investigate why the California electricity crisis occurred in wholesale power trading. Finally, Sueyoshi (2010) compared the computational results of MAIS with those of well-known economic studies on the California electricity crisis (i.e., Borenstein et al., 2002 and Joskow and Kahn, 2002).

Differences between this study and the previous studies: All the previous studies opened up a new numerical method for investigating the dynamic change of various complex business systems, such as bidding strategies in power exchange markets. That was indeed an important contribution. However, no previous study investigated how trading agents collaborated with each other in a power exchange market or how such cooperation influenced their learning speeds. The previous studies implicitly assumed that each agent had no communication capability for interacting with other agents; each agent attempted to maximize only its own reward from the power exchange market. Such an assumption is often unrealistic when investigating the dynamic change of a power exchange market, because real traders constantly communicate with each other in their bidding processes. Therefore, the purpose of this study is to investigate the strategic collaboration among agents and its influence on their learning speed during the dynamic change of a power exchange market. This type of research was not sufficiently explored in any of the previous studies that applied the agent-based approach to examine the dynamic change of power trading.

To attain this research objective, this study restructures the agent-based approach using "partial reinforcement learning" (Bereby-Meyer and Roth, 2006: BM&R, hereafter). The reinforcement learning incorporated in the previous studies assumed that an adaptive agent learned from every trading experience. Partial reinforcement learning extends the concept by incorporating both the learning speed of agents and a limited number of learning chances in which they obtain rewards. For example, the learning of a trading agent slows when a market fluctuation occurs; hence, the learning speed of an agent is important in investigating its adaptive behavior. Furthermore, the sustainability of collaboration among agents becomes important in investigating the bidding strategy of adaptive agents. Moreover, an agent cannot always win in a power exchange market: it may win and obtain a reward in some trades but lose in others. That is the reality of power trading.
Thus, the adaptive behavior of an agent in power trading is characterized by partial reinforcement learning, rather than the reinforcement learning used in the previous studies (a minimal illustrative sketch of this learning rule is given at the end of this section). As a result of this extension, the agent-based approach proposed in this study may capture the reality of power trading more closely than the previous studies.

The remainder of this article is organized as follows. The next section reviews the underlying economic concepts and hypotheses examined in this study. Section 3 describes the California wholesale power exchange market. This study examines the collaborative behavior of power traders during the California electricity crisis because the market fluctuated drastically during the crisis period; the electricity crisis is a good example of such a dynamic market change. Section 4 discusses the adaptive behavior of trading agents and the algorithms incorporated in the proposed simulator (MAIS). Section 5 applies the proposed simulator to a data set on the California power exchange market before and during the crisis period, examining the economic assertions related to the learning speed of agents and cooperation among them. Section 6 summarizes this research along with future extensions.
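To make the notion of partial reinforcement learning more concrete, the following Python sketch shows one common way such an adaptive bidding agent can be written: a Roth-Erev style propensity update in which only winning trades yield a reward, so propensities merely decay when the agent loses and learning slows as wins become rare. This is a minimal illustration under assumed parameters (the bid levels, the recency parameter phi, and the experimentation parameter eps are hypothetical), not the actual MAIS implementation.

import random


class PartialRLAgent:
    """Sketch of a bidding agent with Roth-Erev style partial reinforcement
    learning. All parameter values and bid levels are illustrative
    assumptions, not the settings used in MAIS."""

    def __init__(self, bid_levels, phi=0.1, eps=0.05):
        self.bid_levels = bid_levels        # discrete bid prices the agent may submit
        self.phi = phi                      # recency (forgetting) parameter: governs learning speed
        self.eps = eps                      # experimentation: spillover to non-chosen bids
        self.q = [1.0] * len(bid_levels)    # initial propensities, one per bid level

    def choose_bid(self):
        """Pick a bid index with probability proportional to its propensity."""
        return random.choices(range(len(self.q)), weights=self.q)[0]

    def update(self, chosen, reward):
        """Partial reinforcement: only a winning trade (reward > 0) reinforces
        the chosen bid; a losing trade gives no reward, so the propensities
        only decay, and learning slows when wins become rare."""
        n = len(self.q)
        for j in range(n):
            gain = reward * (1 - self.eps) if j == chosen else reward * self.eps / (n - 1)
            self.q[j] = (1 - self.phi) * self.q[j] + gain


# Illustrative supergame: repeated rounds against a crude market environment.
# The agent wins only when its bid falls below a fluctuating clearing price
# (all numbers are hypothetical).
if __name__ == "__main__":
    agent = PartialRLAgent(bid_levels=[20.0, 40.0, 60.0, 80.0])
    for day in range(1000):
        i = agent.choose_bid()
        clearing_price = random.gauss(55.0, 10.0)     # market fundamental with fluctuation
        won = agent.bid_levels[i] <= clearing_price   # bid accepted if below the clearing price
        reward = (clearing_price - agent.bid_levels[i]) if won else 0.0
        agent.update(i, reward)

In this sketch, phi plays the role of the learning-speed parameter: the smaller the share of rounds with a positive reward, the more the propensities are dominated by decay, which mirrors the slower learning observed during large market fluctuations.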

Conclusion

The previous work of BM&R (2006) explored the psychological aspect of human game-based learning in economics. Their study examined various assertions related to the partial reinforcement learning of human decision making, and their claims are valid and useful for investigating economic activities in games. To document how practical their claims are from the perspective of computer science, this study has numerically examined the collective behaviors of traders before and during the California electricity crisis. The numerical findings obtained from the proposed simulator are summarized as follows.

First, the learning speed of agents was influenced by a dynamic change of market fundamentals. The learning speed of all agents during the crisis became slower than before the crisis. The adaptive behavior of Type II agents was slower than that of Type I agents because Type I agents were more adaptive. Second, large agents of Type I adjusted slowly to a market change in electricity because they held market power in the electricity exchange market. Large agents of Type II also adjusted slowly to the market change, but their adjustment rate was less than that of Type I. Third, generators equipped with Type I learning capabilities could improve their winning probabilities by collaborating with other generators during the crisis. The two collaboration strategies (A and B) were more effective than the individual strategy (no collaboration) in enhancing winning probabilities. Furthermore, collaboration strategy A was more effective for generators than strategy B; in contrast, strategy B was more effective for wholesalers than strategy A. Finally, all agents learned to collaborate more slowly during the crisis. Agents learned to collaborate in the early period of the supergame, but their collaboration gradually broke down as the supergame progressed. The breakdown affected wholesalers more seriously than generators; most wholesalers did not stay in the same team throughout the repeated supergame. The average existence probability of Strategy B was less than that of Strategy A because agents under Strategy B needed to collaborate with others more carefully than those under Strategy A.

This study is not methodologically perfect; the research has several drawbacks that need to be overcome in the future. Three research agendas deserve mention as future extensions. First, this study used Type I and Type II adaptive behaviors. Agents in these two groups do not exhaust all possible adaptive behaviors of traders; many other types exist, and this study does not examine robustness over every possible alternative set of behavioral assumptions. The results obtained here may therefore be sensitive to the model selected for the agents, so other behavioral models for power trading agents need further investigation. Second, this study does not sufficiently explore all possible collaboration strategies among agents; only Strategies A and B were considered. Hence, strategic collaboration under current deregulation policies for electric utilities needs to be explored. That is another important future extension of this study. Third, the empirical results obtained in this study are limited to the data set related to the California electricity crisis.
Therefore, we need to extend this work by applying the proposed approach to other data sets, such as CO2 emission trading. Such a research effort will enhance the reputation and practicality of the proposed agent-based approach, which can explore various aspects of business complexity (e.g., traders' bidding strategies and their collaboration). Finally, it is hoped that this study contributes to the field of agent-based approaches for wholesale power trading. We look forward to the future extensions discussed in this study.