Genetic learning as an explanation of stylized facts of foreign exchange markets
|Article code||Publication year||Length of English article||Persian translation|
|14960||2005||28-page PDF||available to order|
The English article contains approximately 12,950 words.
Publisher: Elsevier – Science Direct
Journal : Journal of Mathematical Economics, Volume 41, Issues 1–2, February 2005, Pages 169–196
This paper revisits the Kareken–Wallace model of exchange rate formation in a two-country overlapping generations world. Following the seminal paper by Arifovic [Journal of Political Economy 104 (1996) 510] we investigate a dynamic version of the model in which agents’ decision rules are updated using genetic algorithms. Our main interest is in whether the equilibrium dynamics resulting from this learning process helps to explain the main stylized facts of free-floating exchange rates (unit roots in levels together with fat tails in returns and volatility clustering). Our time series analysis of simulated data indicates that for particular parameterizations, the characteristics of the exchange rate dynamics are, in fact, very similar to those of empirical data. The similarity appears to be quite insensitive with respect to some of the ingredients of the genetic algorithm (i.e. utility-based versus rank-based or tournament selection, binary or real coding). However, the appearance (or not) of realistic time series characteristics depends crucially on the mutation probability (which should be low) and the number of agents (not more than about 1000). With a larger population, this collective learning dynamics loses its realistic appearance and instead exhibits regular periodic oscillations of the agents’ choice variables.
Foreign exchange markets as well as other financial markets are characterized by a number of striking ubiquitous time series features. Most prominently, (log) exchange rates seem to be non-stationary while their first differences are stationary. More precisely, unit root tests are typically unable to reject the null hypothesis of a first-order autoregressive process with a coefficient equal to unity. This finding squares with the well-known result of Meese and Rogoff (1983) that random walk forecasts produce a lower mean-squared error in out-of-sample prediction than reduced-form structural models of macroeconomic fundamentals. It has been argued that these findings can be explained by speculative efficiency of foreign exchange markets, which simply means one interprets the foreign exchange market as an informationally efficient market in the sense of the Efficient Market Hypothesis (cf. Bilson, 1981). While from this perspective the unit root property may not be viewed as a conundrum, other well-known features have defied straightforward explanations until recently. The most pervasive ones are the fat-tail property of relative price changes and the clustering of volatility in these time series. Traces of these features are easily recognizable in all records of high-frequency data (probably up to weekly frequency) of foreign exchange markets (to our knowledge, without any known exception). The fat-tail property implies that the unconditional distribution of daily returns (as well as those of higher and somewhat lower frequency) has more probability mass in the tails and the center than the standard Normal distribution. This also means that extreme changes occur more often than would be expected under the assumption of Normality of relative daily price changes. Volatility clustering means that periods of quiescence and turbulence tend to cluster together. 
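The two properties described above can be quantified directly from a return series: excess kurtosis measures fat tails, and the autocorrelation of squared returns measures volatility clustering. A minimal stdlib-only Python sketch (the function names are ours, for illustration; they are not from the paper):

```python
def excess_kurtosis(returns):
    """Excess kurtosis: positive values indicate fatter tails than the Normal
    distribution, for which this statistic is 0."""
    n = len(returns)
    mean = sum(returns) / n
    m2 = sum((r - mean) ** 2 for r in returns) / n  # second central moment
    m4 = sum((r - mean) ** 4 for r in returns) / n  # fourth central moment
    return m4 / m2 ** 2 - 3.0

def acf(series, lag):
    """Sample autocorrelation at a given lag."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    num = sum(dev[t] * dev[t + lag] for t in range(n - lag))
    den = sum(d * d for d in dev)
    return num / den

def volatility_acf(returns, lag):
    """Autocorrelation of squared returns; persistently positive values
    across lags indicate volatility clustering."""
    return acf([r * r for r in returns], lag)
```

For daily exchange rate returns, fat tails would show up as a clearly positive excess kurtosis, and volatility clustering as slowly decaying, positive values of `volatility_acf` over many lags.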
Hence, the volatility (conditional variance) of exchange rate changes is not homogeneous over time, but is itself subject to temporal variation. Explanations of these stylized facts have been elusive until very recently. Perhaps, the silence of economic theory on this issue is not too surprising given that the above regularities are features of time series as a whole and, hence, could only be explained by dynamic models of the evolution of the trading process in the pertinent market. From the viewpoint of informational efficiency, the characteristics of returns would, of course, have to be explained by similar characteristics of the news arrival process, but due to the unobservability of the latter, this hypothesis can hardly be subjected to econometric scrutiny. As an alternative, some authors have recently argued that fat tails and clustered volatility can be obtained as a result of interactions of heterogeneous economic agents. Examples of this emergent literature include Lux and Marchesi (1999, 2000), Chen et al. (2001), Kirman and Teyssière (2002), Gaunersdorfer and Hommes (2000), Chiarella and He (2001), Iori (2002) and Bornholdt (2001). Lux and Marchesi, Gaunersdorfer and Hommes, and Chiarella and He have models of fundamentalist–chartist interaction in financial markets which give rise to realistic behavior of the resulting time series (in terms of the above stylized facts). In Lux and Marchesi and Gaunersdorfer and Hommes, the authors try to provide some hints of general mechanisms that could generate these time series properties irrespective of the details of their exemplary models. In the former case, it is a critical behavior of the dynamics in the vicinity of a continuum of equilibria with an indeterminate composition of the population in terms of strategies pursued by individuals. Gaunersdorfer and Hommes get similar dynamics from a model with co-existing attractors in which noise leads to switches between different states.
Still different mechanisms prevail in Iori (2002) and Bornholdt (2001) who use lattice-based structures for modeling the interactions among traders. Interestingly, a recent paper by Arifovic and Gencay (2000) on an artificial currency market with genetic learning of strategies also suggests emergence of realistic features of the resulting exchange rate dynamics (cf. Fig. 1). However, they do not provide a detailed analysis concerning the above properties. One of the aims of this paper is to fill this gap. In particular, we will try to quantitatively assess the degree of fat-tailedness and volatility clustering this model generates. We are also interested in the sensitivity of these quantitative measures with respect to key parameters of the model. To get an impression of the sensitivity with respect to parameter variations, we will try to figure out how the time series properties depend on the genetic algorithm parameters and the number of agents populating the market (as will probably become clear in the presentation of the model, the values of the few economic variables of the model are less important in this respect). We then relate our findings to those obtained for other models of artificial financial markets and try to provide an explanation for the crucial importance of the number of individuals for the qualitative outcome of the model.
Fig. 1. A typical ‘realistic’ series of returns from a simulated economy with a binary-coded GA population of 100 agents. For economic parameters, see main text. GA parameters are: pmut = 0.01 and pcross = 0.6.
When investigating the sensitivity of the results, we are particularly interested in (i) the sensitivity with respect to the details of the learning dynamics, and (ii) the influence of the number of agents.
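The GA parameters quoted in the caption of Fig. 1 (pmut = 0.01, pcross = 0.6) refer to the standard bitwise mutation and one-point crossover operators for binary-coded chromosomes. The following is an illustrative sketch of these two operators, not the authors' implementation:

```python
import random

def mutate(chromosome, pmut, rng=random):
    """Flip each bit independently with probability pmut (e.g. pmut = 0.01)."""
    return [1 - b if rng.random() < pmut else b for b in chromosome]

def one_point_crossover(parent_a, parent_b, pcross, rng=random):
    """With probability pcross (e.g. 0.6), swap the tails of the two parents
    at a randomly chosen cut point; otherwise return copies of the parents."""
    if rng.random() >= pcross:
        return parent_a[:], parent_b[:]
    cut = rng.randrange(1, len(parent_a))
    return (parent_a[:cut] + parent_b[cut:],
            parent_b[:cut] + parent_a[cut:])
```

A low mutation probability keeps the population close to the currently selected strategies, which, as the paper argues, is one of the two conditions for realistic output.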
Sensitivity with respect to the GA design is investigated by using different implementations of the typical reproduction, cross-over and selection operators, varying the parameters used for these operators, and applying real-coded GAs besides the traditional binary-coded ones. As far as we can see, this is the first application of real-coded GAs in economics. Since there are no rigorous guidelines for the choice of one particular form of encoding, such a sensitivity analysis gives some insight into the generality of results obtained from GA learning and, in fact, it turned out that results were quite robust with respect to the coding scheme. Our interest in the effects of the size of the market derives from some puzzling earlier findings. Namely, a number of studies have revealed that existing multi-agent models of financial markets lose their realistic time series properties when increasing the number of agents (Egenter et al., 1999, Yeh, 2001 and Challet and Marsili, 2002). Since published work on artificial markets with GA learning has used only a very limited number of agents, typically below 100, it seems worthwhile to explore the behavior of larger economies. Apart from our interest in explaining stylized facts, the present paper also fits into the broader framework of evolutionary models of financial markets. Like most of the above references, we consider adaptive choice of strategies by our agents governed by some kind of fitness criterion. Here fitness is determined by the utility obtained by agents pursuing a certain strategy. Alternative fitness criteria in the literature include the performance of some predictors on which a trading strategy depends or its (short-term) profitability. Typical examples of this literature are the ‘adaptive belief systems’ of Brock and Hommes (1997) and Gaunersdorfer and Hommes (2000).
However, our set-up differs from theirs in various aspects: (i) instead of an isolated financial market we consider a two-country general equilibrium framework, (ii) instead of modeling the evolutionary strategy choice via a discrete choice framework we apply genetic algorithms as an evolutionary algorithm for behavioral adaptations, and (iii) we allow for as many strategies as can be formulated with a given GA design (i.e. a large number or continuum, see below) instead of focusing on typical examples like chartist–fundamentalist interaction. These differences notwithstanding, our aim of investigating double limits of large populations operating within a large strategy space, is, in fact, very similar to the analysis of large type limits in Brock et al. (2003). Another related strand of evolutionary finance literature looks at the development of wealth attained by different competing strategies instead of allowing agents to switch between strategies based on a comparison of their performance in the short or medium run. Originating from Blume and Easley’s (1992) paper, the ensuing literature on the ‘market selection hypothesis’ has come out with some stark results: in particular, it has been shown that in a conventional financial market with many agents, the highest growth in wealth is realized by agents whose saving rate is at least as high as the market saving rate and whose investment shares in different assets are equal to the expected relative pay-offs of these assets. Under uncertain subjective beliefs about pay-offs, Bayesian learners dominate other types of adaptive behavior (Blume and Easley, 1992). While Blume and Easley had shown the dominance of the ‘betting your belief’ strategy in the case of Arrow securities, Hens and Schenk-Hoppé (2003) have been able to generalize this result for arbitrary pay-off structures. They also show that CAPM is able to mimic the optimal portfolio rule.
The result on the dominance of Bayesian learners is sharpened by Sandroni (2003), who shows that Bayesian learning with a uniform prior dominates maximum likelihood models when available information is limited. In contrast to the first strand of literature, the ‘market selection hypothesis’ deals with the success of strategies, not agents. However, survey studies show that investors often switch between different strategies and that their choice of trading strategy is governed by short-run success criteria (short-termism). It would, therefore, be worthwhile to consider the endogenous development of agents’ wealth in models allowing for switching between strategies. A first attempt at combining wealth dynamics and switching between strategies is provided by Chiarella and He (2002). The rest of the paper proceeds in the following steps: Section 2 will introduce the underlying model of the foreign exchange market, the well-known Kareken–Wallace two-country overlapping generations model. Section 3 gives details on the genetic algorithms which we apply to model the learning of our agents. In Section 4, we review the statistics used for assessing how realistic the model’s output is. Section 5 presents the results of extensive Monte Carlo work, and Section 6 tries to provide an explanation for the surprising behavior that we find in the case of a very large population. Section 7 concludes.
Conclusion
Elaborating on the GA version of the Kareken–Wallace model introduced by Arifovic (1996) and Arifovic and Gencay (2000), we have analyzed both the potential and the limitations for this type of artificial open economy to generate realistic time series properties. As it turns out, the model can generate time series which very closely mimic the statistical characteristics of empirical data. The mechanism responsible for the emergence of these interesting dynamics seems to be similar to the one analyzed within a different context by Lux and Marchesi (1999): the model has a continuum of equilibria with an indeterminate distribution of strategies among agents (as has been argued above, any distribution of the f_i would be admissible in equilibrium). With the stochasticity of the genetic process, there will always be distortions preventing the system from settling at any particular equilibrium. Because of the evolutionary instability of any distribution of strategies these random distortions will evoke self-amplifying tendencies which produce large price changes (fat tails) and volatility clustering. However, we also find that a small probability of mutation and a small number of agents are needed to get this realistic output for the exchange rate. With a large population, the destabilizing tendencies are so strong that the crucial choice variable, f_i, bounces back and forth between the corners of the admissible parameter space. This applies to both binary and real-coded GAs. While the requirement of small mutation rates might be considered to be plausible and not too restrictive, having to restrict the population size to numbers below, say, N = 1000 is much more cumbersome. Real markets (in particular, the world-wide market for foreign exchange) surely have more participants, so that N < 1000 seems an unrealistic requirement. However, this disappointing finding is shared by other multi-agent models (cf. Egenter et al., 1999, Yeh, 2001 and Challet and Marsili, 2002).
Essentially, with high N, a law of large numbers becomes effective even in models with a large number of available strategies and the randomness from the interaction between the microscopic choices of strategies vanishes. While in certain models, prices converge to fundamental values in the large economy limit (Egenter et al., 1999), the absence of fundamentals in the Kareken–Wallace model appears to be responsible for the oscillations between extreme choices. How could one overcome these uncomfortable findings and save the ‘nice’ results obtained with smaller populations? One possibility would be to allow for more coherence among individuals via social sharing of information. Allowing for groups of agents to form, we would get a smaller effective number of agents. As an alternative, endogenous development of wealth could lead to some agents exerting more influence on the market outcome than others (of course, this feature would be particularly difficult to incorporate into the present simple model). This would presumably also change the outcome in a way that differs from the atomistic case analyzed above. Exploring these avenues is left for future research.