Testing for Weak Form Efficiency on the Toronto Stock Exchange
Article Code | Publication Year | English Article Length |
---|---|---|
13252 | 2011 | 31-page PDF |
Publisher: Elsevier - Science Direct
Journal: Journal of Empirical Finance, Volume 18, Issue 4, September 2011, Pages 661–691
English Abstract
In order to test for weak form efficiency in a market, a vast pool of individual stocks must be analyzed rather than a stock market index. In this paper, a model-based bootstrap is used to generate a series of simulated trials, and a modified chart pattern recognition algorithm is applied to all stocks listed on the Toronto Stock Exchange (TSX). The number of patterns detected in the original price series is compared with the number of patterns found in the simulated series. Because the simulated price paths eliminate the specific time dependencies present in the real data, their price changes are purely random. Patterns, if consistently identified, carry information that adds value to the investment process; however, this informativeness does not guarantee profitability. Conclusions are drawn on the relative efficiency of some sectors of the economy. Although the null hypothesis of weak form efficiency on the TSX cannot be rejected, some sectors of the Canadian economy appear to be less efficient than others. In addition, pattern frequencies appear to be negatively dependent on two moments of the return distribution, variance and kurtosis.
English Introduction
Technical analysis is a financial market technique that claims the ability to forecast the future direction of security prices through the study of past market data, primarily price and volume. Technical analysts may employ models and trading rules based on price transformations, moving averages, regressions, inter-market and intra-market price correlations, cycles, or recognition of chart patterns. The patterns in market prices are assumed to recur, and thus these patterns can be used to predict future price movements. Critics argue that these patterns are simply random effects on which analysts impose causation, and bear no useful information, especially in the long term. Nonetheless, about 30 to 40% of practitioners appear to believe that technical analysis is useful in exploiting the information created by price movements in short time horizons of up to six months.1 Taylor and Allen (1992), based on a survey among foreign exchange dealers in London, found that at least 90% of respondents place some weight on technical analysis. In addition, the results of this survey revealed a preference for technical, rather than fundamental, analysis at shorter time horizons. Lui and Mole (1998) report the results of a similar survey conducted in 1995 among foreign exchange dealers in Hong Kong. They found that over 85% of respondents rely on both fundamental and technical analyses and, again, technical analysis was more popular at shorter time horizons. As pointed out by Grossman and Stiglitz (1980) in their discussion of a fully revealing rational expectations equilibrium, the work of technical analysts is essential in capitalizing into current market prices the information possibly hiding behind patterns in price movements.
The paradox is that in detecting these patterns and trading on this information, analysts destroy these patterns and make the market weak form efficient, where no patterns may be profitably exploited.2 Put another way, if the market were weak form efficient, technical analysis would be a complete waste of time and money (Grossman and Stiglitz, 1980), since all possible patterns would have been identified and exploited, leaving nothing but white noise. But if no one performs technical analysis, the information contained in historical patterns will not be capitalized into the market price, thus giving rise to profit opportunities for those who discover this information first. Thus, it is interesting to investigate whether patterns exist in the market as a whole or in certain understudied portions of it. The Efficient Market Hypothesis is one of the most important and widely disputed propositions in finance. The claim is that prices fully reflect all available information in the market, and any forecasting of future price changes is therefore purely speculative. There is what Lo and MacKinlay (1999, p. 4) call "a wonderfully counter-intuitive and seemingly contradictory flavor" to the idea of informationally efficient markets: the greater the number of participants, the better their training and knowledge, and the faster the dissemination of information, the more efficient a market should be; and "the more efficient the market, the more random the sequence of price changes generated by such a market, and the most efficient market of all is one in which price changes are completely random and unpredictable". Park and Irwin (2007), in a comprehensive survey of technical analysis papers, split the empirical literature into early (1960–1987) and modern (1988–2004) studies.
This classification is based on the number of technical trading systems used, the attention paid to transaction costs, risk factors, data mining issues, parameter optimization, verification of findings with out-of-sample data, and the statistical tests used in the analysis. Modern studies are then split into seven sub-groups based on differences in testing procedures. Park and Irwin (2004, 2007) explain that standard studies include parameter optimization and out-of-sample tests, adjustment for transaction costs and risk, and statistical tests. In model-based bootstrap studies, researchers conduct statistical tests on trading returns using the model-based bootstrap approach first discussed in Brock et al. (1992). Reality check and genetic programming studies include papers that attempt to solve data-mining problems by using the bootstrap reality check method introduced by White (2000) or the genetic programming technique of Koza (1992). Non-linear studies apply non-linear techniques such as feed-forward neural networks or nearest neighbor regressions to identify patterns in price time series or to estimate the profitability of technical trading rules. Chart pattern studies, such as those published by Lo et al. (2000) and Dawson and Steeley (2003), develop and apply recognition algorithms to chart patterns. The last category, other studies, contains papers that do not fit easily into any of the categories described above. In general, the early studies revealed limited evidence of the profitability of technical trading rules with stock market data and thus supported weak form market efficiency. In contrast, Park and Irwin (2007), in their survey of 95 modern studies, find that technical trading strategies are profitable (a sign of weak form inefficiency) in 56 cases and not profitable (a sign of weak form efficiency) or ambiguous in 39 cases. This weakens the case for weak form efficiency.3
Our own work hopes to supplement the findings of previous scholars using data from a different stock market (the Toronto Stock Exchange) and blending techniques from Brock et al. (1992) and Lo et al. (2000). Brock et al. (1992), in a pioneering paper which uses a very long time series of historical prices (1897 to 1986), apply the model-based bootstrap methodology to obtain simulated data with the same statistical characteristics as the actual data. This allows the authors to draw statistical inferences on the profitability of various trading rules. Using widely recognized chart patterns, they compare the returns obtained from the buy and sell signals in the actual price time series to the returns from the simulated price time series. Their results show that buy (sell) signals from the technical trading rules generate positive (negative) returns across all 26 rules and four sub-periods tested. All the buy–sell differences in returns are positive and outperform the returns generated by the simple buy-and-hold strategy. We must note, however, that their results have been challenged by Sullivan et al. (1999), who argue that these results are an artifact of data mining: trading rules are subject to a selection bias whereby only those that perform well continue to be examined, even though they may be only a small subset of all trading rules available. The paper by Lo et al. (2000) is one of the first papers to automate the process of chart pattern recognition. These authors study 10 reversal patterns based on a set of consecutive extrema points that trace the particular geometrical form corresponding to each of these 10 patterns. Lo et al. (2000) apply this methodology to a large set of individual stocks traded on the NYSE/AMEX and NASDAQ over the 1962–1996 period, as well as the market indices on these U.S. exchanges.
The authors perform a goodness-of-fit test to compare the quantiles of returns generated by these technical patterns with returns over the whole period of their study (and thus unaffected by these patterns). Dawson and Steeley (2003) replicate and extend the work of Lo et al. (2000) using UK stock market data and the same set of technical patterns. Whereas Lo et al. (2000) find more patterns in market data than in their simulated data, Dawson and Steeley (2003) find the opposite, although the frequencies of occurrence of these patterns in the UK market are the same as those found by Lo et al. (2000) in the US markets. As we show below, the results of our study, which uses data from a third market (the Toronto Stock Exchange), support the findings of Dawson and Steeley (2003). These authors also follow Lo et al. (2000) in investigating the returns distributions conditioned on these technical patterns and find, as do Lo et al. (2000), that they are significantly different from the unconditioned returns distributions. However, although the conditioned and unconditioned distributions differ statistically, their means are not significantly different from each other. We hypothesize that this may be due to differences in the higher moments of these distributions. Later in this paper we investigate further the conditioned and unconditioned return distributions in our Canadian data. Testing for weak form efficiency using indices (e.g. Brock et al. (1992)) is problematic, as the indices themselves are not traded in the spot market. We believe that in order to test for weak form efficiency in the market, a vast pool of individual stocks must be analyzed rather than a stock market index. In this paper, we use a model-based bootstrap to generate a series of simulated trials and apply a modified chart pattern recognition algorithm to stocks listed on the Toronto Stock Exchange (TSX), Canada's largest stock market.
Specifically, we generate a number of random price time series with the same distribution characteristics as the underlying asset in order to eliminate any time dependencies present in the real data, and we count the number of reversal patterns in the simulated data. Patterns, if consistently identified, carry information that adds value to the investment process; however, this informativeness does not guarantee profitability. We compare the results from the actual and the simulated series to draw inferences on weak form efficiency in the overall Canadian market and in its sectors. The weak form efficiency hypothesis will be rejected when we find a significantly larger number of reversal patterns in the real price series than in the simulated series. On the other hand, if the number of patterns identified in the real time series is the same as (or fewer than) in the simulated time series, then technical analysis cannot be gainfully applied and the weak form of the efficient market hypothesis cannot be rejected.
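The test logic described above, counting reversal patterns in the real series and comparing the count against bootstrap-simulated series with the same return distribution, can be sketched as follows. This is a minimal illustration under a random-walk null using a deliberately simplified head-and-shoulders rule; the function names, rolling window, and shoulder tolerance are our own assumptions, not the paper's actual recognition algorithm.

```python
import numpy as np

def rolling_extrema(prices, window=3):
    """Return (index, kind) for local maxima (+1) and minima (-1)."""
    labels = []
    for i in range(window, len(prices) - window):
        seg = prices[i - window:i + window + 1]
        if prices[i] == seg.max():
            labels.append((i, +1))
        elif prices[i] == seg.min():
            labels.append((i, -1))
    return labels

def count_hs_patterns(prices, window=3, tol=0.03):
    """Count head-and-shoulders shapes: max-min-max-min-max with the
    middle maximum (head) above two roughly equal shoulders."""
    ext = rolling_extrema(prices, window)
    count = 0
    for j in range(len(ext) - 4):
        kinds = [k for _, k in ext[j:j + 5]]
        if kinds != [+1, -1, +1, -1, +1]:
            continue
        e1, _, e3, _, e5 = (prices[i] for i, _ in ext[j:j + 5])
        if e3 > e1 and e3 > e5 and abs(e1 - e5) <= tol * e3:
            count += 1
    return count

def bootstrap_pattern_test(prices, n_sim=200, seed=0):
    """Compare the pattern count in the real series with counts in paths
    whose log returns are i.i.d. resamples of the real returns
    (random-walk null), so any time dependence is destroyed."""
    rng = np.random.default_rng(seed)
    rets = np.diff(np.log(prices))
    real = count_hs_patterns(prices)
    sim_counts = []
    for _ in range(n_sim):
        path = prices[0] * np.exp(np.cumsum(rng.choice(rets, size=len(rets))))
        sim_counts.append(count_hs_patterns(np.concatenate([[prices[0]], path])))
    # One-sided bootstrap p-value: fraction of simulated paths showing at
    # least as many patterns as the real series.
    p_value = np.mean([c >= real for c in sim_counts])
    return real, p_value
```

A small p-value would indicate significantly more patterns in the real data than under the null, the situation in which the paper would reject weak form efficiency.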
English Conclusion
Adding to the work of Lo et al. (2000) for the US and Dawson and Steeley (2003) for the UK, we use Canadian stock market data to compare the number of occurrences of ten well-known technical reversal patterns identified in the market data with the number of patterns found in simulated price paths obtained from two null models, random walk and EGARCH. We simulate a range of price paths for each of the stocks to carry out statistical inference and determine the statistical significance of our results. Table 11 summarizes the results and shows the proportion of stocks with a significantly greater number of patterns identified in the real data than in the simulated data under the two null models. The results we obtain using the random walk model are quite different from those obtained with EGARCH: with the random walk over the period 1980–2010, for example, we find that 42% of All stocks have HS patterns and 40% have IHS, whereas for the same period and the same All stocks subgroup under EGARCH we detect only 6% of stocks as having HS and IHS patterns. Although we fail to reject the null hypothesis of weak form efficiency on the TSX, some sectors of the Canadian economy appear to be less efficient than others. A further breakdown of the data into subperiods and subsequent analysis of each of these periods leads us to the same conclusion.20 The data we collected on the number of reversal patterns identified through a pattern recognition algorithm were aggregated over all 30 years. However, over this period, economic conditions, as well as the technological advances which enable today's markets to share information instantly and across several trading floors, have changed. On April 23, 1997, for example, the TSE's trading floor closed, making it the second-largest stock exchange in North America to choose a floorless, electronic (or virtual trading) environment. In 2000, the Toronto Stock Exchange became a for-profit company. These economic and technological events make a natural break point in the sample.
Thus, analysis of the market efficiency is more appropriately done over smaller sub-periods.
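The two null models used to generate the simulated price paths, a random walk with drift and EGARCH, can be sketched as follows. This is an illustrative sketch only: the EGARCH(1,1) parameter values below are assumptions chosen for a stable, realistic-looking path, not estimates fitted to TSX data as in the paper.

```python
import numpy as np

def simulate_random_walk(p0, mu, sigma, n, rng):
    """Random-walk-with-drift null: i.i.d. Gaussian log returns."""
    rets = rng.normal(mu, sigma, n)
    return p0 * np.exp(np.cumsum(rets))

def simulate_egarch(p0, n, rng, omega=-0.5, beta=0.95, alpha=0.1, gamma=-0.05):
    """EGARCH(1,1) null with log-variance recursion
        ln(sig2_t) = omega + beta*ln(sig2_{t-1})
                     + alpha*(|z_{t-1}| - sqrt(2/pi)) + gamma*z_{t-1},
    where z ~ N(0, 1); gamma < 0 captures the leverage effect
    (negative shocks raise volatility more than positive ones).
    Parameter values are illustrative assumptions, not TSX estimates."""
    log_sig2 = omega / (1.0 - beta)  # start at the unconditional level
    rets = np.empty(n)
    for t in range(n):
        z = rng.standard_normal()
        rets[t] = np.exp(0.5 * log_sig2) * z
        log_sig2 = (omega + beta * log_sig2
                    + alpha * (abs(z) - np.sqrt(2.0 / np.pi)) + gamma * z)
    return p0 * np.exp(np.cumsum(rets))
```

Paths from the random-walk null have constant volatility, while EGARCH paths exhibit volatility clustering; since pattern frequencies appear negatively related to variance and kurtosis, the choice of null model can change the pattern counts in the simulated series, consistent with the large difference the paper reports between the two sets of results.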