Download English-language ISI article no. 24152

English title
Bootstrap J tests of nonnested linear regression models
Article code: 24152 | Publication year: 2002 | Length: 27 pages (PDF)
Source

Publisher: Elsevier - ScienceDirect

Journal: Journal of Econometrics, Volume 109, Issue 1, July 2002, Pages 167–193

Keywords
Bootstrap test, Nonnested hypotheses, Simulation, Linear regression

Abstract

The J test for nonnested regression models often overrejects very severely as an asymptotic test. We provide a theoretical analysis which explains why and when it performs badly. This analysis implies that, except in certain extreme cases, the J test will perform very well when bootstrapped. Using several methods to speed up the simulations, we obtain extremely accurate Monte Carlo results on the finite-sample performance of the bootstrapped J test. These results fully support the predictions of our theoretical analysis, even in contexts where the analysis is not strictly applicable.

Introduction

Numerous procedures for testing nonnested regression models have been developed, directly or indirectly, from the pathbreaking work of Cox (1961) and Cox (1962). The most widely used, because of its simplicity, is the J test proposed in Davidson and MacKinnon (1981); see McAleer (1995) for evidence on this point. Like almost all nonnested hypothesis tests, the J test is not exact in finite samples. Indeed, as many Monte Carlo experiments have shown, its finite-sample distribution can be very far from the N(0,1) distribution that it follows asymptotically.

Several ways have been proposed to improve the finite-sample properties of the J test. Fisher and McAleer (1981) proposed a variant, called the JA test, which is exact in finite samples under the usual conditions for t tests in linear regression models to be exact; see Godfrey (1983). Unfortunately, the JA test is often very much less powerful than other nonnested tests; see, among others, Davidson and MacKinnon (1982) and Godfrey and Pesaran (1983). The latter paper suggested a different approach, applied not to the J test but to variants of the Cox test based on the work of Pesaran (1974). This approach first corrects the bias in the numerator of the test statistic, then estimates the variance of the corrected numerator, and finally calculates a t-like statistic. It does not yield exact tests, but it does yield tests that perform considerably better than the J test under the null and have good power.

More recently, Fan and Li (1995) and Godfrey (1998) have suggested bootstrapping the J test and other nonnested hypothesis tests. Because the J test is cheap and easy to compute, this is very easy to do. The Monte Carlo results in these papers suggest that bootstrapping the J test often works very well. However, neither paper provides any theoretical explanation of why it does so.
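The mechanics behind this discussion are simple: to test H1 (y = Xβ + u) against a rival model H2 (y = Zγ + v), the J test augments H1 with the fitted values from H2 and takes the t-statistic on their coefficient, which is asymptotically N(0,1); bootstrapping it means regenerating samples from H1 estimated under the null and recomputing the statistic on each one. The following NumPy sketch illustrates this under simplifying assumptions (the function names and the residual-resampling scheme are illustrative, not the paper's own code):

```python
import numpy as np

def j_statistic(y, X, Z):
    """J test of H1 (y = X b + u) against H2 (y = Z g + v):
    augment H1 with the fitted values from H2 and return the
    t-statistic on their coefficient (asymptotically N(0,1))."""
    yhat2 = Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # fitted values from H2
    W = np.column_stack([X, yhat2])                    # H1 regressors plus yhat2
    coef, *_ = np.linalg.lstsq(W, y, rcond=None)
    resid = y - W @ coef
    n, k = W.shape
    s2 = resid @ resid / (n - k)                       # OLS error-variance estimate
    cov = s2 * np.linalg.inv(W.T @ W)
    return coef[-1] / np.sqrt(cov[-1, -1])             # t-statistic on yhat2

def bootstrap_j_pvalue(y, X, Z, B=999, rng=None):
    """Bootstrap p-value: resample residuals from H1 estimated under
    the null, recompute the J statistic on each bootstrap sample."""
    rng = np.random.default_rng(rng)
    tau = j_statistic(y, X, Z)                         # statistic on the actual data
    b1, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted, resid = X @ b1, y - X @ b1
    count = 0
    for _ in range(B):
        y_star = fitted + rng.choice(resid, size=len(y), replace=True)
        if abs(j_statistic(y_star, X, Z)) >= abs(tau):
            count += 1
    return (count + 1) / (B + 1)                       # two-sided bootstrap p-value
```

Because the statistic requires only two OLS regressions, each bootstrap replication is cheap, which is why the papers cited above describe bootstrapping the J test as very easy to do.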
In this paper, we develop a theoretical approach that enables us to show precisely what determines the finite-sample distribution of the J test. We explain why it often works very badly without bootstrapping and why it almost always works very well indeed when bootstrapped. The theory allows us to identify situations in which the tests can be expected to achieve their worst behavior, and our Monte Carlo experiments focus on these. Since the tests perform very well even in such situations, the experiments need to be very accurate. Fortunately, our theory provides a low-cost way to perform experiments that use extremely large numbers of replications.

The assumptions needed for our theoretical analysis are fairly restrictive: The errors are assumed to be normally distributed, and the regressors are assumed to be exogenous. However, additional Monte Carlo experiments strongly suggest that these assumptions are not crucial. Even when both of them are violated, the bootstrap J test performs in almost exactly the same way as it does when they are satisfied.

In the next section, we briefly describe the J test. In Section 3, we derive a theoretical expression for the test statistic and use it to obtain a number of interesting results. In Section 4, we use a combination of theory and simulation to study the finite-sample properties of the asymptotic J test. In Section 5, we study the finite-sample properties of the bootstrap J test. In Section 6, we relax the restrictive assumptions made up to this point and show that the bootstrap J test works extraordinarily well in almost every case in which a nonnested test is worth doing. Finally, in Section 7, we briefly discuss the effect of bootstrapping on the power of the J test.

Conclusion

Most Monte Carlo experiments on the performance of hypothesis tests are not very conclusive. They often suffer from excessive experimental error, and they inevitably deal with only a tiny subset of all the possible DGPs. In contrast, except for the thousands of experiments with random parameters discussed in Section 6, which allow us to deal with a very large number of DGPs, our experiments utilize very large numbers of replications. In the case of the experiments of Section 5, our theoretical results made this feasible. In the case of the experiments of Section 6, we were able to use a previous theoretical result to avoid actually computing bootstrap tests for most sample sizes.

The principal reason that our results are quite conclusive is that they are based on a detailed theory of the finite-sample distribution of the J test. This theory shows that the value of a parameter that we call ||θ|| is crucial. Based on this theory, we were able to identify cases in which the bootstrap J test can be expected to work particularly badly, and we made these the focus of our experiments. That the test nevertheless works extraordinarily well, albeit somewhat less well in extreme cases where ||θ|| is very small, provides very strong evidence that the bootstrap J test is a reliable procedure in general.

Although our theoretical results were developed for the case in which the error terms are normally distributed and the regressors are exogenous, there are, as we explained in Section 3, good reasons to believe that they apply more generally. In Section 6, we provided a great deal of simulation-based evidence that they in fact do so. The theoretical results also did not deal with the case in which the null hypothesis is false, but general results on bootstrap tests, which are confirmed by the simulations of Section 7, suggest that bootstrapping the J test will have little effect on its size-corrected power.