Persian Title

Investigating the harmful effects of contaminated data in risk management

Article code: 764 | Publication year: 2011 | English article: 15 pages (PDF)
English Title
The pernicious effects of contaminated data in risk management
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Banking & Finance, Volume 35, Issue 10, October 2011, Pages 2569–2583

Keywords
Capital regulation; Proprietary trading; Value-at-Risk; Profit-and-loss; Backtesting

Abstract

Banks hold capital to guard against unexpected surges in losses and long freezes in financial markets. The minimum level of capital is set by banking regulators as a function of the banks’ own estimates of their risk exposures. As a result, a great challenge for both banks and regulators is to validate internal risk models. We show that a large fraction of US and international banks uses contaminated data when testing their models. In particular, most banks validate their market risk model using profit-and-loss (P/L) data that include fees and commissions and intraday trading revenues. This practice is inconsistent with the definition of the employed market risk measure. Using both bank data and simulations, we find that data contamination has dramatic implications for model validation and can lead to the acceptance of misspecified risk models. Moreover, our estimates suggest that the use of contaminated data can significantly reduce (market-risk induced) regulatory capital.

Introduction

By gradually expanding their activities, modern banks have exposed themselves to a broader risk spectrum. In response, they have developed large-scale risk-management systems to monitor risks within their banking and trading books. Over the past 15 years, these internal risk models have been increasingly used by banking regulators to impose on banks minimum levels of capital. If inaccurate, in-house risk assessments can lead to inappropriate levels of regulatory capital. Hence, the validation process of internal risk models turns out to be of paramount importance to guarantee that banks have adequate capital to cope with unexpected surges in losses and long freezes in financial markets. Nevertheless, the recent financial turmoil has cast serious doubt on current practices and calls for a more rigorous examination of banks’ risk models. Following a series of risk management failures (Stulz, 2008 and Stulz, 2009), new proposals on capital regulation have flourished at an unprecedented pace (Basel Committee on Banking Supervision, 2009a). In this context of profound regulatory uncertainty, it has never been so imperative for banks to prove that their risk-management systems are sound.

In this paper, we analyze the process by which banks appraise the validity of their risk models. Using a sample that includes the largest commercial banks in the world, our analysis reveals a key inconsistency in the way banks validate their models. We uncover that most banks use inappropriate data when testing the accuracy of their risk models. In particular, we document that a large fraction of banks artificially boost the performance of their models by polluting their profit-and-loss (P/L) with extraneous profits such as intraday revenues, fees, commissions, net interest income, and revenues from market making or underwriting activities.

In order to understand the inconsistency identified in this paper, consider a simple bank that only trades one asset, say asset A. To measure its market risk exposure and determine its regulatory capital, the bank typically computes its 1-day ahead 99% Value-at-Risk (VaR), which is simply the VaR of asset A times the number of units owned at the end of a given day. The “perimeter” of the VaR model includes all trading positions that are marked-to-market, i.e., the trading book of the bank. Periodically, the banking regulator checks whether the VaR model is producing accurate figures. To do so, it compares the daily P/L of the trading portfolio to the daily VaR, a process known as backtesting. If the model is correctly specified, the bank should experience a VaR exception (i.e. P/L lower than VaR) one percent of the time, that is 2.5 days per year. To formally validate its model, the bank faces two key requirements. First, as VaR is based on yesterday’s positions, the P/L used in backtesting must imperatively reflect the gains and losses that would result from yesterday’s positions. Second, the P/L must only include items that are used to compute the VaR. As a result, it should not comprise intraday trading revenues (due to changes in the number of assets owned) and revenues and fees from activities that are not included in the risk model perimeter. If it does, the P/L is contaminated and backtesting may be severely flawed.

The issue of P/L contamination is not new. Indeed, it was already mentioned by the Bank for International Settlements (BIS) in the 1996 Amendment of the Basel Accord:

“While this is straightforward in theory, in practice it complicates the issue of backtesting. For instance, it is often argued that value-at-risk measures cannot be compared against actual trading outcomes, since the actual outcomes will inevitably be “contaminated” by changes in portfolio composition during the holding period. According to this view, the inclusion of fee income together with trading gains and losses resulting from changes in the composition of the portfolio should not be included in the definition of the trading outcome because they do not relate to the risk inherent in the static portfolio that was assumed in constructing the value-at-risk measure. […] To the extent that the backtesting program is viewed purely as a statistical test of the integrity of the calculation of the value-at-risk measure, it is clearly most appropriate to employ a definition of daily trading outcome that allows for an “uncontaminated” test.”
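The backtesting mechanics described above can be sketched in a few lines. The simulation below is purely illustrative and is not the paper's code: all numbers (volatility, fee level, sample size) are hypothetical assumptions. A correctly specified 99% VaR model should produce roughly 2.5 exceptions over a 250-day year on the clean P/L, while adding steady fee income to the P/L mechanically hides exceptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 250                 # one trading year
sigma = 1.0                  # daily P/L volatility (arbitrary units)

# "Clean" P/L: gains and losses on yesterday's positions only.
clean_pl = rng.normal(0.0, sigma, n_days)

# Contaminated P/L: clean P/L plus steady fees and commissions,
# which shift the whole distribution upward.
fees = 0.5 * sigma           # hypothetical daily fee income
contaminated_pl = clean_pl + fees

# 1-day 99% VaR: the 1st percentile of the (correctly specified)
# P/L distribution, estimated here by Monte Carlo.
var_99 = np.quantile(rng.normal(0.0, sigma, 100_000), 0.01)

def count_exceptions(pl, var):
    """A VaR exception occurs on days when the P/L falls below the VaR."""
    return int(np.sum(pl < var))

print("99% VaR:", round(var_99, 3))
print("Exceptions, clean P/L:       ", count_exceptions(clean_pl, var_99))
print("Exceptions, contaminated P/L:", count_exceptions(contaminated_pl, var_99))
```

Because the fee income shifts the P/L upward without changing the risk of the underlying positions, the contaminated series crosses the VaR threshold less often, making even a misspecified model look acceptable in a backtest.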

Conclusion

The latest financial crisis has demonstrated that miscalculating risk exposures can be lethal for financial institutions. Inaccurate risk assessments can lead to both excessive risk exposures and capital charges that are not sufficient to absorb losses. This concern is at the center of the current debate on the regulation of financial institutions. As a result, it has never been so urgent for banks to convince the general public and politicians that the risk-management systems in place are sound and efficient.

In this study, we identify a major inconsistency in the way banks validate their risk models. We find that most banks use contaminated data when assessing the quality of their models. This practice significantly alters backtesting results and may lead to inadequate regulatory capital.

There are two ways of addressing the problem we document in this paper. One way is for the banking regulators to clearly state what needs to be included in the P/L, and what needs to be stripped. Alternatively, regulators can let each bank choose the P/L definition that best fits their business lines. The former approach has the advantage of standardizing risk disclosure and easing comparison across banks. The benefit of the latter approach is that it offers the necessary flexibility to the banks to choose the risk management practices that fit their needs. In both cases, risk managers and regulators must check that the data used to validate a given risk model only include items that are modeled in this risk model.

Overall, this paper shows that the quality of the data used in risk management can be as important as the risk model in place. As such, the findings point to several interesting avenues for future research, two of which we outline here. First, it would be interesting to investigate whether data contamination also plagues the hedge fund industry. Indeed, VaR is often the preferred risk measure used by hedge fund managers to communicate about their risk-taking behavior. If confirmed, data contamination would have even stronger implications for backtesting since hedge funds compute VaR with a 1-month horizon and they rebalance their portfolio at a much higher frequency. Second, we do not attempt to examine the potential strategic dimension of data contamination. As a matter of fact, a legitimate idea would be to study whether non-systematic use of fees and commissions or strategic marking-to-model of positions can lead to some form of “P/L management”, in the spirit of earnings management (Burgstahler and Dichev, 1997). We look forward to additional research on these and related questions.
