Value at Risk: A theoretical approach to pricing and performance of risk measurement systems
|Article code||Publication year||English article||Persian translation||Word count|
|1895||2012||15-page PDF||Available on order||7,160 words|
Publisher : Elsevier - Science Direct
Journal : Journal of Economics and Business, Volume 64, Issue 3, May–June 2012, Pages 199–213
Risk-based capital adequacy requirements are the main tool employed by government regulators to assure bank stability. This approach allows banks to choose from a number of alternative methods for calculating the required capital. Systems for measuring risk differ significantly in cost, precision, and in the potential “capital savings”. We develop a statistical model, based on queuing theory, for evaluating risk measurement systems and optimizing the selection process. The selection of the optimal system is a function of available capital and of the volume and character of bank activity. While the most precise system may lower a bank's minimal capital reserve requirements, it is not necessarily the optimal system once total costs are evaluated.
Regulators in many countries use capital adequacy requirements to control the stability of banks by restricting their exposure to risk. The initial Basel 1988 Capital Accord was based on a fixed ratio of risk-weighted assets (typically 8%) that should be financed by equity or equity-equivalent instruments. This accord has been adopted in most countries (Annual Report of BIS) and is designed to address credit risk. The original Capital Accord has been revised a few times since then. The revised capital adequacy framework is based on a more flexible approach to risk management in banking. As stated in the annual report of BIS, “While there is a continued focus on internationally active banks, the underlying principles should be suitable for application to banks of varying levels of complexity and sophistication in all countries”. The Basel 1988 Accord reflects credit risk and is applied somewhat arbitrarily to the whole portfolio, with allowances for risk-weighting the asset categories. The Basel 1998 Accord recognizes a part of the portfolio as a trading book. A different approach is used for measuring the risk of this part, which is comprised of publicly traded assets: internal models based on either standard or VaR-based evaluation are allowed. The Basel 2 approach takes this one step further by allowing internal models for measuring (non-traded) credit risk. In the last few years, BIS conducted a series of quantitative impact studies (QIS) in order to gauge the impact of the new approach on minimal capital requirements (MRC). We focus in this paper on the advantages of sophisticated models of risk measurement, and specifically on the capital savings reflected in reducing MRC. Minimal capital requirements are contingent primarily on risk assessment, which includes both market and credit risk measurement. In many countries the first part – market risk assessment – is already implemented, meaning that banks employ internal models to quantify their market risk.
The current regulatory framework gives banks greater discretion by allowing them to choose between the standard incremental risk and better-tailored VaR-based approaches to risk management (see Jackson, Maude, and Perraudin (1997)). VaR for capital requirements is measured as the lower 1% quantile of the Profit & Loss distribution over a 10-business-day horizon. Models employed to calculate VaR vary between institutions both in terms of their sophistication and the risk factors used. This measure became very popular during the last decade, particularly in view of the “capital savings” it affords financial institutions. In general, adoption of the VaR approach has enabled banks to meet capital adequacy requirements with capital reserves of less than the standard 8%. However, the measure itself has several problems, such as the lack of sub-additivity. Moreover, the methods used for calculating VaR are based on different assumptions, and often produce results with low precision. Banks may pay dearly for over-simplification and risk incurring regulatory surcharges for inaccurate internal models. In spite of these shortcomings, there remains a general consensus that the correct approach to capital adequacy is based on probability distribution rather than an arbitrary rule of thumb, such as the flat 8% of assets mandated in the original 1988 Basel Accord. The goal of this paper is not to compare the different approaches to risk measurement, but rather to price the added value to a bank from using an internal model for risk measurement. Our model quantifies the benefit of lowering the minimal capital requirements for a given level of risk against the cost of developing and employing a risk measurement system. There are many ways to implement VaR. They vary in cost and in their degree of precision. Similarly, the capital savings derived from employing VaR vary across institutions and the specific models employed.
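The regulatory definition above – the lower 1% quantile of the Profit & Loss distribution over a 10-business-day horizon – can be sketched with a simple historical-simulation estimate. The P&L series below is synthetic illustrative data, not figures from the paper, and the square-root-of-time scaling is one common (i.i.d.) convention:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily P&L history (in $ millions); illustrative data only.
daily_pnl = rng.normal(loc=0.0, scale=1.5, size=1000)

# Regulatory VaR: the lower 1% quantile of the P&L distribution,
# reported as a positive loss figure.
var_1d = -np.quantile(daily_pnl, 0.01)

# Scale the one-day figure to the 10-business-day horizon
# (square-root-of-time rule, which assumes i.i.d. daily P&L).
var_10d = var_1d * np.sqrt(10)

print(f"1-day 99% VaR:  {var_1d:.2f}")
print(f"10-day 99% VaR: {var_10d:.2f}")
```

A historical-simulation estimate like this makes no distributional assumption about the P&L series itself; its precision depends heavily on the length and relevance of the sample window.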
An optimal choice regarding risk measurement systems, therefore, is contingent on the capitalization of a bank and the type of primary activity in which the bank engages. In this paper we propose a model for optimizing the selection of a risk measurement system. The list of available software for measuring risk is long and constantly growing (see for example Kates (2000) for a comparison of 50 different systems). These programs calculate the required capital according to the standard model or according to the P&L distribution. The implementation of an effective risk management system should improve the bank's performance, i.e. allow for better risk diversification and more precise hedging. From the standpoint of the bank, an optimal decision weighs the costs required in developing or purchasing such a system against performance benefits. A different approach, based on the optimization of the capital structure of a financial firm, is described in Shepheard-Walwyn and Litterman (1998). Many ready-to-use systems are currently available: CARMA, RiskWatch, RiskMetrics, Four Fifteen, Outlook, TARGA, Kamakura and Panorama, to mention a few. The prices of the software vary from a few thousand dollars a year for lower-end products to millions of dollars a year for the upper end. In addition to the initial investment, the costs of implementation, updates, databases, salaries and other expenses can contribute significantly to the total cost of ownership (see Spain (2000)). Risk measurement systems differ in their approaches and underlying assumptions. While some are based on historical simulations, others use the variance-covariance method (based on normality and linearity assumptions). More sophisticated systems employ Monte Carlo methods for risk measurement. Some research has been done in comparing different systems and approaches (see for example Marshall and Siegel (1997), and Vlaar (2000)).
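As a sketch of the variance-covariance method mentioned above (which rests on normality and linearity assumptions), the following computes a one-day 99% VaR for a hypothetical two-asset position. All exposures and covariances are illustrative assumptions, not figures from the paper:

```python
import numpy as np

# Hypothetical dollar exposures to two assets (in $ millions).
positions = np.array([10.0, 5.0])

# Assumed covariance matrix of daily asset returns (illustrative values).
cov = np.array([[0.0004, 0.0001],
                [0.0001, 0.0009]])

# Portfolio P&L standard deviation under the linearity assumption:
# sigma_p = sqrt(w' * Cov * w) for dollar exposure vector w.
sigma_p = np.sqrt(positions @ cov @ positions)

# One-sided 99% quantile of the standard normal distribution.
z_99 = 2.326

var_1d = z_99 * sigma_p
print(f"1-day 99% variance-covariance VaR: {var_1d:.3f}")
```

The method's low cost comes precisely from these simplifying assumptions; for portfolios with nonlinear instruments (options, for instance) the linearity assumption breaks down, which is one reason the more expensive systems turn to Monte Carlo simulation.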
Because software packages change and are constantly updated, they vary greatly in their specifications and cannot be easily compared. However, a significant correlation has been observed between the price and the precision of these systems: generally, the more expensive systems are also the more sophisticated and precise. Not all banks are necessarily interested in the most sophisticated system. Many banks, particularly small and medium-sized ones, shy away from active risk management and incorporate risk measurement systems solely to satisfy regulatory requirements. A bank employing risk measurement systems primarily for reporting purposes will likely seek a less expensive solution that probably renders less precise results. Imprecision may lead to an overestimation of risk that will be “corrected” by maintaining a higher level of capital reserves. Higher de facto capital requirements may become binding in the sense that, after committing to a certain system, the bank will need additional capital reserves to extend business. The selection of a risk measurement system thus carries implications that extend beyond the realm of portfolio management. Even for banks that opt not to engage in active risk management, the selection of a risk measurement system can have a significant impact on business strategy and financial structure. In a perfect market, capital requirements should not constitute a significant restriction (as predicted by the Modigliani–Miller Propositions) to business activity. However, in reality, issuing new capital is costly and is often interpreted by the market as a negative signal. Accordingly, banks should prefer developing a more precise risk measurement system to raising new capital. In any case, an optimal decision must take these considerations into account. An optimal risk measurement system must not only accurately capture the risk profile of the bank's asset portfolio, but enable the bank to optimize its performance as well.
In recent years we have witnessed the growing role of securitization as part of risk management practices, as well as an extensive use of various off-balance-sheet activities. These steps are taken in order to reduce capital requirements by mitigating some types of risks and transferring them to other market participants. However, as the financial crisis of 2008 clearly demonstrated, not every type of hedging eliminates risk completely. Often hedging only reduces some components of risk, typically leaving basis risk (and other types of risk) with the originating bank. Here again a more advanced risk management system can provide an advantage and capital savings by using more precise methods of risk measurement. Other recent developments are various types of hybrid instruments used for raising capital, from subordinated debt with coupon payments subject to regulatory requirements to CoCo (Contingent Convertible) bonds. Regulators are still considering how to treat these forms of capital. All this, together with the continuing consolidation of banks, creates an incentive to make a strategic selection of the most appropriate risk measurement system – one that matches the specific needs of each bank and is able to deal with various types of risk – as opposed to using old and oversimplified methods similar to the Cooke (1998–2006) and McDonough (from 2007) ratios. Our goal is to develop a simple and intuitive model that can be used by banks in their decision to choose the most appropriate risk measurement system. The model takes into account such parameters as the percentage of traded assets, the available capital and the typical activity of a bank.
Conclusion
The current regulatory environment in financial markets around the world encourages the adoption of active risk management. Since Basel 1998 and Basel 2000, regulatory requirements in a growing number of countries allow banks to depart from fixed capital adequacy rates and develop internal models better tailored to the task of assessing asset risk. Sophisticated risk measurement systems enable financial institutions to reduce minimal capital requirements by using more precise models. To take advantage of this opportunity, banks have to discover the optimal tools for measuring risk. Each bank must factor in the level of precision with which it chooses to measure risk. The selection of an optimal model affects not only the level of capital required to meet regulatory requirements, but can have a significant impact on both the character and scope of a bank's business activity. Opportunity costs can be avoided and profitability increased through the selection of the proper risk measurement system. In this paper, we provide a framework for the optimal selection of a risk measurement system under the assumption that a higher degree of accuracy corresponds to a higher cost to the bank. We limit our analysis to the simplest use of such a system – the reduction of the required capital; however, the same approach can be extended to incorporate additional benefits of active risk management. This model reflects the reality that many small to medium-sized banks are interested in risk measurement models mainly due to the pressure of regulators through capital requirements. The suggested model is simple, intuitive and flexible. The results of the model demonstrate that the most accurate (and therefore expensive) systems are appropriate for larger banks with low capitalization, which operate in an unstable environment. These banks face radical and dynamic variations between high and low levels of activity.
The higher the capital cushion, the lower the cost of an optimal risk measurement system. Similarly, a stable business environment (many small independent projects) decreases the optimal cost, while a more volatile environment (fewer, short-term, larger projects) requires a more sophisticated system at a higher cost. An extension of this model might incorporate an analysis of various types of activity, such as loans and deposits with a discount window. Due to the simplicity and ease of use of this model, further variations can be introduced into the general framework presented here.