Download English-language ISI Article No. 1347
Persian Translation of the Article Title

Benchmarking regression algorithms for loss given default (LGD) modeling

English Title
Benchmarking regression algorithms for loss given default modeling
Article code: 1347
Year of publication: 2012
Length: 10 pages (English PDF)
Source

Publisher: Elsevier - Science Direct

Journal: International Journal of Forecasting, Volume 28, Issue 1, January–March 2012, Pages 161–170

Keywords

Basel II - Credit risk - Data mining - Forecasting - Loss given default (LGD)
Article Preview

English Abstract

The introduction of the Basel II Accord has had a huge impact on financial institutions, allowing them to build credit risk models for three key risk parameters: PD (probability of default), LGD (loss given default) and EAD (exposure at default). Until recently, credit risk research has focused largely on the estimation and validation of the PD parameter, and much less on LGD modeling. In this first large-scale LGD benchmarking study, various regression techniques for modeling and predicting LGD are investigated. These include one-stage models, such as those built by ordinary least squares regression, beta regression, robust regression, ridge regression, regression splines, neural networks, support vector machines and regression trees, as well as two-stage models which combine multiple techniques. A total of 24 techniques are compared using six real-life loss datasets from major international banks. It is found that much of the variance in LGD remains unexplained, as the average prediction performance of the models in terms of R2 ranges from 4% to 43%. Nonetheless, there is a clear trend that non-linear techniques, and in particular support vector machines and neural networks, perform significantly better than more traditional linear techniques. Also, two-stage models built by a combination of linear and non-linear techniques are shown to have a similarly good predictive power, with the added advantage of having a comprehensible linear model component.
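
To illustrate the kind of one-stage comparison the abstract describes, below is a minimal sketch in Python with scikit-learn. The synthetic data, feature construction, and model settings are illustrative assumptions, not the authors' actual datasets or configurations, and only a handful of the 24 techniques are shown.

```python
# Minimal sketch of a one-stage LGD benchmarking loop (illustrative only;
# the synthetic data and model settings are assumptions, not the paper's setup).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))  # stand-in loan/borrower features
# Stand-in LGD target, clipped to [0, 1] as loss given default is a proportion.
y = np.clip(0.4 + 0.2 * np.tanh(X[:, 0]) + 0.1 * rng.normal(size=2000), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "OLS": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),
    "Regression tree": DecisionTreeRegressor(max_depth=5),
    "SVM (RBF)": SVR(C=1.0),
    "Neural network": MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0),
}

# Evaluate each technique out-of-sample with R2, the metric used in the study.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: R2 = {r2_score(y_test, model.predict(X_test)):.3f}")
```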

English Introduction

With the recent turmoil in credit markets, the topic of credit risk modeling has arguably become more important than ever before. Moreover, to comply with the Basel II Accord introduced at around the same time, financial institutions have had to invest heavily in the development of improved credit risk models. The Basel II Capital Accord sets out a framework that regulates the minimum amount of capital that financial institutions are required to hold as a safety cushion against unexpected credit, market, and/or operational losses. More specifically, the accord allows institutions to build credit risk models for three key risk parameters: probability of default (PD), loss given default (LGD) and exposure at default (EAD). From these, the regulatory capital is then derived.

So far, credit risk research has largely focused on the estimation and validation of the PD parameter. The LGD parameter, on the other hand, measures the economic loss, expressed as a percentage of the exposure, in case of default. In other words, LGD is the proportion of the remaining loan amount that the bank would not be able to recover. This parameter is a crucial input to the Basel II regulatory capital calculations, as it enters the capital requirement formulas in a linear way, unlike PD, which therefore has less of a direct effect on minimum capital (see the simplified formula at the end of this section). Hence, any changes in the LGD estimates produced by models have a strong bearing on the capital of a financial institution, and thus on its long-term strategy as well. It is therefore crucial to have models that estimate LGD as accurately as possible. This is not straightforward, however, as industry models typically show low R2 values, particularly for consumer lending portfolios. Such models are often built using ordinary least squares regression or regression trees (Bastos, 2009; Bellotti & Crook, 2007; Caselli & Querci, 2009; Gupton & Stein, 2002). Using a set of six real-life default loss datasets, this first large-scale LGD benchmarking study investigates whether other approaches can improve the prediction performance of these LGD models.

The remainder of this paper is organized as follows. Section 2 gives an overview of the regression techniques examined, the performance metrics used to evaluate and compare the models, the available datasets, and the experimental set-up used in this study. Next, Section 3 reports and discusses the experimental results obtained, and Section 4 concludes the paper.
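
To make the linearity point concrete, the unexpected-loss capital requirement K per unit of exposure under the Basel II IRB approach can be written in the following simplified form (the maturity adjustment is omitted; Phi denotes the standard normal CDF and rho the supervisory asset correlation):

```latex
% Simplified Basel II IRB capital requirement per unit of exposure
% (maturity adjustment omitted)
K = \mathrm{LGD} \cdot \left[
      \Phi\!\left(
        \frac{\Phi^{-1}(\mathrm{PD}) + \sqrt{\rho}\,\Phi^{-1}(0.999)}
             {\sqrt{1-\rho}}
      \right) - \mathrm{PD}
    \right]
```

LGD enters as a multiplicative factor, so any relative error in an LGD estimate carries over one-for-one into K, whereas PD enters only through the non-linear Phi terms.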

English Conclusion

In this first large-scale LGD benchmarking study, 24 regression techniques were evaluated on six real-life datasets obtained from major international banking institutions. The average performances of the models in terms of R2 ranged from 4% to 43%, showing that several of the resulting models have limited explanatory power. Nonetheless, there is a clear trend that non-linear techniques, and support vector machines and artificial neural networks in particular, yield significantly better model performance than more traditional linear techniques. This suggests the presence of non-linear relationships between the independent variables and LGD, in contrast to previous benchmarking studies on PD modeling, where the difference between linear and non-linear techniques was not as pronounced. The study has therefore clearly demonstrated the potential of applying non-linear techniques to LGD modeling, possibly combined with a linear model component in a two-stage setting so as to improve the comprehensibility of the resulting models (a sketch of such a two-stage design follows below).
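
As a sketch of how such a two-stage combination could be wired, the snippet below fits a comprehensible linear model first and then a non-linear model (here an SVR, as an assumed choice) to its residuals. This is one plausible reading of a two-stage design, not necessarily the exact scheme used in the paper.

```python
# Sketch of a two-stage LGD model: an interpretable linear first stage plus a
# non-linear second stage fitted to the linear stage's residuals (an assumed
# wiring for illustration; other two-stage variants exist).
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR


class TwoStageLGD:
    def __init__(self):
        self.linear = LinearRegression()  # comprehensible component
        self.nonlinear = SVR(C=1.0)       # picks up residual non-linearity

    def fit(self, X, y):
        self.linear.fit(X, y)
        residuals = y - self.linear.predict(X)
        self.nonlinear.fit(X, residuals)
        return self

    def predict(self, X):
        # Final prediction = linear estimate + non-linear residual correction.
        return self.linear.predict(X) + self.nonlinear.predict(X)


# Usage: TwoStageLGD().fit(X_train, y_train).predict(X_test)
```

The appeal of this design, as the conclusion notes, is that the linear stage remains directly interpretable while the non-linear stage recovers predictive power the linear model alone would leave on the table.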