Download English ISI article No. 29009

English Title
A Bayesian Networks approach to Operational Risk
Article code: 29009
Publication year: 2010
Number of pages: 8 (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Physica A: Statistical Mechanics and its Applications, Volume 389, Issue 8, 15 April 2010, Pages 1721–1728

Keywords
Operational Risk, Complex systems, Bayesian Networks, Time series, Value-at-risk, Different-times correlations
Article Preview

English Abstract

A system for Operational Risk management based on the computational paradigm of Bayesian Networks is presented. The algorithm allows the construction of a Bayesian Network tailored to each bank and takes into account, in a simple and realistic way, the correlations among the different processes of the bank. The internal losses are averaged over a variable time horizon, so that the correlations at different times are removed while the correlations at the same time are kept: the averaged losses are thus suitable for learning the network topology and parameters. Since the main aim is to understand the role of the correlations among the losses, the assessments of domain experts are not used. The algorithm has been validated on synthetic time series. It should be stressed that the proposed algorithm has been designed for practical implementation in a mid- or small-sized bank, since it has a small impact on the organizational structure of a bank and requires an investment in human resources that is limited to the computational area.
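The windowed aggregation described in the abstract can be pictured with a short sketch: daily internal losses for each process are combined over non-overlapping windows of T days, so that each window yields one learning pattern in which only same-time (cross-process) correlations survive. This is a minimal illustration under assumptions of ours (synthetic exponential daily losses, summation as the aggregation rule), not the authors' code.

```python
import numpy as np

# Illustrative parameters (not from the paper): L days of history, N processes,
# aggregation window of T days.
L, N, T = 3600, 3, 30
rng = np.random.default_rng(0)
daily_losses = rng.exponential(scale=10.0, size=(L, N))  # synthetic daily losses

# Aggregate each process over consecutive, non-overlapping T-day windows:
# different-times correlations are washed out inside each window, while the
# same-time correlations between processes survive in each aggregated record.
n_windows = L // T
patterns = daily_losses[: n_windows * T].reshape(n_windows, T, N).sum(axis=1)
print(patterns.shape)  # (L/T, N): one learning pattern per window
```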

English Introduction

In recent years a powerful set of tools for the study of complexity has been developed by physicists and applied to economic and social systems; among the several topics under investigation, the quantitative estimation and management of several typologies of risk [1], such as financial risk [2], [3], [4], [5] and [6] and operational risk [7] and [8], has recently emerged. Operational Risk (OR) is defined as “the risk of [money] loss resulting from inadequate or failed internal processes, people and systems or from external events” [9], including legal risk but excluding strategic and reputation-linked risks. Since it depends on a family of heterogeneous causes, in the past only a few banks dealt with OR management. Starting from 2005, the approval of “The New Basel Capital Accord” (Basel II) has substantially changed this picture: OR is now considered a critical risk factor and banks are required to cope with it by setting aside a certain capital charge.

Basel II proposes three methods to determine this capital: (i) the Basic Indicator Approach sets it to 15% of the bank’s gross income; (ii) the Standardized Approach is a simple generalization of the Basic Indicator Approach: the percentage of the gross income is different for each Business Line and varies between 12% and 18%; (iii) the Advanced Measurement Approach (AMA) allows each bank to use an internally developed procedure to estimate the impact of OR. Both the Basic Indicator Approach and the Standardized Approach seem overly simplistic, since they implicitly assume that the exposure of a bank to operational losses is proportional to its size. On the other hand, an AMA not only helps a bank set aside the required capital charge, but may even allow OR management, with the prospect of limiting the amount of future losses.

Each AMA has to take into account two types of historical operational losses: the internal ones, collected by the bank itself, and the external ones, which may belong to a database shared among several banks. Nevertheless, because interest in OR is recent, only small and not adequately accurate historical databases exist, which is why each AMA is also required to use assessment data produced by experts. In addition, Basel II provides a classification of operational losses into 8 Business Lines and 7 Loss Event Types which has to be shared by all AMAs. Finally, AMAs usually identify the capital charge with the Value-at-Risk (VaR) over a time horizon of 1 year at a confidence level of 99.9%, defined as the maximum potential loss not to be exceeded in 1 year with 99.9% confidence, i.e. the 99.9th percentile of the yearly loss distribution; this implies that the probability of registering a yearly loss less than or equal to the VaR is 0.999 or, equivalently, that a loss larger than the VaR may occur on average once every 1000 years.

Among the AMA methods, the most widely used is the Loss Distribution Approach (LDA). In LDA, the frequency distribution and the impact (severity) distribution modeling the operational losses are studied separately for each of the 56 pairs (Business Line, Loss Event Type). LDA makes two crucial assumptions: (i) the frequency and severity distributions are independent within each pair; (ii) the distributions of each pair are independent of the distributions of all the other pairs. In other words, LDA neglects the correlations that may exist between the frequency or the severity of the losses occurring in different pairs.
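As a concrete illustration of the LDA machinery just described, the sketch below compounds a frequency distribution and a severity distribution for a single (Business Line, Loss Event Type) cell by Monte Carlo and reads the capital charge off as the 99.9th percentile of the simulated yearly loss distribution. The Poisson/lognormal choice and all parameter values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 100_000                              # number of simulated years

# Frequency: number of loss events per year (illustrative Poisson assumption).
counts = rng.poisson(lam=25.0, size=n_years)

# Severity: size of each individual loss (illustrative lognormal assumption);
# the yearly loss is the sum of the single-event losses of that year.
yearly_loss = np.array([
    rng.lognormal(mean=8.0, sigma=1.5, size=n).sum() for n in counts
])

# Capital charge = 1-year VaR at the 99.9% confidence level.
var_999 = np.quantile(yearly_loss, 0.999)
print(f"LDA capital charge for this cell: {var_999:,.0f}")
```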
The idea of exploiting Bayesian Networks (BNs) to study OR has already been proposed in Refs. [10], [11], [12], [13] and [14], and various approaches are possible. BNs offer two main advantages:
• the possible correlations among different bank processes can be captured;
• the information contained in both assessments and historical loss data can be merged in a natural way.

One approach [15] and [16] is to design a completely different network for each bank process, trying to determine the relevant variables (in the context of each process) and the causal relationships among them; this kind of network has only one output node, which typically represents the loss distribution of the process under investigation. The correlations among different processes can then be captured by building a “super-network” which contains all the networks built for the single processes and in which the nodes representing the loss distributions of the processes may be connected by links. Since it deals with the variables governing the underlying dynamics of the bank, this approach seems the most convincing; nevertheless, it suffers from some drawbacks as regards the practical implementation inside a bank: (i) domain experts are needed for each process in order to properly identify the variables and to define the topology of each network; (ii) if historical data are to be used, a system monitoring all the included variables with acceptable frequency and accuracy has to be built; since this kind of network can easily reach large sizes (tens of variables), managing such systems is quite challenging and resource demanding for a mid- or small-sized bank.

A simpler approach [17] is to design a single network composed of one node per process, each node representing that process’s loss distribution; all nodes are output nodes and the operational losses are sufficient to build a historical database, so collecting and managing the data is much easier. In comparison with the previous approach, even the experts’ task becomes simpler, since their assessment reduces to an estimate of some parameters of the loss distributions; the correlations among different processes are captured through the topology of the network. This approach resembles a way of reasoning typical of the field of Complex Systems: the information carried by the “microscopic” degrees of freedom (the relevant variables identified in the first approach) is integrated out, and the state of the system is represented by some “macroscopic” quantity (the loss distribution in the second approach). As regards the practical implementation inside a bank, the difference between the two approaches is huge: in the first approach tens of variables per process need to be monitored, while in the second only one variable per process (the registered losses) does; considering that an AMA-oriented bank has to track its own internal losses in any case, the cost of the proposed implementation is minimal.
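The second, "one node per process" approach lends itself to a compact sketch: the aggregated loss of each process is discretized into a few bands and both the network topology and the conditional probability tables are learned from the data alone, with no expert assessments. The code below is a rough illustration under our own assumptions (synthetic correlated losses, quantile binning, the pgmpy library with its score-based hill-climbing search, whose API is version-dependent); it is not the learning algorithm developed in the paper.

```python
import numpy as np
import pandas as pd
from pgmpy.estimators import BayesianEstimator, BicScore, HillClimbSearch
from pgmpy.models import BayesianNetwork

rng = np.random.default_rng(0)
n_patterns = 500                                   # number of aggregated-loss records (L/T)

# Synthetic aggregated losses for three processes; P2 is partly driven by P1.
p1 = rng.lognormal(2.0, 0.8, n_patterns)
p2 = 0.6 * p1 + rng.lognormal(1.5, 0.5, n_patterns)
p3 = rng.lognormal(1.8, 0.7, n_patterns)
data = pd.DataFrame({"P1": p1, "P2": p2, "P3": p3})

# Discretize each node into low / medium / high loss bands.
binned = data.apply(lambda c: pd.qcut(c, 3, labels=["low", "med", "high"]))

# Learn the topology (score-based search) and then the parameters from data only.
dag = HillClimbSearch(binned).estimate(scoring_method=BicScore(binned))
model = BayesianNetwork(dag.edges())
model.fit(binned, estimator=BayesianEstimator, prior_type="BDeu")
print(model.edges())                               # learned links between processes
```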

English Conclusion

A novel approach, based on Bayesian Networks, has been proposed for the quantitative management of Operational Risk in the framework of The New Basel Capital Accord. The main advantage of every BN approach over the other AMAs (like the widely used LDA) is the possibility of capturing the correlations among different bank processes; however, as shown in Section 3, the different-times correlations play a significant role and are by no means negligible with respect to the same-time correlations, yet (at least to the best of our knowledge) no other approach takes them into account. The need to deal with different-times correlations leads us to propose a solution to the problem of learning a BN from a time-ordered set of operational losses (see Sections 3 and 4).

The proposed approach is validated by means of synthetic data for two reasons. The first is methodological: in this way it is possible to generate data containing exactly the information that the algorithm should extract. The second is practical: regulatory laws on Operational Risk have existed for a relatively short period (see Section 1) and it is very difficult to obtain “experimental” data on operational losses that are accurate and reasonable in size. At the moment the number of processes is limited to N=3 for computational reasons: the stochastic algorithm described in Section 5 has to explore a configuration space of size (L!)^(N^2). On the one hand, keeping L low would imply a lower accuracy of the learning procedure (recall from Section 3 that the number of patterns used for the learning is L/T), and this is not acceptable from the point of view of the validation of the proposed approach; this is especially true for the higher values of T, those for which the approach proves to be consistent. On the other hand, since our algorithm makes no assumption about the value of N, the obtained results should not depend on it, at least qualitatively.

The principal features of the proposed approach are the following:
(1) The whole topology of the network is derived from data of operational losses; each node in the network corresponds to a bank process and the links between the nodes, which are drawn by learning from data, model the causal relationships between the processes. This scheme seems more flexible than the classification into 56 pairs (Business Line, Loss Event Type) prescribed by Basel II and has the advantage of representing both the units that generate operational losses and the relationships between them. It has to be pointed out, however, that the decision not to use the assessments of domain experts is motivated by the need to carefully study the correlations present in the historical data: as hinted in Section 1, one of the possible developments is to take the assessments into account.
(2) For the first time a Bayesian Network is used to represent the influence between correlated operational losses that take place on different days, exploiting a dataset whose records represent losses occurred over T days: with such a dataset, the nodes in the network represent the aggregate loss over T days and the VaR over a time horizon T can be computed. The extension to the VaR over the time horizon L requires an additional assumption (see Section 4) and is performed by convoluting the probability density functions L/T times and extracting the 99.9th percentile of the convoluted distribution (a sketch of this convolution step is given after this list).
(3) The proposed approach is tailored for a practical implementation inside a mid- or small-sized bank: since the network contains only nodes representing the loss distributions over some time horizon, only the losses occurring in the different processes have to be monitored.
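The convolution step referred to in point (2) can be sketched in a few lines: starting from a discretized aggregate-loss density over a T-day window, the density over the full horizon L is obtained by convolving it with itself L/T times (under the independence assumption mentioned above), and the VaR is the 99.9th percentile of the result. The binning, the placeholder density and the L/T value below are illustrative assumptions of ours; in the actual approach the T-window density would come from the learned network.

```python
import numpy as np

bin_width = 100.0                                   # loss discretization step (illustrative)
pdf_T = np.array([0.55, 0.25, 0.12, 0.05, 0.02, 0.01])  # placeholder P(loss bin) over T days
n_conv = 12                                         # L/T, e.g. L = 360 days with T = 30 days

# Convolve the T-window density with itself L/T - 1 more times to obtain the
# density of the sum of L/T independent T-window losses, i.e. the L-horizon loss.
pdf_L = pdf_T.copy()
for _ in range(n_conv - 1):
    pdf_L = np.convolve(pdf_L, pdf_T)

cdf_L = np.cumsum(pdf_L)
var_999 = np.searchsorted(cdf_L, 0.999) * bin_width  # 99.9th percentile of the L-horizon loss
print(f"VaR(99.9%) over the horizon L: {var_999:.0f}")
```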