Stochastic evaluation of life insurance contracts: model point on asset trajectories and measurement of the error related to aggregation
|Article code||Publication year||English article||Persian translation||Word count|
|24364||2012||8-page PDF||available on order||6240 words|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Insurance: Mathematics and Economics, Volume 51, Issue 3, November 2012, Pages 624–631
In this paper, we are interested in reducing the computing time of Monte-Carlo simulations for the pricing of the options embedded in life insurance contracts. We propose a very simple method which consists in grouping the trajectories of the initial asset process according to quantiles. The distance between the initial process and the discretized process is measured by the L2-norm; this L2 distance decreases as the number of trajectories of the discretized process increases. The discretized process is then used in the valuation of the life insurance contracts. We note that a wise choice of the discretized process enables us to correctly estimate the price of a European option. Finally, the error due to the valuation of a contract in euros using the discretized process can be reduced to less than 5%.
The implementation of an asset/liability management (ALM) model for the management and economic capital evaluation of life insurance contracts requires a very large volume of computations within the framework of Monte Carlo simulations. Indeed, for each trajectory of the asset, the entire liability must be simulated, because of the strong interactions between the asset and the liability through the ratchet and through the redistribution of the financial and technical results (cf. Planchet et al. (2011)). This leads to the well-known problem of nested simulations (cf. Bauer et al. (2010) and Gordy and Juneja (2008)). Various approaches have been developed to overcome the practical difficulty of implementing nested simulations, among which the most used are optimizations inspired by importance sampling (cf. Devineau and Loisel (2009)) and replicating portfolio techniques (cf. Revelen (2009), Schrager (2008) and Chauvigny and Devineau (2011)). More recently, Bauer et al. (2010) have used the LSMC approach initially proposed by Longstaff and Schwartz (2001) for the pricing of American options. However, these optimization techniques are generally conceived for the estimation of the quantile of the excess of assets over liabilities in the framework of determining the economic capital, and are not always suited to computing the best estimate of the provision. Replicating portfolio approaches are poorly adapted to the context of French life insurance contracts because of the complexity required when implementing clauses of redistribution of the discretionary financial benefit. Therefore, practitioners sometimes use a method that consists in summarizing the possible evolutions of the asset process in a limited number of characteristic trajectories. This results in a limited number of scenarios of evolution for the asset process, each of these scenarios being characterized by a probability of occurrence.
The difficulty is to build the scenarios in an optimal way in order to obtain a good approximation of the value of the provision. The objective of this paper is to propose a method to build these characteristic trajectories and to provide tools to measure the impact of this simplification on the results. We thus provide a tool for best estimate computation which can be used together with other optimization techniques. To achieve this goal in an objective manner, we propose a simple discretization of the distribution of the underlying trajectories in an L2 Hilbert space. Many papers deal with the question of the time discretization of the path of the process (see for example the work of Gobet (2003) and the numerous references therein) and the question of bias reduction. We adopt in this paper a different point of view and focus on the discretization of the distribution of the paths. More precisely, a stochastic process S such as those considered here can be viewed as a random variable in an L2 space. The probability distribution of S is in practice considered continuous. What we want to do is find a discrete probability distribution that is "not too far" from the true one. To our knowledge, few works address this topic.
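The idea above can be illustrated with a minimal sketch: simulate paths of an asset process, group them by quantiles of their terminal value (one plausible reading of the quantile-based grouping described in the paper; the authors' exact clustering rule may differ), replace each group by its pointwise mean trajectory with the corresponding probability weight, and measure the L2 distance between the original paths and their representatives. All parameters (GBM dynamics, number of paths, number of classes) are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate N geometric Brownian motion paths (hypothetical parameters).
N, n_steps, T = 10_000, 50, 5.0
mu, sigma, s0 = 0.03, 0.2, 100.0
dt = T / n_steps
z = rng.standard_normal((N, n_steps))
increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = s0 * np.exp(np.hstack([np.zeros((N, 1)), np.cumsum(increments, axis=1)]))

# Group paths into K quantile classes of their terminal value and replace
# each class by its pointwise mean path, weighted by its empirical probability.
K = 100
order = np.argsort(paths[:, -1])
groups = np.array_split(order, K)
rep_paths = np.stack([paths[g].mean(axis=0) for g in groups])
probs = np.array([len(g) / N for g in groups])

# L2 distance between each original path and its class representative,
# approximating the integral over [0, T] by a Riemann sum.
class_of = np.empty(N, dtype=int)
for k, g in enumerate(groups):
    class_of[g] = k
diff = paths - rep_paths[class_of]
l2 = np.sqrt((diff**2 * dt).sum(axis=1).mean())
print(f"mean L2 distance with K={K}: {l2:.3f}")
```

Increasing K refines the partition and, consistent with the paper's observation, makes this L2 distance decrease.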
English conclusion
In this paper we are interested in a simple technique for reducing the computing time of Monte-Carlo simulations for the pricing of the options embedded in life insurance contracts. This technique is very easy to implement: it consists in grouping together the trajectories of the initial process according to the quantiles of the distribution at each point in time. The discretized process is then used in the valuation of the life insurance contracts. We note that a wise choice of the partition of [0, +∞[ allows the correct estimation of the price of a European option. Such options arise in unit-linked life insurance contracts with a minimum death guarantee. We also show that the error due to the valuation of a contract in euros using the discretized process can be reduced to less than 5% when we replace 100,000 trajectories of the initial process by 100 trajectories of the discretized process. This error increases with the maturity of the contract but is independent of the age of the policyholder. To use this technique, it is necessary to know the distribution of the initial process: in addition to constructing the discretized trajectories, one must be able to estimate their probability of occurrence. The comparison of the sample of trajectories of the initial process with that of the discretized process shows clearly that the latter strongly underestimates the extreme values of the initial process. Thus, while the discretization technique can give good results for the TMG guarantee or the MCEV, its use in the framework of estimating extreme values (SCR, VaR, …) can lead to biased results. However, the choice of a partition whose extreme classes are strongly refined could possibly reduce these errors. This aspect was not treated in this article and could be the object of future developments.
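The European-option claim above can be checked with a small numerical sketch: price a European call by full Monte-Carlo, then again with 100 representative values obtained by quantile grouping, and compare. The GBM parameters and the strike are hypothetical, and grouping terminal values by quantile is an illustrative stand-in for the paper's construction, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate risk-neutral GBM terminal values (hypothetical parameters).
N, T = 100_000, 5.0
r, sigma, s0, strike = 0.02, 0.2, 100.0, 100.0
z = rng.standard_normal(N)
s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Full Monte-Carlo price of a European call.
disc = np.exp(-r * T)
price_mc = disc * np.maximum(s_T - strike, 0.0).mean()

# Discretized process: K representative terminal values obtained by
# averaging within quantile classes, each with its empirical probability.
K = 100
order = np.argsort(s_T)
groups = np.array_split(order, K)
rep = np.array([s_T[g].mean() for g in groups])
probs = np.array([len(g) / N for g in groups])
price_disc = disc * (probs * np.maximum(rep - strike, 0.0)).sum()

rel_err = abs(price_disc - price_mc) / price_mc
print(f"MC: {price_mc:.3f}  discretized: {price_disc:.3f}  rel. error: {rel_err:.2%}")
```

In this setup the relative error stays well within the 5% bound reported in the paper; note that within-class averaging slightly biases the convex payoff downward (Jensen's inequality), which is the same mechanism that makes the discretized process understate extreme values.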