Download English ISI Article No. 24371
Persian Translation of the Article Title

A heavy traffic approach to modeling large life insurance portfolios

English Title
A heavy traffic approach to modeling large life insurance portfolios
Article Code: 24371
Publication Year: 2013
Length: 15 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Insurance: Mathematics and Economics, Volume 53, Issue 1, July 2013, Pages 237–251

Persian Translation of Keywords
heavy traffic approach - life insurance portfolio
English Keywords
heavy traffic approach, life insurance portfolio

English Abstract

We explore a new framework to approximate life insurance risk processes in the scenario of plentiful policyholders, via a bottom-up approach. Given the insurance contract structure, we aggregate the balances of individual policy accounts and derive an approximating Gaussian process with a computable correlation structure. The methodology is borrowed from heavy traffic theory in the literature on many-server queues, and involves the so-called fluid and diffusion approximations. Our framework differs from the individual risk model in that it takes into account the time dimension and the specific policy structure, including the premium payments. It also differs from classical risk theory in that it builds the risk process from micro-level contracts and parameters instead of assuming aggregated claim and premium processes outright. As a result, our approximating process behaves differently depending on the issued contract structure. We also illustrate the flexibility of our approach by formulating a finite-horizon ruin problem that takes the actuarial reserve into consideration.

English Introduction

The study of risk processes is a central topic in actuarial science. Most of the literature focuses on the calculation of ruin probabilities and deficits (or overshoots) at the time of ruin, as well as the optimal control of premiums, reinsurance levels, and investment allocation. These questions have been studied under a variety of stochastic settings, from the classical Cramer–Lundberg approximation to diffusion processes. The central theme is that random-walk-type models, with a negatively drifted premium process and a jump process of claims, provide a rich framework that allows plenty of extensions, modifications and problem formulations (see, for example, Asmussen and Albrecher, 2010 for a survey of ruin probability calculations, and Schmidli, 2008 for the counterpart in stochastic control problems).

In this paper, we take a different view from the existing literature. Rather than focusing on the computation of risk-related quantities, we explore the construction of the risk process itself. Our approach is bottom-up: given the structure and parameters of the individual insurance contracts, what does the risk process of the insurer look like on an aggregate scale? Naturally, the risk process under this framework is the sum of all the individual accounts, i.e. the balances of policyholders who entered contracts with the insurer over time. For actuaries, this points to the standard one-period individual and collective risk models. However, these standard models do not consider the time dimension, which in turn restrains their power to capture the specific contract structure involved, e.g. the premium payments. In this regard, our work can be seen as a generalization of the standard risk models to a process-level approximation.

Of course, merely summing all individual accounts might yield an unpleasant process that is hardly computable. To tackle this issue, we borrow techniques from so-called heavy traffic theory in the queueing literature. The basic idea is that under the assumption of a large number of customers or policyholders, one can approximate functionals of these policyholders' statuses using fluid and diffusion approximations. In the statistics literature, these correspond to stochastic-process versions of the Law of Large Numbers and the Central Limit Theorem. Given the sheer scale of major insurance companies, the assumption of plentiful policyholders is sensible, and so these approximation techniques can be used. As we will see, these heavy traffic approximations lead to a Gaussian process that is as analyzable as many standard processes used in the current risk theory literature. In particular, the correlation structure of this Gaussian process is explicitly computable given the contract structure (see Section 4). To illustrate our argument on tractability, we formulate a finite-horizon ruin problem based on our Gaussian approximation (see Section 3).

We distinguish our contribution from classical risk theory and standard actuarial risk models in a few ways. First, our model explains how individual insurance policies lead to certain features of the aggregate risk process. The construction of our risk process depends intricately on the premium and benefit structure of single policies. This means that different types of insurance, such as whole life insurance, term life, endowment etc., lead to different correlation structures in our resulting Gaussian process.
This is in sharp contrast to the current models in risk theory, where premium and claim processes are modeled separately, as a drifted random walk (or its variants) and a marked point process respectively. This feature can potentially provide a framework to analyze the effect of contract structure on the firm-wide risk level. Second, our model naturally allows the incorporation of the actuarial reserve in our approximation. Indeed, the finite-horizon ruin problem that we formulate in Section 3 will involve the calculation of the prospective reserve. Third, since serial correlation is explicitly computable, this provides a way to capture the fluctuation of our approximating process over time, which is potentially applicable to dynamically monitoring mismatch on the insurer's balance sheet with regard to statistical error.

In a more organized fashion, we summarize our contributions as follows:

(A) Under the assumption of a large number of policyholders, we construct the fluid limit and diffusion limit for the aggregate risk processes. (As mentioned, these correspond to the functional Law of Large Numbers and Central Limit Theorem respectively in the statistics community; throughout the paper we mostly use the former terminology to align with the queueing literature, but will also use the latter interchangeably when necessary.) The risk processes that we are interested in include the insurer's cash level, liabilities, and per-basis reserve level. These will be discussed in Section 2. We prove and numerically demonstrate that these risk processes can be approximated by Gaussian processes with certain correlation structures.

(B) Using the theory of Gaussian processes, we illustrate how our result can be used to approximate ruin probabilities. We model ruin as the situation in which the liabilities surpass the assets (plus the initial capital) within a given time horizon (see Section 3.1). This highlights the flexibility of our methodology in incorporating reserve calculation, as well as its dependency on the underlying insurance contracts. In particular, we apply our results to several common types of insurance.

(C) Our diffusion approximation shows how, under the Equivalence Principle, the benefit reserve arises as the fluid limit of the empirical cash level per basis at any point in time (see Section 2). These results, we believe, provide a useful perspective on the basic concepts underlying the definition of the benefit reserve; see the discussion following Theorem 1.

(D) We compute the correlation structures of our limiting processes, thereby showing their tractability. In particular, we illustrate how our approach allows us to evaluate and compare the autocorrelation (as a function of time) of risk processes of different insurance types; see Section 4.

Let us emphasize that our purpose in applications such as (B) and (C) is to illustrate the concepts behind our ideas, and hence the models we use in this paper are basic. There are certainly many practical considerations that would make the model more realistic. We list these generalizations and more realistic extensions, which we believe are worth pursuing, in Section 5. In terms of methodology, as mentioned, we will primarily invoke the machinery of heavy traffic theory, i.e. fluid and diffusion approximations, from the queueing literature.
The ideas date back to Kingman, 1961 and Kingman, 1962 for single-server queues, and they still constitute an active research area among queueing theorists (see the standard references of Whitt, 2002 and Billingsley, 1999 for instance). Under fairly mild assumptions, these tools significantly simplify and single out the important elements of the system dynamics of interest, and provide approximate solutions to many important performance measures (in our context, the ruin probability mentioned in (B) constitutes one such example).

More precisely, the results in this paper relate to the analysis of so-called many-server queues, which have been substantially studied in recent years. In these queueing systems, customers arrive and receive service for a random amount of time, as long as there are available servers. When the number of servers is infinite, every customer can start service right at arrival. Connecting to our work, policyholders can be thought of as customers in the queueing system. While the feature of arrivals is not our focus in this paper, the death time of a policyholder is analogous to the end of service, and hence the approximation technique is translatable. Some relevant references on the topic include Pang and Whitt (2010) and Decreusefond and Moyal (2008), which focus on infinite-server models; Halfin and Whitt (1981), Kaspi and Ramanan (2010) and Reed (2009), which study finite but large numbers of servers in different proportions (or so-called regimes) to the number of customers; Puhalskii and Reiman (2000), which studies queues with multiclass customers; and Dai et al. (2010), on queues with reneging. The common theme of all these works is that the heavy traffic technique is applicable to various features of the queues.

Finally, we discuss two papers that use a similar approach and highlight our differences. One is a recent working paper by Bensusan and El Karoui (2009), who propose a microstructural approach to model population dynamics in order to capture mortality/longevity risk. Their motivation is different from ours: rather than building the mortality distribution microstructurally, we make common assumptions on mortality; our focus is instead on how this mortality assumption, through its interaction with the contract structure, benefit level and premium calculation, leads to macroscopic fluctuations of total assets, liabilities and other actuarial quantities. Secondly, we note that the diffusion approximation was invoked by Iglehart (1969) in arguing for the use of Brownian motion in modeling the insurance risk process. However, he maintained a Cramer–Lundberg framework by assuming compound Poisson claims and a constantly drifted premium, and showed that under certain scaling their difference converges to a diffusion process. Contract structure, the relation between premium and benefit, actuarial reserves etc. were not considered in his work.

The organization of this paper is as follows. In Section 1 we lay out our model assumptions and define the key quantities that we approximate. Section 2 is devoted to the statement of our main result and its discussion. Section 3 relates to applications in ruin probability computations and shows some examples. Section 4 identifies the autocorrelation structure of our approximating Gaussian processes. Section 5 discusses some extensions. The Appendix is divided into two parts.
The first part discusses basic facts about heavy traffic limit theorems and gives the proof of our main result; the second part discusses the simulation methodology used to generate the various examples in this paper.
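Before moving on, a small self-contained sketch may help make the fluid and diffusion approximations concrete. The code below is our own illustration, not the paper's: it takes the simplest possible aggregate, the death-counting process of n policyholders with i.i.d. lifetimes (assumed uniform on [0, T] purely for convenience), and exhibits the functional LLN and CLT scalings numerically.

```python
# Minimal sketch (illustrative, not from the paper): fluid and diffusion
# approximations for the death-counting process N_n(t) of n policyholders
# with i.i.d. lifetimes. The functional LLN gives N_n(t)/n -> F(t); the
# functional CLT gives sqrt(n)*(N_n(t)/n - F(t)) => W0(F(t)), a Brownian
# bridge evaluated at the lifetime c.d.f. Uniform lifetimes on [0, T] are
# an assumption made purely for simplicity.
import numpy as np

rng = np.random.default_rng(0)
n, T = 10_000, 50.0
deaths = rng.uniform(0.0, T, size=n)                   # i.i.d. death times

grid = np.linspace(0.0, T, 201)
N_n = (deaths[None, :] <= grid[:, None]).sum(axis=1)   # death-counting process
F = grid / T                                           # c.d.f. of the uniform lifetime

fluid_error = np.max(np.abs(N_n / n - F))              # shrinks like 1/sqrt(n)
diffusion = np.sqrt(n) * (N_n / n - F)                 # one path of approx. W0(F(t))

# Under the bridge limit, Var[W0(F(t))] = F(t)(1 - F(t)): zero at both ends
# of the horizon and maximal in the middle.
mid = len(grid) // 2
print(f"fluid error sup_t |N_n(t)/n - F(t)| = {fluid_error:.4f}")
print(f"scaled fluctuation at t = T/2: {diffusion[mid]:+.3f} "
      f"(limit std = {np.sqrt(F[mid] * (1 - F[mid])):.3f})")
```

The bridge structure of the limit (fluctuations vanishing at both ends of the horizon) is exactly the behavior of the approximating processes discussed in the results below.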

English Conclusion

In order to describe our results we need to recall the definition of the Brownian bridge, an important process obtained by conditioning the value of a Brownian motion at time 1. We write \((W_0(t),\, 0 \le t \le 1)\) for a Brownian bridge. It turns out that \(W_0(t)\) is equal in distribution to \(W(t) - tW(1)\), where \(W(t)\) is a Brownian motion. It is also the unique Gaussian process with mean 0 and covariance function \(\operatorname{Cov}(W_0(s), W_0(t)) = s(1-t)\) for \(s \le t\). This implies the following identities in distribution (for the whole stochastic processes):

\[
W_0(F(\cdot)) \overset{D}{=} W(F(\cdot)) - F(\cdot)\,W(1) \overset{D}{=} \int_0^{\cdot} \sqrt{f(s)}\,dW(s) - F(\cdot)\int_0^T \sqrt{f(s)}\,dW(s). \tag{6}
\]

See, for example, Steele (2001) and Karatzas and Shreve (2008).

We are now ready to state and discuss our results. They are formulated in terms of weak convergence in a useful topology on spaces of functions, called the Skorokhod topology. This topology and its preliminary theorems are discussed in the Appendix. Our main result provides a joint approximation to the Total Cash Process, the Total Reserve Process and the Average Cash Process. The proof is given in the Appendix.

Theorem 1. Assume that the Equivalence Principle holds, and therefore that the identity (4) is in force. Regarding \((C_n(\cdot), \bar{C}_n(\cdot), V_n(\cdot))\) as elements of \(D[0,T] \times D[0,T] \times D[0,T-\epsilon]\) for any \(\epsilon \in (0,T)\), equipped with the Skorokhod product topology, we have that

\[
\big(C_n(\cdot)/n,\ \bar{C}_n(\cdot)/n,\ V_n(\cdot)\big) \Rightarrow \big(m(\cdot),\ \bar{m}(\cdot),\ V(\cdot)\big) \tag{7}
\]

as \(n \to \infty\). Moreover,

\[
\big(\sqrt{n}\,(C_n(\cdot)/n - m(\cdot)),\ \sqrt{n}\,(\bar{C}_n(\cdot)/n - \bar{m}(\cdot)),\ \sqrt{n}\,(V_n(\cdot) - V(\cdot))\big)
\Rightarrow
\left( e^{\delta t}\left[\int_0^t H(s)\,dW_0(F(s)) - P(t)\,W_0(F(t))\right],\
W_0(F(t))\,e^{\delta t}\left[\int_t^T H(s)\,f_t(s)\,ds - P(t)\right],\
\frac{e^{\delta t}}{\bar{F}(t)}\left[\int_0^t H(s)\,dW_0(F(s)) - P(t)\,W_0(F(t))\right] + \frac{V(t)}{\bar{F}(t)}\,W_0(F(t)) \right) \tag{8}
\]

as \(n \to \infty\).

The \(\epsilon > 0\) in the theorem is there to avoid division by zero at time \(T\). The approximation in (8) suggests that when \(n\) is large, the Total Cash Process can be approximated by

\[
C_n(t) \approx n\,m(t) + \sqrt{n}\,e^{\delta t}\left[\int_0^t H(s)\,dW_0(F(s)) - P(t)\,W_0(F(t))\right]. \tag{9}
\]

Simultaneously, the Total Reserve Process admits the approximation

\[
\bar{C}_n(t) \approx n\,\bar{m}(t) + \sqrt{n}\,W_0(F(t))\,e^{\delta t}\left[\int_t^T H(s)\,f_t(s)\,ds - P(t)\right], \tag{10}
\]

and the Average Cash Process is approximated by

\[
V_n(t) \approx V(t) + \frac{1}{\sqrt{n}}\left\{\frac{e^{\delta t}}{\bar{F}(t)}\left[\int_0^t H(s)\,dW_0(F(s)) - P(t)\,W_0(F(t))\right] + \frac{V(t)}{\bar{F}(t)}\,W_0(F(t))\right\}. \tag{11}
\]

The first two processes can be interpreted as the insurer's total assets and total liabilities respectively. The fluctuation around the average in these processes is smallest at the two ends of the time horizon, namely at time 0 and at time \(T\), since we know for sure that there are 0 and \(n\) decrements respectively; the fluctuations become larger in the middle of the time range. The maximum fluctuation of the net asset process, obtained as the difference of the Total Cash Process and the Total Reserve Process, occurs at a time \(t^*\) which is characterized in Section 3.1. The approximation (8) is joint in function space, so thanks to the continuous mapping principle (Theorem 2 in the Appendix) we can approximate the distribution of any whole (continuous) functional of the sample paths \(C_n(\cdot)\) and \(\bar{C}_n(\cdot)\). This is precisely the significance of the previous result.
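The Brownian bridge facts quoted above are easy to verify numerically. The following sketch is our own check, under an assumed discretization of [0, 1]: it builds bridge paths via the construction \(W(t) - tW(1)\) and compares the empirical covariance with \(s(1-t)\).

```python
# Numerical check (illustrative) of the Brownian bridge identities quoted
# above: W0(t) =_D W(t) - t*W(1), with Cov(W0(s), W0(t)) = s(1 - t), s <= t.
import numpy as np

rng = np.random.default_rng(1)
paths, steps = 5_000, 500
dt = 1.0 / steps
t = np.linspace(dt, 1.0, steps)               # time grid: t[i] = (i + 1) * dt

# Brownian motion paths via cumulative sums of Gaussian increments.
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, steps)), axis=1)
W0 = W - t * W[:, [-1]]                       # bridge: W(t) - t * W(1)

s_idx, t_idx = steps // 4 - 1, 3 * steps // 4 - 1   # s = 0.25, t = 0.75
emp_cov = np.mean(W0[:, s_idx] * W0[:, t_idx])      # both marginals have mean 0
print(f"empirical Cov(W0(0.25), W0(0.75)) = {emp_cov:.4f}")
print(f"theoretical s*(1 - t)             = {0.25 * (1 - 0.75):.4f}")
```

The same construction, evaluated at \(F(t)\) instead of \(t\), yields sample paths of the time-changed bridge \(W_0(F(\cdot))\) that drives the limits in (8).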
As a particular application, we will show in the next section how to exploit the continuous mapping principle to estimate ruin probabilities under different types of life insurance contracts. In Section 4 we will provide closed-form formulas for the joint correlation of the limiting Gaussian processes on the right hand side of (8), thereby fully characterizing the whole asymptotic distribution of assets and liabilities across time.

The approximation dictated by the third component, namely \(V_n(\cdot)\), provides a link between our stochastic formulation for a large pool of policyholders and the classical reserve evaluation \(V(t)\). It also provides support for the use of the Equivalence Principle from a micro-structural perspective. In particular, we show that under the Equivalence Principle the individual cash accounts fluctuate around the benefit reserve as the number of policyholders increases. Moreover, the result provides a Central Limit Theorem correction. We envision that our results in this section are potentially useful for evaluating in practice whether the difference between assets and liabilities on the balance sheet is within normal statistical error, although such an application certainly requires being able to include other stylized features (such as investments in risky assets and so forth), which we plan to investigate in the future.

To illustrate Theorem 1 and the approximations (9), (10) and (11), consider a batch of \(n = 1000\) policyholders, each with a mortality distribution following a mixture of uniform distributions. Such a mixture distribution arises from linear interpolation of the life table (see Bowers et al., 1997). For illustration, consider \(T = 50\), i.e. a maximum life span of 50 years from the present, and a monthly precision of the life table, \(k = 50 \times 12\). More specifically, the density of the death random variable is given by

\[
f(t) = \sum_{j=1}^{k} p(j)\,\frac{k}{T}\, I\!\left(\frac{T}{k}(j-1) \le t < \frac{T}{k}\,j\right).
\]

We further assume that \(p(j)\) follows a discretized Gompertz law given by

\[
p(j) \propto e^{-e^{0.001(j-1)}} - e^{-e^{0.001 j}}, \quad j = 1, \ldots, k.
\]

Moreover, we assume that \(\delta = 0.01\).

The following graphs compare each of our approximations (9), (10) and (11) to the actual dynamics. Fig. 1(a) shows 100 sample paths of \(C_n(t)\), i.e. the Total Cash Process generated by \(n = 1000\) policyholders with the aforementioned mortality distribution. Fig. 1(b) shows 100 sample paths of our Gaussian approximation, namely the right hand side of (9), obtained by generating a sequence of Gaussian random variables. As shown in the graphs, the sample paths of the actual and the approximate processes behave very similarly. Along the same lines, Fig. 2(a) shows 100 sample paths of \(\bar{C}_n(t)\), i.e. the Total Reserve Process of 1000 policyholders, whereas Fig. 2(b) shows the counterpart using our approximation on the right hand side of (10). Again, the sample paths of the actual and approximate processes behave similarly. Fig. 3(a) shows 100 sample paths of \(V_n(t)\), the Average Cash Process, together with the value of the benefit reserve \(V(t)\). The sample paths of \(V_n(t)\) center around \(V(t)\), which is guaranteed by the Law of Large Numbers (7) in Theorem 1. Moreover, we can approximate \(V_n(t)\) by our Gaussian approximation in (11). This is illustrated by Fig. 3(b), which again shows 100 sample paths of the right hand side of (11) using Gaussian random variables.
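The mortality ingredients of this example are straightforward to reproduce. The sketch below is ours, based on the formulas as reconstructed above: it computes the discretized Gompertz weights p(j), normalizes away the proportionality constant, and samples death times from the resulting mixture of uniforms on the k monthly sub-intervals.

```python
# Sketch (ours) of the illustrative mortality model described above:
# p(j) follows a discretized Gompertz law and the death density f(t) is a
# mixture of uniforms on the k = 50*12 monthly sub-intervals of [0, T].
import numpy as np

rng = np.random.default_rng(2)
T, k = 50.0, 50 * 12

j = np.arange(1, k + 1)
p = np.exp(-np.exp(0.001 * (j - 1))) - np.exp(-np.exp(0.001 * j))
p /= p.sum()                            # normalize the proportionality constant

def sample_death_times(n):
    """Draw death times: pick a monthly interval with probability p(j),
    then a uniform point inside it (linear interpolation of the life table)."""
    cells = rng.choice(k, size=n, p=p)  # 0-based interval index
    return (cells + rng.uniform(size=n)) * (T / k)

deaths = sample_death_times(1000)       # the n = 1000 policyholders of the example
print(f"mean death time: {deaths.mean():.2f} years")
```

Sampling an interval first and then a uniform point within it is exactly the mixture-of-uniforms reading of the linearly interpolated life table mentioned above.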
From both Fig. 3(a) and (b) we see that the fluctuation of the Average Cash Process around the deterministic benefit reserve grows as time approaches \(T\). In both figures, the Average Cash Process is close to the reserve only before \(t = 30\). This phenomenon occurs because the Average Cash Process involves dividing by the number of survivors in the portfolio, and as time goes on, this number decreases to 0 almost surely. As a result, the process can attain very small or very large values. It is worth noting that this phenomenon has nothing to do with our approximation, i.e. it is an intrinsic property of the Average Cash Process itself, following from its definition laid out in Section 1 (as can be seen from the similar behavior of Fig. 3(a) and (b)). Moreover, Theorem 1 makes clear that the functional Law of Large Numbers and Central Limit Theorem work for the Average Cash Process only on \([0, T-\epsilon]\) for some prefixed \(\epsilon > 0\), hence excluding the period of time close to \(T\). We explain in Appendix A.2 how to implement the simulation procedure that generates the approximations (9), (10) and (11).
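The degeneration near \(T\) can be seen directly from the \(1/\bar{F}(t)\) factor in the fluctuation term of (11). A toy computation (ours, with a uniform lifetime model standing in for the Gompertz mixture) makes the mechanism explicit: the survivor count decays toward zero, so the correction scale blows up as \(t\) approaches \(T\).

```python
# Toy illustration (ours, not the paper's simulation) of why the Average
# Cash Process degenerates near T: per-survivor quantities divide by the
# survivor fraction F_bar(t), which vanishes at T, so the fluctuation scale
# (1/sqrt(n)) * 1/F_bar(t) in (11) blows up. Uniform lifetimes on [0, T]
# are an assumed stand-in for the mortality model of the example.
import numpy as np

rng = np.random.default_rng(3)
n, T = 1000, 50.0
deaths = rng.uniform(0.0, T, size=n)       # i.i.d. toy death times

for t in (10.0, 30.0, 45.0, 49.5):
    survivors = int((deaths > t).sum())    # empirical survivor count
    F_bar = 1.0 - t / T                    # survival function of the toy model
    scale = 1.0 / (np.sqrt(n) * F_bar)     # order of the correction in (11)
    print(f"t = {t:4.1f}: survivors = {survivors:4d}, "
          f"fluctuation scale ~ {scale:.3f}")
```

This is consistent with restricting the functional limit theorems for the Average Cash Process to \([0, T-\epsilon]\), as Theorem 1 requires.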