The Pollaczeck-Khinchine Formula
==================================

Objectives: Illustrate and compute the Pollaczeck-Khinchine formula for the probability of eventual ruin of a compound Poisson process.

Audience: Advanced users.

Prerequisites: Advanced risk theory.


Classical Risk Theory and the Pollaczeck-Khinchine Formula
------------------------------------------------------------

The Pollaczeck-Khinchine formula determines the probability of eventual ruin in a portfolio where claims are driven by a compound Poisson process, in terms of starting surplus and the premium rate. Losses are generated by a Poisson process with \lambda annual expected claims and iid severity X. Losses up to time t are given by

.. math::

    A(t) = X_1 + \cdots + X_{N(t)},

where N(t) is Poisson with mean \lambda t. Expected loss per year equals \lambda\mathsf{E}[X]. Premium per year equals (1+r)\lambda \mathsf{E}[X] where r is the ratio of profit to expected loss. The corresponding expected loss ratio is 1/(1+r). If r\le 0 then eventual ruin is certain, so assume r>0.

Define the integrated severity distribution by

.. math::

    F_I(x) &= \frac{1}{\mathsf{E}[X]}\int_0^x S(t)\,dt \\
    &= \frac{\mathsf{LEV}(x)}{\mathsf{E}[X]} \\
    &= 1 - \frac{\mathsf{E}[(X-x)^+]}{\mathsf{E}[X]}

where S is the survival function of X. F_I is a thicker-tailed distribution than F. Let

.. math::

    U_{u,r}(t) = u + (1+r)\lambda\mathsf{E}[X]t - A(t)

denote accumulated surplus to time t given starting surplus u. U is called the surplus process. Finally, let

.. math::

    \psi(u, r) = \Pr(U_{u,r}(t) < 0\ \text{for some}\ t\ge 0)

be the probability of eventual ruin.
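The surplus process can be simulated directly as a concrete check of these definitions. The sketch below uses purely illustrative parameters, not taken from the text: exponential severity with mean 1, \lambda=1, r=0.2 and u=5. For exponential severity the eventual ruin probability has the classical closed form \psi(u)=e^{-ru/((1+r)\mathsf{E}[X])}/(1+r) to compare against.

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative parameters (not from the text): exponential severity with
# mean 1, one expected claim per year, margin r = 0.2, starting surplus u = 5
lam, mean_x, r, u = 1.0, 1.0, 0.2, 5.0
premium_rate = (1 + r) * lam * mean_x

def path_ruins(horizon=200.0):
    """Simulate one surplus path; ruin can only occur at a claim instant."""
    t = losses = 0.0
    while True:
        t += rng.exponential(1 / lam)        # waiting time to next claim
        if t > horizon:
            return False                     # survived the horizon
        losses += rng.exponential(mean_x)    # severity of this claim
        if u + premium_rate * t - losses < 0:
            return True                      # ruin at this claim

n = 5000
psi_mc = sum(path_ruins() for _ in range(n)) / n
psi_exact = np.exp(-r * u / ((1 + r) * mean_x)) / (1 + r)  # exponential case
print(f'simulated {psi_mc:.3f} vs closed form {psi_exact:.3f}')
```

The finite horizon slightly understates eventual ruin, but with positive drift the surplus is so large by the end of the horizon that the remaining ruin probability is negligible.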

The Pollaczeck-Khinchine formula says that

.. math::

    \psi(u, r) = 1 - \frac{r}{1+r}\sum_{n\ge 0} (1+r)^{-n}F_I^{n*}(u)

where F_I^{n*} is the distribution of the sum of n independent variables, each with distribution F_I. Note that

.. math::

    G(z) = \frac{r}{1+r}\sum_{n\ge 0} \frac{z^n}{(1+r)^n} = \frac{r}{1+r-z}

is the probability generating function of a geometric distribution M with mean 1/r and \Pr(M=m)=\frac{r}{1+r}\frac{1}{(1+r)^m}. Therefore \psi(u)=\Pr(Y > u) where Y is an aggregate distribution with frequency M and severity F_I. A surprising consequence is that the probability of eventual ruin starting with no surplus, \psi(0)=1-\Pr(Y=0)=1-\Pr(M=0)=\frac{1}{1+r} equals the expected loss ratio!

:cite:t:`Embrechts1997` Section 1.2 shows how to derive the Pollaczeck-Khinchine formula. The key step is to determine the distribution of X-(1+r)T where T is the exponential waiting time between claims, and to observe that ruin can occur only at the moment of a claim.

The Pollaczeck-Khinchine formula gives combinations of u and r that are consistent with a top-down stability requirement expressed as a target probability of eventual ruin. Overlaying a cost of capital provides a link between r and u that determines a minimum viable market size. An example of this method is given below.

Because eventual ruin is the same whether time is measured in days, weeks, or years, \psi_{X,r}(u) is independent of the expected claim count \lambda. In units of time 1/\lambda, all portfolios have an expected claim count of one. Therefore \psi^{-1}(p) gives a capital requirement (risk measure) that is a function of severity and not frequency, i.e., it is independent of portfolio size. Unlike most risk measures, it does not regard small portfolios as riskier than large ones.

The Cramer-Lundberg formula bounds \psi for thin-tailed severities. It says that

.. math::

    \psi(u, r) \le e^{-ku}

where k>0 is a constant called the adjustment coefficient solving

.. math::

    e^{kP} = \mathsf{E}[e^{kA(1)}]

where P=(1+r)\lambda\mathsf{E}[X] is the premium. Given a top-down stability requirement, we can work backwards from the Cramer-Lundberg formula to determine a premium.
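As a sketch of working backwards, the adjustment coefficient can be found numerically by solving kP = \lambda(\mathsf{M}_X(k)-1), the logarithm of the defining equation for a compound Poisson. The parameters below are illustrative (exponential severity with mean 1, \lambda=1, r=0.2), chosen because the closed form k = r/((1+r)\mathsf{E}[X]) is then available as a check.

```python
import numpy as np
from scipy.optimize import brentq

# illustrative parameters: exponential severity mean 1, lambda = 1, r = 0.2
lam, mean_x, r = 1.0, 1.0, 0.2
P = (1 + r) * lam * mean_x               # premium per year

def adjustment_eqn(k):
    # kP = log E[exp(k A(1))] = lam (M_X(k) - 1) for a compound Poisson;
    # exponential severity has M_X(k) = 1 / (1 - mean_x k) for k < 1/mean_x
    return lam * (1 / (1 - mean_x * k) - 1) - k * P

# k = 0 is a trivial root, so bracket strictly away from it
k = brentq(adjustment_eqn, 1e-6, 0.99 / mean_x)
print(k, r / ((1 + r) * mean_x))         # numeric root vs closed form, both 1/6
```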

Exercise. Show that if k=-\log(p)/u and premium

.. math::

    P = \frac{1}{k}\log\mathsf{E}[e^{kA(1)}],

then the Cramer-Lundberg formula ensures the probability of eventual ruin is \le p. The properties of P motivate the exponential premium. In turn, the approximation P\approx \mathsf{E}[A(1)] + k\mathsf{Var}(A(1))/2 motivates the variance principle.
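To illustrate, for a compound Poisson A(1) we have \log\mathsf{E}[e^{kA(1)}] = \lambda(\mathsf{M}_X(k)-1), so the exponential premium is explicit. The numbers below are hypothetical (\lambda=10, gamma severity with shape 2 and scale 1, k=0.05) and simply compare it to the variance-principle approximation.

```python
# hypothetical numbers: Poisson lambda = 10, gamma severity shape 2, scale 1;
# any k with scale * k < 1 keeps the gamma MGF finite
lam, shape, scale, k = 10.0, 2.0, 1.0, 0.05

mgf_x = (1 - scale * k) ** -shape             # gamma MGF at k
P_exp = lam * (mgf_x - 1) / k                 # exponential premium (1/k) log E[e^{k A(1)}]

mean_a = lam * shape * scale                  # E[A(1)]
var_a = lam * shape * (shape + 1) * scale**2  # Var(A(1)) = lam E[X^2]
P_var = mean_a + k * var_a / 2                # variance principle approximation

print(P_exp, P_var)                           # ~21.607 vs 21.5
```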

Both the Cramer-Lundberg and Pollaczeck-Khinchine formulas assume independent and identically distributed severity and Poisson frequency. These can be reasonable assumptions for the loss process of a small portfolio. The case of a mixed Poisson can be decomposed as a mixture of pure Poisson processes.

FFT Computation
----------------

The distribution of Y can be computed using Fast Fourier Transforms in the same way as any aggregate distribution. Some care is needed when the margin r is very small, because the geometric claim count (mean 1/r) is then very large. :class:`Aggregate` includes the method :meth:`cramer_lundberg` to determine the integrated distribution F_I and convolve it with a geometric frequency.
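The FFT recipe is short enough to sketch from scratch: discretize F_I, take its FFT, apply the geometric probability generating function G(z) = r/(1+r-z) bucketwise, and invert. The example below assumes an exponential severity with mean 1 and r = 0.2, for which \psi has a closed form to check against; the bucket size and grid size are illustrative.

```python
import numpy as np

# assumed example: exponential severity mean 1 (so F_I is also exponential
# with mean 1) and margin r = 0.2; bs buckets of size bs, n of them
r, mean_x = 0.2, 1.0
bs, n = 1 / 64, 1 << 14

# discretize F_I using the rounding method (cdf at bucket midpoints)
xs = np.arange(n) * bs
cdf_i = 1 - np.exp(-(xs + bs / 2) / mean_x)
p_i = np.diff(cdf_i, prepend=0.0)

# convolve with geometric frequency via the pgf G(z) = r / (1 + r - z)
p, q = r / (1 + r), 1 / (1 + r)
phi = np.fft.fft(p_i)
agg = np.real(np.fft.ifft(p / (1 - q * phi)))   # density of Y on the grid

# psi(u) = Pr(Y > u); check against the exponential closed form at u = 5
psi = 1 - np.cumsum(agg)
u = 5.0
psi_fft = psi[int(round(u / bs))]
psi_exact = np.exp(-p * u / mean_x) / (1 + r)
print(psi_fft, psi_exact)                       # both ~ 0.362
```

The accuracy is governed by the bucket size; halving bs roughly halves the discretization error at u.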

Using The Pollaczeck-Khinchine Formula I
------------------------------------------

This section reproduces two examples from the risk vignette for the actuar package.

The first is based on a mean 10 Poisson compound with shape 2 gamma severity. The vignette uses matched-moments discretization, so our numbers do not match exactly, but they are very close. We build the compound, compute some quantiles and TVaRs, and display the density (compare pp. 8-10). Then, using a premium loading of 20% with the expected value premium (p. 12), we reproduce the probabilities in Figure 5. Our computation is exact, whereas the vignette uses an approximation.

.. ipython:: python
    :okwarning:

    from aggregate import build, qd
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    a = build('agg Actuar 10 claims sev gamma 2 poisson'
            , bs=0.5, log2=8)
    qd(a)
    ps = [0.25, .5, .75, .9, .95, .975, .99, .995, .999, 1-1e-14]
    qd(pd.DataFrame({'p': ps, 'q': a.q(ps)}), float_format=lambda x: f'{x:10.3f}' if x < 1 else f'{x:10.1f}')
    fig, axs = plt.subplots(1, 3, figsize=(3 * 3.5, 2.45), constrained_layout=True)
    ax, ax0, ax1 = axs.flat
    a.density_df.F.plot(ax=ax, xlim=[0, 60], title='Po-Gamma distribution function');
    qd(a.density_df.p_total.head(20).reset_index(drop=False), float_format=lambda x: f'   {x:<12.5g}' if 0 < x < .1 else f'{x:10.1f}')
    qd(a.tvar([.9, .95, .99]))
    ruins, find_us, mean, dfi  = a.cramer_lundberg(.2)
    ax0.plot(np.cumsum(dfi), label='integrated')
    ax0.plot(a.density_df.p_sev.cumsum(), label='severity')
    ax0.set(xlim=[0, 40], title='Severity and integrated severity distributions')
    ax0.legend(loc='lower right')
    @savefig pz-actuar.png scale=20
    ruins.plot(ax=ax1, xlim=[0, 50],
              title='Probability of eventual ruin against starting surplus',
              ylabel='Probability', xlabel='Starting surplus');


The second uses a Pareto severity, where the integrated distribution can be computed exactly. The following code produces the exact values for the probability of eventual ruin against starting surplus. All the values fall within the lower and upper bounds shown on p. 19.

.. ipython:: python
    :okwarning:

    a = build('agg Actuar2 1 claim sev 4 * pareto 5 - 4 fixed')
    qd(a)
    ruins, find_us, mean, dfi  = a.cramer_lundberg(.2)
    ruins.name = 'Prob'
    bit = ruins.loc[np.arange(0, 51, 5)].to_frame()
    bit.index = bit.index.astype(int)
    bit.index.name = 'Initial surplus'
    qd(bit, ff=lambda x: f'{x:.5f}')

The actual work, to get the answer as opposed to formatting the result, is only two lines of code in aggregate (the first and third) vs. eight in actuar.

Using The Pollaczeck-Khinchine Formula II
-------------------------------------------

This section illustrates the theory using a lognormal severity with a mean of 50,000 and a CV of 10 (\sigma=2.15) corresponding to a moderately risky liability line. It compares starting surplus levels for different eventual ruin probabilities assuming a margin r=0.1 with a 1 million and 10 million occurrence limit. It also illustrates simulations of the surplus process in each case with starting surplus calibrated to a 0.05 probability of eventual ruin.

Set up the portfolio.

.. ipython:: python
    :okwarning:

    port = build('port PZTest '
                     'agg Limit1  '
                        '0.1 claims '
                        '1000000 xs 0 '
                        'sev lognorm 50000 cv 10 '
                        'poisson '
                     'agg Limit10 '
                        '0.1 claims '
                        ' 10000000 xs 0 '
                        'sev lognorm 50000 cv 10 '
                        'poisson'
                , bs=500, log2=18, padding=1)
    qd(port)


The left plots show the Pollaczeck-Khinchine formula starting surplus as a function of the eventual ruin probability with margin 0.1 on linear (solid) and log (dashed) scales. The Cramer-Lundberg formula says that the probability of eventual ruin is approximately exponential, which is a straight line on a log scale.

The right column shows simulated surplus paths, with \times marking ruin scenarios. Capital is calibrated to a 0.05 eventual ruin probability. Time and volume are symmetric in the model, so volume can be regarded as time for a fixed-size portfolio, as a varying-size portfolio over a fixed time, or a combination. The scale indicates cumulative exposure-years.

.. ipython:: python
    :okwarning:

    from aggregate.extensions.pir_figures import fig_9_1
    @savefig pz.png scale=20
    fig_9_1(port)

The right-hand plots are computed here with only 100 samples, vs. the 500 used in the book, so the approximation is not as accurate.

These simulations show that the probability of eventual ruin is constrained by the buildup of surplus in most scenarios. Defaults occur early in the simulated history. This model could be appropriate for a mutual company—indeed some mutual companies have accumulated substantial amounts of capital. For a stock company, a more realistic approach adds dividends to manage capital.

Market Scale and Viability
----------------------------

Given severity X and ratio r of margin to expected loss, the Pollaczeck-Khinchine function \psi is monotone and hence invertible, allowing us to find u_{X,r}(p)=\psi_{X,r}^{-1}(p), the starting capital necessary to achieve a probability p of eventual ruin.

The amount of margin equals r\lambda\mathsf{E}[X], where \lambda is the annual expected claim count. Since the expected margin must pay the cost of capital, we get a market viability constraint

.. math::

    r\lambda\mathsf{E}[X] \ge \iota\, u_{X,r}(p)

where \iota is the cost of capital. Each element is influenced by different factors:

- the hazard and contract design determine X,
- the insurance product market determines r,
- the capital markets determine \iota, and
- a regulator or rating agency determines (or strongly influences) p.

There are two ways to apply this formula.

First, consider a diversifying unit, such as motor liability, where insurers grow by adding new, independent insureds with the same severity. Here, the formula gives a minimum size of market constraint

.. math::

    \lambda \ge \iota\, \frac{u_{X,r}(p)}{r\mathsf{E}[X]}.

The lower bound for \lambda is a function of the other four variables. It is:

- Increasing and linear in \iota: the market must be larger given more expensive capital.
- Decreasing in r: the market can be smaller with a higher margin.
- Decreasing in p: the market must be larger to support a stricter capital standard.
- Independent of expected severity (because \psi is homogeneous in \mathsf{E}[X]) but dependent on the shape of severity (which influences \psi).
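With an exponential severity the Pollaczeck-Khinchine formula inverts in closed form, \psi(u)=e^{-ru/((1+r)\mathsf{E}[X])}/(1+r), which makes the minimum-size bound easy to evaluate. The numbers below are purely hypothetical and simply illustrate the linearity in \iota: doubling the cost of capital doubles the minimum viable market size.

```python
import numpy as np

# hypothetical inputs, exponential severity with mean 1, so that
# psi(u) = exp(-r u / ((1+r) mean_x)) / (1+r) inverts in closed form
def u_required(p, r, mean_x=1.0):
    """Starting surplus giving eventual ruin probability p (needs p (1+r) < 1)."""
    return -(1 + r) / r * mean_x * np.log(p * (1 + r))

def min_lambda(p, r, iota, mean_x=1.0):
    """Smallest expected claim count satisfying r lam E[X] >= iota u."""
    return iota * u_required(p, r, mean_x) / (r * mean_x)

# linearity in iota: doubling the cost of capital doubles min lambda
for iota in (0.05, 0.10):
    print(f'iota={iota:.2f}: min lambda = {min_lambda(0.01, 0.1, iota):.1f}')
```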

The next table shows the natural scale using the lognormal example. Size, measured by expected annual claim count, is shown for a range of margins, limits, and stability constraints. If claim frequency is 5%, the table shows that a market with 1M limits is reasonable for all p and r. For example, the strictest stability constraint p=0.01 and the lowest margin rate r=0.025 need 39,814 claims, or about 800,000 policies, to be viable. With a 10M limit and the same r, the market needs more than 100,000 claims, or about 2.5 million policies, which is less achievable. However, if the margin rate increases to r=0.1, the required market size falls to 9,088 claims, or about 180,000 policies.

.. ipython:: python
    :okwarning:

    from aggregate.extensions.pir_figures import natural_scale
    df = natural_scale(port)
    qd(df, float_format=lambda x: f'{x:,.0f}')

Note: these numbers differ slightly from the book because of the updated parameters used for ``port``.

Second, consider a non-diversifying unit, writing catastrophe-exposed business, where insurers grow by covering a greater proportion of each event. Severity becomes market share times an industry severity X, and the number of events is fixed. US hurricane reinsurance is an example. In this case, viability is independent of market share and is controlled by whether the inequality has a solution acceptable to both the product market and the capital market. Viability is harder to achieve:

- with smaller \lambda: rare events are more difficult to insure,
- with lower p: higher quality insurance is more expensive,
- with higher \iota: more costly capital, and
- with lower r, because u increases quickly.