Pricing Mortality Swaps using R - Part 3

In the final article of this technical series we generate outputs to estimate the value of mortality swaps and discuss the concept of pricing tail risk.

In the previous two articles we spent a lot of time and effort to set the stage for pricing mortality swaps. Now we can run simulations and use the outputs to calculate a value for the swap based on the inputs used.

The choice of inputs is hugely important to the swap:

  • our choice of discount rate
  • our choice of life table for the APV calculations, and
  • our choice of life table for the mortality simulation

...will have a massive effect on the final numbers produced.

Note that we can also use two separate life tables in the calculation: one to determine how much of the annuity is guaranteed, and a different one for the mortality in each simulation. For simplicity we have been implicitly assuming these to be the same, but that does not need to be the case.

Once we have simulation results, we then need to determine how to assemble these into a final number - the value of the swap. This is itself an interesting problem, and proved to be much more profound than I initially anticipated, but we will get to that in due course.

For now, we focus on using the pieces we have to run our simulations.

Brief aside:

Before we start, a note: I do not intend to go into the details of how the code in mcmortswap works; the remainder of this article only discusses the output and what to do with it.

If you are interested, please get in touch - the code is up in a public BitBucket repository and is something I intend to work on from time to time, so I am happy to field any comments and requests you might have. The code itself is not the cleanest, but there is not a huge amount of it, so it should be quite easy to read and understand.


Simulating the Swap

As discussed in Part 2, our intent is to

  1. simulate, for each year, the mortality of each annuitant, based on the age and MUR Class assigned (a toy sketch of this step follows the list), and then
  2. simulate the cashflows for each annuity by year.
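
To make step 1 concrete, here is a toy sketch of how the survival of a single annuitant might be simulated year by year. This is deliberately not the mcmortswap implementation, and the qx values are made up purely for illustration.

## Toy example: simulate one annuitant's survival over a handful of years
set.seed(42);

qx       <- c(0.010, 0.012, 0.015, 0.018, 0.022);   # made-up one-year death probabilities
n.years  <- length(qx);

alive    <- logical(n.years);
is.alive <- TRUE;

for (year in seq_len(n.years)) {
    if (is.alive) {
        ## Bernoulli draw: the annuitant dies in this year with probability qx[year]
        is.alive <- (runif(1) > qx[year]);
    }

    alive[year] <- is.alive;
}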

We are treating all these events as independent once we have set our probabilities, and so we have a certain amount of leeway in how we do this, as well as how we aggregate across annuities, years and simulations.

In other words, what is it we consider to be a single simulation?

  • is each simulation a realisation of each annuity across the lifetime of the swap, or
  • is it instead the aggregated cashflows of all the annuities in the portfolio for a single year?

As you might imagine, this choice is somewhat arbitrary, and I went with the second option: a single iteration of the simulation is the aggregated cashflow of the portfolio of annuities across all the years. I did, however, add some code to enable the output of each individual datum, should it be required.1

We can now run the code to calculate the value of the swap:

library(ggplot2);   # qplot()
library(scales);    # dollar formatting for axis labels

n.sim <- 100000;

## Run the simulation: one value per iteration, aggregated across the portfolio
mortswap.value.sim <- calculate.mortality.swap.price(mortport.dt,
    lifetable           = lifetable.dt,
    hedge.apv.cashflows = TRUE,
    interest.rate       = 0.05,
    years.in.force      = 20,
    n.sim               = n.sim,
    return.all.data     = FALSE,
    verbose             = TRUE);

## Plot the sorted simulation values against their empirical quantiles
x_plot <- (1:n.sim) / n.sim;

quantile_plot <- qplot(x_plot, sort(mortswap.value.sim), geom = 'line',
        xlab = 'Quantile', ylab = 'Sim Value') +
    scale_y_continuous(labels = dollar);

Most of the arguments are pretty straightforward. If return.all.data is TRUE, the return value is a list of the simulation data for each annuity in the portfolio.

The quantile plot shows every outcome of the simulation, sorted in increasing order and plotted against its empirical quantile. The density plot clearly shows the long tail.
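
The density plot is not produced by the code above; a minimal sketch of how one might be generated (the exact version used for the article may differ in its styling) is:

## Kernel density estimate of the simulated swap values
density_plot <- qplot(mortswap.value.sim, geom = 'density',
        xlab = 'Sim Value', ylab = 'Density') +
    scale_x_continuous(labels = dollar);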

Working with the Output

Recall that for this swap, only the mortality-adjusted amount is guaranteed. The value crosses the zero line at a quantile of about 0.55, which means the chance of the swap incurring a loss to the seller is less than 50% - the swap seller receives the full amount of each annuity and only pays the buyer the mortality-adjusted amount, keeping the difference as additional profit on top of any premium charged.

The quantile plot is a close approximation to a straight line until about 0.85, at which point the tail risk of the guarantee really starts to manifest. The distribution of these values is clearly both highly skewed and heavy-tailed.2

My original use of Monte Carlo simulation was for option pricing, so my first instinct is to just take the mean of the output values and see if that makes sense as a price for the swap.

$> summary(mortswap.value.sim)
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 -2240000 -1000000  -141000     3730   845000  7720000

A quick look at various outputs and a little thought revealed my error. When we discount by the same life table used for estimating the curtate mortalities, the expected value of the swap is zero, since everything cancels out. The output above shows this: sampling error means the mean is not exactly zero, but it is close enough.

This is unrealistic - it is not credible for a counterparty to assume the tail risk for no initial premium, only making a profit from entering the swap if things go well. The swap is essentially an insurance contract, so we need to get some idea of the expected costs associated with selling that guarantee in order to price it.

One quick and dirty solution is to consider only the expected losses for the purposes of pricing. We limit the output of each simulation to a hard floor of 0, and recalculate the mean. This is similar to how option pricing works, so it is worth having a look at.
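
In R the floor is a one-liner; pmax() is one way to do it, though the exact code used may differ:

## Floor each simulated value at zero, so only losses to the seller contribute
mortswap.value.sim.pos <- pmax(mortswap.value.sim, 0);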

$> summary(mortswap.value.sim.pos)
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
       0       0       0  540000  845000 7720000

So, if we treat the swap like an option, one possible value for the swap is about 540,000 USD.

Looking at both the quantile and PDF plots though, this approach does not 'feel' right. With such a huge tail in the data, it seems to me that this problem is really another pricing-tail-risk problem in disguise, so that is the avenue we will pursue for now.

If nothing else, getting a better understanding of the assumed tail risk is likely very important from a risk management perspective.


Pricing Tail Risk

Pricing tail risk is the teenage sex of financial data modelling: a lot of people talk about it, you never seem to meet anyone who actually does it, but the evidence suggests it is happening, so the ones who are doing it must be keeping quiet about it.

Once I realised I had a tail risk problem, I remembered a Michael Lewis article from 2007 called In Nature's Casino, discussing hurricane insurance and other natural disaster insurance. The article was fascinating (as is most of Lewis's writing) and I had a vague memory that he broached the topic of pricing rare events in it.

Having dug the article out via Google, I found the probabilistic explanation given there flawed, but the underlying principle was simple: across multiple areas of endeavour, the market price of tail risk converges on about four times the expected loss.3

This heuristic appealed to me a lot, for the simple reason that I am biased towards simple but cautious models when the situation is complicated, and this rule of thumb meets both criteria, at least at first glance.

Tail risk is such an interesting and universal problem that I naively assumed there would be at least a modest body of work out there dealing with the issue, but no.

A few days of increasingly frustrated internet trawling revealed quite a bit of material on a topic called Extreme Value Theory, but that seemed mainly focussed on modelling the tail using the sparse data you have on that part of the distribution.

In our case, the simulation is cheap to run, so we can have a lot of data in the tail should we wish it. What we need is a way to turn the knowledge we do have of the tail into a single price we can charge to assume the risk.

I spoke to a few people in various businesses who might be able to shed some light, but pricing tail risk seems to still be something of a dark art, so I was once again left to make something up, and hope it appears reasonable.

This is the downside of working on an abstract problem: having no constraints is liberating, but it comes at the cost of having no external domain knowledge or advice to help guide choices when faced with seemingly equivalent alternatives.

Quantifying the Tail Risk

The "4x" heuristic discussed in the Lewis article is a start, but does not translate directly. Those articles discussed catastrophe insurance, where the upper limit of the losses was usually written into the contracts: if a hurricane flattens your house the policy pays 500,000 USD.

If the chance of a hurricane hitting your house in a year is 0.01% (i.e. 1 chance in 10,000), then the expected loss is 50 USD, and the premium charged is thus 200 USD.
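
As a trivial check of that arithmetic in R:

## The '4x expected loss' heuristic for the hurricane example
payout        <- 500000;
p.hurricane   <- 0.0001;                 # 0.01%, i.e. 1 chance in 10,000
expected.loss <- p.hurricane * payout;   # 50 USD
premium       <- 4 * expected.loss;      # 200 USD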

In the case of the swap, the metric is not as obvious, since we have a whole distribution of losses to choose from. It seems sensible to somehow use the tail values of the simulations to help answer this question.

I can imagine two different methods for using this tail data; there may well be more, and they may or may not work out to be similar.

First, we can just pick a quantile in the right tail of the distribution and see how big the loss is at that point. This ties in with concepts used in Solvency II-type metrics, such as "1-in-200-year" events, i.e. the 99.5% quantile value.

The other method, which I think I prefer from a conceptual point of view, is to take conditional means on the tail (essentially a conditional tail expectation), asking questions like "what is our average loss over the worst 5% of situations?"

Both calculations are straightforward in R: quantiles are calculated with the quantile() function, so we can get the 99.5% quantile easily, and with a bit of indexing we can calculate tail means as well:

$> quantile(mortswap.value.sim, 0.995)
   99.5%
[1] 4243386

$> mean(mortswap.value.sim[mortswap.value.sim >= quantile(mortswap.value.sim, 0.95)])
[1] 3245675

So, the 1-in-200-years number is about 4.25 million USD, and the 0.95 quantile mean is 3.25 million USD: not an insignificant difference.

To get an idea of how the quantile mean scales with quantile value, we can do a plot of this against the quantile level:

The quantile mean increases as the level increases (as would be expected), but this does not mean that the price increases, since we are multiplying the growing mean value by a diminishing probability - the probability of a loss greater than or equal to the q-th quantile is (1 - q).
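
A sketch of how such a plot might be produced is below. I am assuming here that the price at level q is the Lewis-style multiple of the expected tail loss, i.e. 4 times (1 - q) times the mean loss beyond the q-th quantile; the exact construction used for the plot may differ.

## Tail mean and implied price as a function of the quantile level q
q_seq <- seq(0.80, 0.995, by = 0.005);

tail_mean <- sapply(q_seq, function(q) {
    mean(mortswap.value.sim[mortswap.value.sim >= quantile(mortswap.value.sim, q)]);
});

## Expected tail loss is (1 - q) * tail mean; apply the 4x heuristic to get a price
price_seq <- 4 * (1 - q_seq) * tail_mean;

price_plot <- qplot(q_seq, price_seq, geom = 'line',
        xlab = 'Quantile Level', ylab = 'Price') +
    scale_y_continuous(labels = dollar);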

(Plot: price of the swap as a function of the quantile level.)

So, as you base the price further up the tail, the premium actually reduces.

For this reason, I think the 0.95 quantile is a reasonable target, covering a lot of the worst-case scenarios in terms of losses while also ensuring an acceptable amount of initial premium is received up front.

At this point we are largely done, and as promised, this article does not provide any particular hard answers. What we have shown is more of the process of attacking a problem, and generating some useful insights and considerations.


Further Work

There is a lot more work to be done in this area, and I will quickly mention a few possible directions before I finish.

Mortality Modelling

All of the work so far has used the same life table for both the APV and mortality calculations, and this seems a bit simplistic. It is more likely the life table for the APV would be set as part of the negotiations, but any level of prudence would dictate a little scenario work using various mortality models.

Our framework provides ample opportunity for doing this: we can do all sorts of interesting mortality forecasting and plug the results into our calculation to see the effect. This also naturally leads to some interesting ways to combine scenarios: if we estimate a particular scenario to be 20% likely, we use it for 20% of our simulations, for example (a sketch of this is shown below).
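
As an illustration of that idea, here is a minimal sketch. lifetable.scenario.dt is a hypothetical alternative life table (it is not part of the code discussed so far), the other arguments mirror the earlier call, and - as in the rest of the article - the same life table drives both the APV and the simulated mortality within each scenario.

## Hypothetical scenario mixing: 80% of simulations use the base life table,
## 20% use an alternative mortality scenario
n.sim          <- 100000;
scenario.prob  <- 0.20;
n.sim.scenario <- round(n.sim * scenario.prob);
n.sim.base     <- n.sim - n.sim.scenario;

sim.base <- calculate.mortality.swap.price(mortport.dt,
    lifetable           = lifetable.dt,
    hedge.apv.cashflows = TRUE,
    interest.rate       = 0.05,
    years.in.force      = 20,
    n.sim               = n.sim.base,
    return.all.data     = FALSE,
    verbose             = FALSE);

sim.scenario <- calculate.mortality.swap.price(mortport.dt,
    lifetable           = lifetable.scenario.dt,
    hedge.apv.cashflows = TRUE,
    interest.rate       = 0.05,
    years.in.force      = 20,
    n.sim               = n.sim.scenario,
    return.all.data     = FALSE,
    verbose             = FALSE);

## Pool the two runs into a single mixed distribution of swap values
mortswap.value.mixed <- c(sim.base, sim.scenario);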

There are a number of packages that help do this type of mortality forecasting - I intend to first look at StMoMo, a package I learned about at R in Insurance 2015.

Insuring Annuities

Another very interesting concept is to use these swaps to insure against longevity risk. Rather than insuring the annuity payments, the swap insures the difference: a company is paying out annuities, but it only pays out the APV-discounted amount, and the swap guarantor makes up the difference. That way, the swap is insuring against longevity: it loses money if mortality turns out to be much lower than assumed in the APV calculations.

The benefit of being able to do this is that it allows the seller of swaps to offset their risk: if they sell both mortality and longevity swaps, they should greatly reduce their exposure to macro-level changes in mortality, and ideally make a profit either way.

That capability allows for some very powerful views into the product: you can try all sorts of different mortality assumptions and see how they affect your overall book, enabling things like stress testing and highly detailed scenario modelling.

Finer-grained Pricing

To keep things simple, we assumed a very coarse level of detail on the annuitants: only age and mortality class. More accuracy could be obtained with more detailed knowledge of the annuitants, allowing us to assess their likely mortality more properly. This may come up against regulatory hurdles and data protection issues, so it may prove difficult in practice, but it is definitely possible in theory.

We have also treated cashflows as being both yearly and regular. In reality, many annuities are much messier, and it would be good to treat payments on a monthly basis. This may not have a hugely material effect on the final pricing, but it is worth investigating.


Conclusions

All of the above work is highly speculative: when the problem was first posed to me it simply seemed interesting, and a great excuse to explore the pricing of life insurance.

I have no idea how practical any of the above approach is, and I gave almost no consideration to the legal and regulatory issues involved in trading such instruments, assuming them to be out of scope. My goal was simply to get an idea of the value of such an instrument, and I believe that has now been demonstrated reasonably well.

Reserving is another important part of this, but I know very little about it so I have avoided discussing it at all. From the little I do know though, I imagine a similar approach would yield reasonable reserving calculations.

If any of the above interests you, or you have spotted flaws, please do get in touch.



  1. While available, this facility does come with a hazard warning: the output data gets sizeable fast. For a 20-year swap with 200 annuities in the portfolio and 10,000 simulations, the object is about 300MB in size.

  2. To get the multiple-plots code working at the time of writing (Sep 2015), I had to install the GitHub version of ggplot2, as the behaviour of ggsave() had changed and did not work well with grid.arrange(). These issues may be resolved by the time you try this.

  3. I will not go into too much detail about the flaws I perceived: probability does not come easily to most people, and he was trying to explain a technical point. My assumption is that the result is sound, but the technical explanation got a bit garbled in the attempt to simplify it.

Mick Cooney

Mick is highly experienced in probabilistic programming, high performance computing and financial modelling for derivatives analysis and volatility trading.