There’s a natural asymmetry in defined benefit (DB) pension schemes, compared with defined contribution (DC) schemes: upside is capped because trustees don’t need to pay more than 100% of promised benefits. The implications of this cap for your investment strategy are not necessarily as straightforward as you may think.
Conventional wisdom says that DB schemes should gradually de-risk out of growth assets as funding levels improve, because the remaining upside becomes smaller relative to the downside risk. An alternative view, however, is that you should only de-risk on reaching 100% funding. In this article, we explore the logic behind this idea and its implications for trustees.
Imagine your scheme is 95% funded on a buyout basis. That’s a good position to be in, and you might be considering locking in gains if you haven’t already. But if you look just a short period into the future – say, one month – the chance of reaching 100% funding in that period is actually very small. And given an even higher funding level, such as 99%, we can make our monitoring period even shorter – a day – and again the chance of reaching full funding over that period is very small.
No matter how close you get to 100% funding, provided you haven’t reached it yet, you can find a time horizon short enough that the chance of the cap applying over that period is negligible. Essentially, with continuous monitoring and a trigger in place, you can ensure you’re only ever running growth risk that’s uncapped and rewarded. So the fact that trustees don’t need (or want) to pay more than 100% of the benefits promised is not, in itself, a reason to run less risk at a 95% funding level than at a lower funding level such as 50%.
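To make this concrete, here is a minimal sketch of the short-horizon argument. It assumes the funding ratio follows geometric Brownian motion; the drift and volatility figures are purely illustrative assumptions of ours, not parameters from the analysis in this article.

```python
from math import erf, log, sqrt

def prob_reach_full_funding(f0, horizon_years, mu=0.02, sigma=0.10):
    """P(funding ratio >= 100%) at the end of a short horizon, assuming the
    ratio follows geometric Brownian motion with (hypothetical) drift mu
    and volatility sigma per annum."""
    z = (log(f0) + (mu - 0.5 * sigma**2) * horizon_years) / (sigma * sqrt(horizon_years))
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF of z

# 95% funded, looking one month ahead
p_month = prob_reach_full_funding(0.95, 1 / 12)
# 99% funded, looking one day ahead
p_day = prob_reach_full_funding(0.99, 1 / 252)
print(f"95% funded, 1 month ahead: {p_month:.1%}")
print(f"99% funded, 1 day ahead:   {p_day:.1%}")
```

With these illustrative parameters, both probabilities come out at only a few percent: however close you are to full funding, shrinking the horizon shrinks the chance that the cap bites over that horizon.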
A simplified model
To test this idea, we set up a highly simplified model in which there's an obligation to pay a benefit in 20 years' time. We assume a Sharpe ratio of 0.4 from a mix of a diversified growth strategy and liability-driven investment (LDI), and that you can't expect to generate more than 5% per annum over the risk-free rate on scheme assets*. The chart below shows how the ideal return target varies with the initial funding level. To assess 'ideal', we sought to maximise a long-term 'ultimate success' metric we call the expected proportion of benefits met (EPBM), calculated from thousands of simulations of the future.
So the more frequently you monitor whether you can 'buy out', the higher the funding level at which de-risking starts (i.e. at which you begin to target a lower return). In the limit, you would only de-risk on reaching full buyout funding, confirming the theory.
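The EPBM metric itself can be sketched in a few lines. This is a simplified Monte Carlo version with no interim de-risking or monitoring: it assumes liabilities grow at the risk-free rate (so we work in excess-return terms), asset outcomes are lognormal, and volatility scales with the return target via a constant Sharpe ratio of 0.4 as in the model above. The function name and parameter choices are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def epbm(initial_funding, excess_return, years=20, sharpe=0.4, n_sims=50_000):
    """Expected proportion of benefits met: E[min(assets / liabilities, 1)].
    Works in excess-return terms (liabilities assumed to grow at the
    risk-free rate); volatility = excess_return / sharpe."""
    vol = excess_return / sharpe
    # lognormal growth of the asset pot relative to the liability
    growth = np.exp(
        (excess_return - 0.5 * vol**2) * years
        + vol * np.sqrt(years) * rng.standard_normal(n_sims)
    )
    funding_at_maturity = initial_funding * growth
    # the cap: trustees never pay more than 100% of promised benefits
    return np.minimum(funding_at_maturity, 1.0).mean()

# compare return targets (over risk-free) from a 95% start, no de-risking
for target in (0.00, 0.01, 0.03, 0.05):
    print(f"target {target:.0%} over risk-free -> EPBM {epbm(0.95, target):.3f}")
```

Even this static version shows the flavour of the result: starting below full funding, a zero return target locks in a shortfall, so some growth risk improves EPBM despite the cap on the upside.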
Hang on, this doesn’t feel right!
This is, admittedly, a highly counterintuitive result and not what trustees do! To understand why schemes de-risk in practice, we need to understand the underlying assumptions in our simple model (particularly any unrealistic ones) and look at some of the behavioural factors involved.
Starting with the assumptions, an obvious point is that although monitoring and reacting quickly to the buyout position improves outcomes, it's challenging to do in practice. It can't be done in real time: you can't be sure whether you've reached 100% buyout funding, and you can't instantly transact with an insurer. The greater the uncertainty and the greater the lag, the higher the chance that you overshoot full funding and miss your chance to lock it in.
The above analysis also assumes that returns can be generated equally efficiently regardless of the target. This isn't necessarily a terrible assumption in general, although many believe that lower returns over the risk-free rate can be generated more efficiently than higher ones. However, DB schemes have a trick up their sleeves: cashflow-matching credit can be highly attractive in a liability-driven context. Allowing a scheme to allocate to this, rather than being stuck with a 'barbell' strategy of only growth assets and LDI, doesn't break the argument above. But for a given review frequency (that isn't unrealistically high), it does create a stronger drive to de-risk than under a barbell strategy. The chart below illustrates this with monthly monitoring of the buyout trigger.
Longevity and other demographic risks, ignored in the analysis above, also make life more complicated. Other complexities include the fact that buyout is a more prudent basis than self-sufficiency on, say, a gilts basis (at which level it might be fine to de-risk), and the influence of covenant risk.
A key behavioural driver for de-risking is a fear of regret. You might feel silly if you were at 99% funding, hadn't de-risked, and markets tanked, dropping the funding level to 89%. You could (rightly) argue that this is no worse than falling from 89% to 79%, but the sense that you'd snatched defeat from the jaws of victory would be hard to escape. A sort of completion bias is at play.
A related reason is that short-term risk also matters in practice. At very high funding levels, the range of ultimate outcomes can be virtually as good regardless of strategy. If so, you might as well pick the lowest-volatility one – only a purist would focus solely on measures of ultimate success.
In this blog, we’ve focused on the influence of funding level in isolation. However, glidepath construction more broadly is a really fascinating (and sometimes mind-boggling!) topic with many rational and behavioural factors at play. For example, for both DB and DC schemes it is normal to de-risk with shrinking duration (i.e. not just increasing funding level) due to the influence of loss aversion.
It’s all about the monitoring. With loss aversion, frequent monitoring can lead to reckless prudence. But when it comes to monitoring the funding level for a potential buyout, the more often the better!
Note: many thanks go to Nic Barnes, Co-Chief Investment Officer at the RBS Group Pension Fund, for useful discussions on this topic.
*In practice it would likely be lower than this, for example due to a need to hold collateral for LDI. This is just a crude example to illustrate a point.