The definition of systemic risk is far from settled.
The general idea
No single definition of systemic risk will cover all situations. Usually what people mean is that there is a mismatch between actual and imagined risk and that the effects of realising this mismatch are contagious. The effect is then realised at a system level.
In banking, several mechanisms have been proposed, but in essence they all have the same origin – assets don’t have the value they were supposed to have. Banking has the innate feature that asset values are leveraged, so the effect of mispricing is automatically amplified: an example of positive feedback, especially if further leveraging is used to cover the discrepancy. Errors are sometimes corrected by selling into a falling market, another source of positive feedback.
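The amplifying effect of leverage can be made concrete with a small sketch. The figures below are purely illustrative, not drawn from any real balance sheet:

```python
# Illustrative sketch: leverage turns a small asset mispricing into a large
# equity loss. All numbers are invented for the purpose of the example.

def equity_loss_pct(assets, equity, mispricing_pct):
    """Percentage of equity wiped out when assets turn out to be
    overvalued by mispricing_pct."""
    loss = assets * mispricing_pct / 100.0
    return 100.0 * loss / equity

# An institution holding 100 of assets against 5 of equity (20x leverage):
# a 2% asset mispricing destroys 40% of its equity.
print(equity_loss_pct(assets=100.0, equity=5.0, mispricing_pct=2.0))  # → 40.0
```

The same arithmetic is why further leveraging to cover a discrepancy is positive feedback: it raises the multiplier on any remaining mispricing.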
So, what we are looking for in insurance is:
a mechanism of mispricing, one which affects multiple institutions and in which the mispricing and/or its effects are amplified via a positive feedback mechanism.
Sources of mispricing?
Widely used risk models (e.g. Cat models) which form the basis of pricing and reinsurance are an obvious cause of concern in insurance, but do they have a positive feedback effect?
Emerging liability risks affect the whole market, but do they have a positive feedback effect?
Gradual drift in claims settlement pricing models may be driven by an aversion to frictional costs. The drift will always produce inflation in claims, and therefore in premium, until someone says “enough is enough” – whiplash claims being an example.
Part of the definition of systemic risk must include a ‘so what’. A very small systemic error would be one which had no effect on market behaviour or performance; a catastrophic error would shut down the market. It is clear from this that each party should define systemic risk in its own terms and then seek it out. Share price, deviation from business plan by x%, or a taxpayer bail-out would interest different observers.
An accurate definition and a grasp of mechanism will lead to options for intervention. Obvious choices thus far are:
- to close the gap between real and imagined risk,
- to dampen the ‘contagion’ mechanism,
- to be more resilient to shocks.
Mind the gap
Competition between model providers ought to ensure that inaccurate models are soon eliminated, but this depends on someone noticing that a model is inaccurate and caring enough to switch to another. The selection advantage has to outweigh the cost of change.
The race to the lowest common denominator is also a powerful force in the market. If change would lead to a higher price, market share would suffer.
Several disciplines such as chemistry, materials science and ecology already have mathematical methods which model discontinuities based on positive feedback mechanisms. In chemistry these discontinuities are called explosions, in materials science an example would be drop formation, in ecology, extinction. Economists are seeking to learn from these coupled rate equation models.
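The threshold behaviour these rate-equation models produce can be shown with a minimal sketch. The equation below is a generic autocatalytic rate law chosen for illustration, not a model from any of the cited disciplines; all parameters are invented:

```python
def simulate(x0, k=1.0, d=1.0, steps=5000, dt=0.001):
    """Euler integration of the autocatalytic rate equation
    dx/dt = k*x**2 - d*x: production is self-reinforcing (positive
    feedback) while removal is merely proportional. Below the threshold
    x = d/k the decay term wins and x dies away; above it, x blows up
    in finite time - the 'explosion' or discontinuity."""
    x = x0
    for _ in range(steps):
        x += dt * (k * x * x - d * x)
        if x > 1e6:              # runaway detected: report divergence
            return float('inf')
    return x

# Two starting points either side of the threshold give qualitatively
# different outcomes - a discontinuity in the response, not a smooth change.
print(simulate(x0=0.9))   # decays towards zero
print(simulate(x0=1.1))   # runs away
```

The insurance analogy would be mispricing whose correction mechanism (e.g. further leveraging, or selling into a falling market) feeds the mispricing itself: below some threshold the system self-corrects, above it the error compounds.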
Behavioural psychologists also wish to have their say – in effect they discuss the behavioural component of the strength of the coupling mechanism.
Homogeneity of risk management strategy, tools and behaviours increases the risk of synchronous realisation of mispricing.
The ‘Lead Insurer effect’ could be the cause of homogeneity.
Ecologists have discovered that strongly disassortative networks are the best way to spread contagion. A regulator might come to rely on this as a way of preventing contagion: by focussing on market leaders, it could spread the contagion of good behaviour instead.
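A toy simulation illustrates why seeding the hub matters. The hub-and-spoke graph below is a deliberately simplified stand-in for the ‘Lead Insurer’ structure; real market networks are of course richer:

```python
from collections import deque

def rounds_to_full_spread(adjacency, seed):
    """Breadth-first search depth needed for a contagion starting at
    `seed` to reach every node - a crude proxy for speed of spread."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    deepest = 0
    for node, depth in iter(frontier.popleft, None) if False else []:
        pass  # (placeholder removed below - plain while loop used instead)

# Plain BFS loop version:
def rounds_to_full_spread(adjacency, seed):
    seen = {seed}
    frontier = deque([(seed, 0)])
    deepest = 0
    while frontier:
        node, depth = frontier.popleft()
        deepest = max(deepest, depth)
        for neighbour in adjacency[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return deepest

# Hypothetical market: node 0 is the lead insurer, 1-4 are followers.
market = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(rounds_to_full_spread(market, seed=0))  # 1: seeding the hub reaches all
print(rounds_to_full_spread(market, seed=3))  # 2: a follower must go via the hub
```

The same asymmetry cuts both ways: a mispricing adopted by the lead insurer propagates fastest, and so does a regulator’s ‘contagion of good behaviour’.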
Increasing the statistical variance of business plans, share price projections, reinsurance requirements, capitalisation etc. will increase resilience, yet all such changes come at a price. Uncertainty costs money.
Yet, resilience is a key component of any business model. The Board should set out what degree of variance is acceptable and set out to ensure the requirement is met. If models are the main cause of uncertainty then this is the area to focus upon.
One way to encourage resilience planning would be to add its effects onto the asset list as opposed to the liability list. The effective worth of an investment in resilience would need to be known. But a word of warning, this sounds very similar to the recent cause of the problem in the banking sector. Any positive feedback effects in the calculation of effective worth would seem to risk creating a new bubble.
As models become more widely used, the expertise which led to their development is eventually replaced in the market by model users whose understanding is generated from observing the models and not from observing reality. An error in the model would not be apparent until too late.
Knowledge and judgment take years to develop but, if available, can be imported from the employment market in a matter of weeks. If available, the judgment-based resilience time-scale is therefore weeks; if not, it is years.
One thing to consider is how long it would take before a model-related systemic risk is likely to manifest.
What should be apparent by now is that the effective size of the modelling risk is a key question. Only once this is known would anyone pursue it through investment in intervention.
A pragmatic approach
Models such as Cat models and capital adequacy models have been in use for several years. There is a long enough history of change in modelled exposure estimates and associated business response to detect correlations between them. For example, if a 5% increase in modelled exposure leads to a 5% change in capitalisation, then the sensitivity to model error would appear to be linear, i.e. there is no evidence of in-house positive feedback. The size of the problem may be contained to the size of the error itself.
Since there is reputed to be an effective monopoly in some forms of modelling, the correlation can be tested at market level too.
But how big is the model error? If all models follow the same logic and use the same data then their predictions etc. should be the same. Variation would reflect the range associated with calculation error. This would not tell you that the logic or data was correct. To test that, you need models based on different logic and different data. In the end you will use your judgment to decide which logic and which data you prefer, but having run several variants you will have a better idea of the true spread in best estimates. You would NOT simply adopt the precision suggested by the preferred logic and data combination.
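The distinction between calculation error and logic/data error can be made concrete. The estimates below are invented; the point is the comparison of spreads, not the numbers:

```python
import statistics

# Illustrative sketch: within-variant spread (repeat runs of one logic/data
# combination) measures calculation noise; the spread of best estimates
# ACROSS variants built on different logic and data is a better guide to the
# true model uncertainty. All figures are invented.

estimates = {
    "logic_A_data_1": [102, 104, 103],   # e.g. modelled 1-in-200 loss, repeat runs
    "logic_B_data_2": [131, 129, 130],
    "logic_C_data_3": [88, 90, 89],
}

within = {name: statistics.stdev(runs) for name, runs in estimates.items()}
means = [statistics.mean(runs) for runs in estimates.values()]
across = max(means) - min(means)

print(within)   # ~1 for each variant: calculation noise only
print(across)   # 41: the spread in best estimates is far larger
```

Adopting the precision of any single variant (±1 here) would dramatically understate the uncertainty that the cross-variant spread (41 here) reveals.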
It has been observed that modellers who produce unpopular answers tend to refine their models until the answers are acceptable. Convergence is therefore a strong temptation, leading to market distortion. Only by bringing the modelling in-house can such commercial realities be explicitly managed.
Liability emerging risks
Where cover includes emerging risks, complete reliance on experience-rating would be an example of an erroneous pricing model.
Mind the gap
Emerging risk identification and evaluation tools are needed if assumed exposure is to be close to reality.
Diluting the effect of an emerging risk could be achieved by ensuring a high proportion of predictable loss. But then you would need to know how big the emerging risk is in order to be sure of achieving this.
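The dilution arithmetic, and its circularity, can be sketched in a few lines. The target share and loss figures are illustrative only:

```python
# Illustrative sketch: how much predictable expected loss is needed to cap an
# emerging risk at a target share of the book. Note the circularity - the
# calculation requires an estimate of the emerging risk's size, which is
# exactly the quantity in doubt. Figures are invented.

def required_predictable(emerging_loss, target_share):
    """Predictable expected loss needed so the emerging risk makes up at
    most target_share of total expected loss."""
    return emerging_loss * (1.0 - target_share) / target_share

# To hold a 5 (e.g. £m) emerging-risk expectation to 10% of the book's
# expected loss, roughly 45 of predictable expected loss is needed alongside it.
print(required_predictable(emerging_loss=5.0, target_share=0.10))
```

If the emerging-risk estimate is itself systemically wrong, every insurer applying this dilution rule is under-diluted by the same margin.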
If all liability insurers are in the same position then there is systemic underpricing and a risk of systemic problems.