Risk Modeling Lessons of the Financial Crisis
The credit crisis exposed technical flaws in financial institutions’ risk models and a widespread over-reliance on their results. We look at what insurers and reinsurers can learn from these hard lessons.
When one considers the role that risk modeling played in the roots of the financial crisis, there were very clearly a number of common technical flaws in the models. However, the way the models were used and managed played a far more important role than the technical problems themselves. While these issues centered around investment banking activities, the broad lessons learned are equally applicable to insurance and reinsurance.
What were the technical weaknesses of the models?
The credit crisis uncovered numerous technical weaknesses, culminating in a compounding accumulation of error and an under-estimation of both the probability and the severity of loss. Specific weaknesses predominantly related to the following:
- Unmodeled risks. There were two related risks that were generally not included:
  - Liquidity risk associated with severe, systemic decreases in asset values under extreme market conditions, including the systemic impact of other institutions being in distress and the callability of loans.
  - Tie-in risk, i.e. capital or assets tied in via subsidiary regulatory constraints, collateral requirements and collateral triggers.
- Poor calibration. The models were calibrated over relatively short, prosperous time periods, i.e. Alan Greenspan’s times of “exuberance”, ignoring extreme market drops or stress scenarios.
- Failure to consider changing risk dynamics and increased exposure to systemic risk. Over the past decade, banking risks have evolved and the financial system has become far more interconnected. Risks that were previously less severe and weakly correlated, such as subprime mortgages, credit default swaps and collateralized debt obligations, grew substantially and were packaged and extensively traded, generating a severe systemic risk that was neither recognized within the models nor duly considered by end-users.
- Lack of extreme tail event credibility, i.e. Nassim Nicholas Taleb’s “black swans” or Donald Rumsfeld’s “unknown unknowns”. The tail of most complex risk distributions generally has very limited credibility. Triggers of extreme tail events are typically unique and unforeseeable, and not necessarily related to any systemic process or to the rest of the risk distribution. Past data, a primary input for risk models, typically has no credibility in the extreme tail, so overall extreme tail risk is not reliably quantifiable. The further one looks into the tail, the less credible the estimate becomes.

How were the models mismanaged?
In hindsight, it is obvious that the risk models had problems. Not only were there numerous technical flaws, but the various flaws exacerbated each other. However, what’s surprising is not that the risk models had these limitations, but rather that so many of the world’s most sophisticated financial institutions chose to ignore them. Over the past ten years, huge risk positions were accumulated with extreme leverage and counterparty risk far beyond historical standards. Entire business strategies, including compensation systems, were based on optimizing portfolio model outputs. Management seems to have justified these risk accumulations by accepting model output as reality, without due consideration of the limitations. Ultimately, this led to disaster.
As an example of the misuse of risk models, some compensation systems were developed that relied on optimizing a modeled Value at Risk (VaR)¹ while ignoring significant increases in the unmeasured 1% extreme tail. Rather than basing performance measurement only on returns, such a system would divide a return measure by the modeled 99% VaR to adjust for the level of risk assumed by the manager. In principle, a risk-adjusted return measure seems appropriate. However, this led to a practice that became known as “stuffing risks into the tail”, where dealers would construct credit default swaps and options that would only trigger with a modeled probability of less than 1%. As a result, the 99% VaR would barely change, implying no increase in risk, while masking the significant accumulation of very low frequency, high severity risk.
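A stylized simulation makes the mechanism visible. All numbers here are hypothetical: a baseline book with ordinary profit-and-loss noise is compared against the same book plus a written contract paying out in an independent 0.5%-probability scenario, deliberately below the 1% horizon the VaR measure looks at.

```python
import random
import statistics

def var(losses, level=0.99):
    """Empirical Value at Risk: the loss exceeded with probability
    (1 - level) in the sample (losses are positive numbers)."""
    ordered = sorted(losses)
    return ordered[int(level * len(ordered)) - 1]

def expected_shortfall(losses, level=0.99):
    """Average loss beyond the VaR threshold -- the part of the tail
    a 99% VaR measure does not see."""
    ordered = sorted(losses)
    return statistics.mean(ordered[int(level * len(ordered)):])

random.seed(42)
N = 100_000

# Baseline book: ordinary P&L noise (illustrative scale).
base = [random.gauss(0.0, 10.0) for _ in range(N)]

# "Stuffed" book: same positions plus a written contract that pays
# out 500 in an independent 0.5%-probability scenario.
stuffed = [l + (500.0 if random.random() < 0.005 else 0.0) for l in base]

print(f"99% VaR: base {var(base):6.1f}  stuffed {var(stuffed):6.1f}")
print(f"99% ES:  base {expected_shortfall(base):6.1f}  "
      f"stuffed {expected_shortfall(stuffed):6.1f}")
```

With these assumed parameters the two VaR figures stay close, while the expected shortfall of the stuffed book is several times larger: exactly the pattern the compensation metric rewarded and failed to see.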
Signs that the underlying risk was increasing, such as ballooning balance sheets and a tremendous increase in leverage, were overlooked as management teams tended to rely exclusively on models for their view of risk. The risk management systems generally had no controls on risk that were independent of the models. Furthermore, the likelihood that traders may be “gaming” imperfections in the risk models was not sufficiently addressed within the risk management framework.
What’s important for the future?
First, a risk manager must recognize that there are, and always will be, limitations to risk models, especially in the extreme tail (the most important part of the distribution for risk management purposes). No matter how sophisticated the models become, reality will always be more complex and dynamic; the risk environment is constantly evolving.
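The instability of extreme-tail estimates can be shown with a small experiment. The loss process below (a Pareto tail) is an assumed form, chosen purely for illustration: many equally plausible 25-year histories are drawn from the *same* process, and the worst loss each history would have shown is recorded.

```python
import random
import statistics

random.seed(7)

def annual_loss():
    # Heavy-tailed loss process (Pareto shape 1.5) -- an assumed
    # form, used only to illustrate tail behaviour.
    return random.paretovariate(1.5)

# 2,000 independent 25-year "histories" from the same process;
# record the largest loss each one would have exhibited.
worst = [max(annual_loss() for _ in range(25)) for _ in range(2000)]

median_worst = statistics.median(worst)
p95_worst = sorted(worst)[int(0.95 * len(worst))]

print(f"median worst-in-25-years:          {median_worst:.1f}")
print(f"95th-percentile worst-in-25-years: {p95_worst:.1f}")
```

Identical processes routinely produce “worst observed” losses several multiples apart, so a model calibrated to any single short history implies a very different tail from one calibrated to another, which is the sense in which past data has little credibility there.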
Correspondingly, a risk manager should view risk model output as a guide to understanding the risk and as one component of a robust risk management framework. One should not rely exclusively on a model; rather, one should supplement the model’s output with:
- effective judgement regarding unmodeled and unknown risks
- absolute risk limits that are independent of risk models
- use of common-sense warning signs, such as indicators of large increases in leverage or gross risk positions.
All such accommodations will come at the apparent cost of reduced “modeled optimization”, but they are in fact the only sure way of achieving genuinely optimal results.
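The model-independent limits described above can be sketched as a thin control layer. The names and thresholds here are hypothetical; the point is only that none of the checks consumes any model output.

```python
from dataclasses import dataclass

@dataclass
class Position:
    notional: float      # gross contractual exposure
    market_value: float  # current carrying value

def breaches(positions, capital,
             max_gross_notional_ratio=10.0,  # assumed threshold
             max_leverage=5.0):              # assumed threshold
    """Return warnings for hard limits that depend only on observable
    balance-sheet quantities, never on a risk model."""
    warnings = []
    gross = sum(abs(p.notional) for p in positions)
    if gross > max_gross_notional_ratio * capital:
        warnings.append("gross notional limit breached")
    exposure = sum(p.market_value for p in positions)
    if exposure > max_leverage * capital:
        warnings.append("leverage limit breached")
    return warnings

# Illustrative book: both limits breached at capital of 100.
book = [Position(notional=900.0, market_value=300.0),
        Position(notional=400.0, market_value=350.0)]
print(breaches(book, capital=100.0))
```

Because the checks read only notionals, market values and capital, a trader “gaming” the risk model’s tail cannot game them; a ballooning balance sheet trips the limit regardless of what the model says.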
¹ Value at Risk (VaR) is the lowest annual aggregated loss amount at a given probability of exceedance.