In our last column (Danielsson and Macrae 2011), we considered the three key problems arising from data snooping, error maximisation, and extreme forecasts. All of these will arise to some extent in most practical situations; this fact must be taken into account when we consider the proper use of these models.
Risk models are used in four different (though overlapping) situations:
- Routine understanding of risk by banks and supervisors;
- Routine management and control of risk by banks and trading desks;
- Analysis of systemic and regulatory risk;
- Management and control of systemic risk by supervisors.
We consider these in turn.
Routine understanding of risk by banks and supervisors
The easiest situation is where risk models are used to understand rather than constrain risk, and where risk assessments concern routine rather than extreme risks, so that large amounts of relevant data are available.
Because portfolios are not constrained by the models, error maximisation is not an issue, and complex models with good expressive power and a good fit to historic data can be employed, while large sample sizes keep problems of data snooping and over-fitting to a moderate size.
These circumstances are ideal for off-the-shelf models and they should be expected to perform well in this role. Any failures should not be systemically important. Unfortunately such uses of models, although common in academic studies, are likely to be rare in practice.
Routine management of risk by financial institutions
When someone in industry models risk, it is generally to control risk. This introduces error maximisation. The more risk models are used to constrain risk taking, the more fragile error maximisation makes them. Consequently, one should be especially careful to avoid the problem of data snooping.
This suggests particular criteria for evaluating models used to control risk. They do not need to fit historic data particularly well. Instead, they need to be robust against currently-unknown future mistakes. A good in-sample fit is not only irrelevant, but even dangerous because it gives more scope for error maximisation. This suggests that models used to constrain risk should be substantially simpler than models used to understand risk. We think this distinction is not widely understood.
Supervisors should be particularly cautious about demanding the use of similar risk models across multiple institutions because by doing so they increase the likelihood that all institutions suffer a failure of risk control at the same time, elevating an individual problem into a systemic one. This was noted by Danielsson and Shin (2003) in their discussion of endogenous risk.
Analysis of systemic risk
The area of systemic risk brings further challenges. Here the question of interest is not the risk of financial institutions failing, but rather the risk of cascading failures. Consequently, a reliable systemic risk model needs to capture the risk of each systemically important institution, as well as their interactions. This will be challenging since such models will reflect the financial system as it is, not as it would be if policymakers acted on the model. Since the models are endogenous to the system they model it is impossible to avoid a substantial subjective judgement on what influence they may have.
Systemic risk is concerned with events that happen during crisis conditions, looking far into the tails of distributions. This makes the paucity of relevant data a major concern, dictating the use of very simple models to keep the adverse effects of over-fitting to reasonable levels. In addition, any reliable systemic risk model needs to address the transition from non-crisis to crisis.
Incentives make the process difficult. During tranquil times, the risk of extreme events is a low priority. Extreme outcomes happen only rarely, perhaps once a decade, and bonus and employment cycles are much shorter than that. It is not in the interest of risk takers, speculating with other people's money, to be overly concerned with extreme risk. Even if the financial institution or the supervisor does have such concerns, such risk taking is difficult to detect when concealed in a turbid alphabet soup of derivatives. In the final analysis, the likelihood of extreme events is often impossible to detect with any model.
Daily 95% or 99% risk levels, such as those in the Basel market risk accords, are of very little direct relevance for systemic risk, and further introduce the problem of error maximisation. We therefore disagree with the widespread assumption that successful models for routine risk can be expected to perform well in systemic risk forecasting. This applies to many systemic risk models currently being proposed by government institutions.
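To see why daily 95% or 99% risk levels say little about extreme outcomes, consider a minimal sketch (the simulated returns and sample size below are our own illustrative assumptions, not data from any institution): with two years of daily observations, a 99% historical-simulation VaR is pinned down by only a handful of tail observations.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical two years of daily returns (~500 observations), fat-tailed
returns = rng.standard_t(df=4, size=500) * 0.01

# Historical-simulation 99% daily VaR: the loss exceeded on 1% of days
var_99 = -np.quantile(returns, 0.01)

# Count how few observations actually lie beyond the VaR estimate
n_tail = int(np.sum(-returns > var_99))
print(f"99% daily VaR: {var_99:.4f}")
print(f"Observations beyond VaR: {n_tail} of {len(returns)}")
```

Roughly five of the 500 observations inform the estimate; a once-a-decade systemic event may not appear in the sample at all.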
Ultimately, the difficulty of the systemic risk problem suggests that supervisors working on systemic risk should be wary of statistical models of extreme market outcomes. The models may provide a number labelled "systemic risk", but this does not mean that the number has any meaning. Excessive belief in statistical models will lead supervisors to defend against obsolete threats and leave them blind to the new.
Here be dragons: The challenge of controlling extreme risk
Medieval mapmakers often marked dangers of an unknown kind with the notation “here be dragons”. Attempts at controlling extreme risk should come with a similar warning. Just like the sailors of yesteryear, financial institutions will venture into unknown territories and, just like the mapmakers of that era, modern risk modellers have very little to say about them.
Financial institutions, willingly or not, will assume extreme risk. This cannot be prevented with any cost-effective methods. Nevertheless, the repeated occurrence of extreme events in financial markets implies that such risks need to be contained and managed. After all, following a crisis event there is generally strong political pressure on supervisors and financial institutions to prevent recurrence.
This however leaves open the question as to the extent to which it can be accomplished. We argue above that it is difficult to forecast extreme risk because of over-fitting. Basing risk constraints on the resulting estimates adds the problem of error maximisation, placing any supervisor seeking to constrain extreme risk taking in a very difficult situation.
Financial institutions and supervisors seeking to control extreme risk-taking should be careful in their use of statistical models, and certainly not use them simply as a substitute for trying to understand what trading strategies are being employed and how they might contribute to some future risk. These models must face all the challenges of those used to understand systemic risk, and in addition will face error maximisation.
Models are of course necessary but they should be extremely simple and founded on very basic measures of asset size and risk. They should have very few (or ideally no) parameters estimated from history, because each parameter increases the scope for data snooping and error maximisation. They must make a limited and wary (or no) allowance for hedging, because in a crisis hedging can be expected to fail.
Unfortunately, the current thrust of regulation seems to be in the opposite direction, mandating the use of powerful and expressive models that may provide a good fit to historic data, even deep in the tails, and then using those models as a rigorous constraint on risk taking across many portfolios. This is exactly the most fragile approach.
As we started this article with a challenge from a risk manager, we want to use this analysis to make specific recommendations on the use of models.
Most importantly we want to underline the need for understanding. Risk estimates do not exist in a vacuum; they are made for some purpose and based on some model. Sensible estimates cannot be made unless both are understood. Ideally risk managers should create their own models, as this is the best way to understand a model's limitations. Outsourcing risk-model creation is outsourcing a key component of risk control, and if it is done, substantial efforts must be put into understanding and monitoring the work that has been done.
Any risk forecast should come with robust analysis of forecast uncertainty. This might take the form of the fan charts used by the Bank of England for inflation forecasts. Forecast uncertainty should incorporate statistical uncertainty within the model (such as parameter standard errors), model risk (uncertainty created by using the wrong model), as well as an estimate of the bias introduced by data snooping. Existing statistical methodology allows for such calculations; it is just a matter of mandating their use. Any serious discussion of risk should incorporate explicit estimates of uncertainty.
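One standard piece of existing methodology is the bootstrap. The sketch below (hypothetical simulated returns; the 99% VaR estimator and the resampling scheme are our illustrative choices, not a prescribed method) attaches a crude interval to a point risk forecast:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical four years of daily returns
returns = rng.standard_t(df=4, size=1000) * 0.01

def var_99(x):
    """Historical-simulation 99% value-at-risk."""
    return -np.quantile(x, 0.01)

point = var_99(returns)

# Resample the data with replacement and re-estimate, to see how much
# the estimate moves purely from sampling variation
boot = np.array([var_99(rng.choice(returns, size=returns.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.quantile(boot, [0.05, 0.95])
print(f"99% VaR point estimate: {point:.4f}")
print(f"90% bootstrap interval: [{lo:.4f}, {hi:.4f}]")
```

Even this captures only statistical uncertainty within the model; model risk and data-snooping bias would widen the interval further.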
Of course providing sensitivity analysis for risk forecasts does create problems, particularly making the interpretation of the results harder to communicate to senior management. At the end of the day practical decisions have to be made and few decision-makers are comfortable with the multi-dimensional integrations required when using full distributions rather than point forecasts. Nor can they be expected to enjoy the explicit recognition that every decision may be wrong.
Unfortunately this is the reality of decision making, and obscuring it may make life more comfortable but corrupts the decision-making process. At the very least, there should be lively discussion of uncertainties between risk professionals and any software providers involved, both within individual firms and also in supervisory agencies.
Extreme risk creates additional challenges due to the paucity of relevant data, the correspondingly large subjective element in the model, the length of time required to falsify an incorrect model, and the seriousness of the consequences when an extreme event occurs. There is a natural tendency in the industry to sweep problems of extreme risk under the carpet. After all, if the portfolio or the institution does not blow up on my watch, why should I care? Relying on statistical models to assess extreme and systemic risk is likely to foster a false belief that the problem is under control, such as persisted for a decade after Alan Greenspan's "irrational exuberance" comment of 1996.
Macroprudential regulations should focus on prevention and resolution of systemic events. This is very different from trying to smooth out risk-taking on a day-to-day basis, and the latter may well be counterproductive if the result is to lower volatility at the cost of producing fatter tails. More attention should be paid to preventative measures that are model-free or depend only on the very simplest models, such as restrictions on loan-to-value ratios, minimum equity capital based on total (not risk-weighted) assets, such as the leverage ratio, and the Basel 3 liquidity constraints. The ongoing work on living wills and resolution is very encouraging, even if much remains to be done (see e.g. Bassani and Trapanese 2011).
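A model-free constraint of this kind is deliberately trivial to state and to compute. The sketch below illustrates a leverage-ratio check; the figures and the 3% threshold are illustrative assumptions only, not the calibration of any actual rule:

```python
def leverage_ratio_ok(tier1_capital: float, total_assets: float,
                      minimum: float = 0.03) -> bool:
    """Basel-3-style leverage ratio check: capital over total
    (not risk-weighted) assets, with no parameters estimated
    from historic data."""
    return tier1_capital / total_assets >= minimum

# Illustrative balance sheet: 40bn of capital against 1,000bn of assets
print(leverage_ratio_ok(40.0, 1000.0))   # 4% clears a 3% minimum
```

There is nothing here for error maximisation to exploit: no fitted parameters, no risk weights, no allowance for hedging.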
Financial risk models, statistical and non-statistical, are essential for the functioning of financial markets and it would be impossible to manage risk without them. However, there is a tendency both by financial institutions and by supervisors to overstate the reliability of models and underplay their dangers, so it is important to identify their limitations.
We have identified several different reasons why risk models fail in practical use. This allows us to make constructive recommendations on where models can be used with confidence, where problems are likely, on what characteristics models should have and how they should be used, and on how the nature of the intended application will influence all these considerations.
Bassani, Giovanni and Maurizio Trapanese (2011), “Crisis Management and Resolution”, forthcoming in M Quagliariello and F Cannata (eds.), Banking Regulation, Risk Books.
Danielsson, Jon and Hyun Song Shin (2003), “Endogenous Risk”, in Modern Risk Management: A History, Risk Books.
Danielsson, Jon and Robert Macrae (2011), “The appropriate use of risk models: Part I”, VoxEU.org, 16 June.