The final shape of the Basel III proposals is becoming increasingly clear (see Basel Committee 2012, 2013). While the proposals are generally quite technical, the fundamental elements of the market-risk proposals are simple and easily evaluated, making it possible to assess the quality of the overall package.

While the fundamental logic behind the particular regulatory changes might seem quite sensible, once the theoretical concepts are implemented empirically the outcome is no longer so clear, and there are reasons to expect the opposite of the intended outcomes to actually happen.

A quick summary of the changes

The original 1996 version of the market-risk regulations, essentially still in effect, is about to see significant changes, in particular:

- A move to expected shortfall (ES) from Value-at-Risk (VaR);
- The lowering of the confidence level from 99% to 97.5%;
- The holding period is to be calculated by overlapping windows;
- ES is to be adjusted by stressed observations.

Recent research: Quantitative analysis of the proposals

In order to examine the quantitative impact of the proposals, we conducted a detailed quantitative study, incorporating analytical calculations, Monte Carlo simulations and results from observed data, making use of normal and fat-tailed distributions, small and large sample sizes, and different holding-period approaches (Danielsson 2013). These results represent a best-case scenario, since there are few areas of risk where the key issues are better understood than in core market-risk methodologies. The technical results are reported on www.modelsandrisk.org/basel, and the discussion below draws on them.

ES and 97.5% confidence levels

The most important change is the switch from VaR to ES, together with a reduction in the confidence level from 99% to 97.5%. This is motivated by:

"the current framework’s reliance on VaR as a quantitative risk metric raises a number of issues, most notably the inability of the measure to capture the “tail risk” of the loss distribution. The Committee has therefore decided to use an expected shortfall (ES) measure … ES accounts for the tail risk in a more comprehensive manner, considering both the size and likelihood of losses above a certain threshold." BCBS (2013).

In this, the Committee has changed its view since it rejected ES in favour of VaR in the Basel II process, as reported in the original impact studies conducted by the Bank of Japan; see for instance Yamai and Yoshiba (2002). They found that ES is "not related to the firm’s own default probability. Not easily applied to efficient backtesting method. Not ensured with stable estimation".

The choice of ES does raise interesting and even difficult issues:

- ES is estimated conditional on VaR, giving rise to the possibility that estimation and model risk for ES will be strictly higher than for VaR.

One could also find the opposite: that the smoothing of the tails makes ES more stable than VaR. In practice, we find that 97.5% ES risk forecasts are generally more volatile than 99% VaR forecasts, supporting the first hypothesis. For example, for a Student-t, the 97.5% ES is over 40% more volatile than its 99% VaR counterpart, while for the conditional normal it is more than 10% more volatile.

- This happens even though, on average, 97.5% ES is almost exactly the same as 99% VaR for conditionally normal procedures, by far the most common in industry, with the difference remaining quite small for conditionally fat-tailed distributions.

This result holds in both small and large samples, as well as asymptotically. We verified this with both parametric and nonparametric estimation methods, and the same result carries through with observed financial returns.
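The near-equality for the conditionally normal case can be checked directly; a minimal sketch (standard normal, using scipy's closed forms; illustrative only, not the paper's code):

```python
from scipy.stats import norm

# For a standard normal, VaR at level p is the p-quantile, while ES at
# level p has the closed form phi(Phi^{-1}(p)) / (1 - p).
var_99 = norm.ppf(0.99)
es_975 = norm.pdf(norm.ppf(0.975)) / (1 - 0.975)

print(f"99% VaR:  {var_99:.4f}")   # ~2.3263
print(f"97.5% ES: {es_975:.4f}")   # ~2.3378
```

The two numbers differ by less than half a percent, which is why the measure and the confidence level could be switched simultaneously without mechanically changing capital for normal-tailed books.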

- It is harder to backtest ES than VaR.

The reason is that for VaR, violations are observable Bernoulli-distributed variables, enabling the analyst to apply formal statistical procedures to ascertain whether the distribution of the violations conforms to the underlying model. In other words, one compares model predictions to observed outcomes. This is not the case for ES, where one can only compare a model prediction to a model outcome, raising thorny model-risk issues. While several procedures are available for backtesting ES, they fall far short of the quality of their VaR equivalents. This is problematic since science calls for validation, and ultimately one has to take ES risk forecasts largely on faith.
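The mechanics of a VaR backtest are simple enough to sketch. A stylised example (simulated returns and a hypothetical 500-day window, not an actual bank backtest) that tests whether the observed violation count is consistent with the model's 1% coverage:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(42)

# Hypothetical backtest window: 500 daily returns drawn from the (true)
# model, against a fixed 99% VaR forecast from a standard-normal model.
returns = rng.standard_normal(500)
var_99 = 2.3263  # 99% VaR of a standard normal

# A violation is an observable Bernoulli event: the loss exceeds the forecast.
violations = int((-returns > var_99).sum())

# Under a correct model, violations ~ Binomial(500, 0.01); test that coverage.
result = binomtest(violations, n=500, p=0.01)
print(violations, round(result.pvalue, 3))
```

No comparable observable event exists for ES: the realised tail mean is itself a model-dependent quantity, which is what makes ES backtests so much weaker.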

- The most common justification for choosing ES is that it is subadditive across all levels of tail thickness, except the most extreme.

However, recent research (Danielsson et al. 2012) finds that VaR is also subadditive, except for the fattest tails, so fat that they are highly unlikely to be seen for most asset classes. This means that there is no decision-making advantage to ES over VaR in most cases.

- One area where ES has a clear advantage over VaR is that ES is harder to game or manipulate. It is straightforward to significantly lower VaR at the expense of fatter tails, but that would be detected by ES.
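A stylised simulation of this point (illustrative, not from the paper): a book engineered so that everything below the 99% quantile is untouched, while losses beyond it are doubled, leaves the 99% VaR essentially unchanged but roughly doubles the ES:

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.standard_normal(100_000)  # baseline loss distribution

def var_es(x, p=0.99):
    """Empirical VaR (p-quantile of losses) and ES (mean loss beyond VaR)."""
    v = np.quantile(x, p)
    return v, x[x > v].mean()

# "Gamed" book: the 99% quantile itself is unaffected, so VaR is unchanged,
# but every loss beyond it is doubled, fattening the tail.
v99 = np.quantile(losses, 0.99)
gamed = np.where(losses > v99, 2 * losses, losses)

v_base, es_base = var_es(losses)
v_gamed, es_gamed = var_es(gamed)
print(f"VaR: {v_base:.3f} -> {v_gamed:.3f}")
print(f"ES:  {es_base:.3f} -> {es_gamed:.3f}")
```

VaR, looking only at the quantile, is blind to the manipulation; ES, averaging over the tail, picks it up immediately.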

Holding periods

Shocks often happen over multiple days, and consequently it is desirable to evaluate risk over multi-day time periods, or holding periods in the jargon. This is, however, difficult in practice because the longer the holding period, the larger the data sample needs to be. For example, if one needs 500 days for estimation with daily holding periods, one would need 5,000 days, or two decades, for 10-day holding periods. This will often exceed the amount of available data.

The original 1996 amendment proposed 10-day holding periods and, recognizing the data difficulty, allowed the use of a square-root-of-10 adjustment, whereby one-day risk is scaled up by √10. While one can argue that this is too high or too low, depending on model assumptions, it is reasonable in most cases.
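The rule itself is one line; a minimal sketch (hypothetical 2% one-day VaR; the rule is exact only under i.i.d. normal returns):

```python
import math

# Square-root-of-time rule: scale one-day risk to an h-day horizon by sqrt(h).
# Exact for i.i.d. normal returns; an approximation otherwise.
daily_var = 0.02               # hypothetical 2% one-day VaR
h = 10
ten_day_var = daily_var * math.sqrt(h)
print(f"{ten_day_var:.4f}")    # ~0.0632
```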

The new proposals do away with the square-root-of-time approach, instead suggesting the use of overlapping n-day holding periods. In some cases it does make sense to use overlapping periods, for example when calculating the risk of illiquid assets that trade only infrequently. That is not the case for large-cap equities in major financial centers. Trying to arbitrarily create a bigger data sample than one actually has creates the illusion of information; after all, it is the same daily information multiplied. In other words, one does not get real multi-day periods, only the same daily data repeated. The problem with overlapping periods is well understood; see for example Hansen and Hodrick (1980).

Our results confirm this: the use of overlapping periods introduces more bias than would otherwise be the case, and, more importantly, the uncertainty of the risk forecasts increases significantly, in some cases to the extent that the risk forecast becomes statistically indistinguishable from zero.
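The information loss from overlap is easy to demonstrate. A small simulation (i.i.d. simulated returns, not the paper's data) shows that adjacent overlapping 10-day returns are almost perfectly correlated, so the "extra" observations add very little new information:

```python
import numpy as np

rng = np.random.default_rng(1)
daily = rng.standard_normal(2_000)  # i.i.d. daily returns

# Overlapping 10-day returns: consecutive windows share 9 of their 10 days.
h = 10
overlapping = np.convolve(daily, np.ones(h), mode="valid")

def acf1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float(x[1:] @ x[:-1] / (x @ x))

# Daily returns: near zero. Overlapping sums: close to (h-1)/h = 0.9.
print(f"daily:       {acf1(daily):+.2f}")
print(f"overlapping: {acf1(overlapping):+.2f}")
```

Estimates based on such a series behave as if drawn from far fewer independent observations, which is where the extra bias and forecast uncertainty come from.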

Stress adjustment

Finally, the Committee proposes the introduction of a stressed ES. While the proposals are scant on detail, it appears that the ES used for capital purposes will be set equal to the historically highest ES (over a one-year window), adjusted by the ratio of a reduced set of risk factors in that worst year to the same factors on the current date.

This is an important step towards more robust risk assessments. However, most risk-forecast models tend to over-forecast risk in the most stressed time periods, and are therefore biased. The more nuanced approach of Boucher et al. (2013), whereby risk forecasts are robustly adjusted by their historical performance, may work better.

Furthermore, a bank could eliminate all estimated tail risk simply by perfectly hedging its core factor risk, so that the adjustment ratio, and hence the stressed ES, becomes zero. Even if that is not possible, the proposed implementation appears to leave significant scope for risk-measure optimization by means of creative financial engineering.

Conclusion

The Basel Committee, in its most recent proposals, directly addresses the failures of risk methodologies before the crisis, aiming to deliver a robust regulatory framework.

Many of the proposals are sensible steps towards better financial regulations. However, we feel that they could have gone further in addressing the failures of risk-management regulation prior to the crisis. For example, the proposals still treat financial risk as exogenous, in the sense of Danielsson, Shin and Zigrand (2009), implying that they only capture risk after it becomes visible to the markets, rather than as it builds up out of sight, and that they do not capture the vicious feedback loops that are at the core of financial crises.

Furthermore, a detailed examination reveals that the move towards 97.5% ES and away from 99% VaR, whilst using overlapping holding periods, is likely to result in biased risk forecasts with significantly higher uncertainty than under the existing framework. By contrast, the move towards stressed ES is welcome, but the focus on historically worst scenarios is likely to lead to biased forecasts, and the specific implementation has the potential to lead to significant risk-measure optimization.

Bibliography

Basel Committee on Banking Supervision (2012), "Fundamental review of the trading book".

Basel Committee on Banking Supervision (2013), "Fundamental review of the trading book: A revised market risk framework".

Boucher C, J Danielsson, P Kouontchou and B Maillet (2013), "Risk Model-at-Risk", mimeo, London School of Economics.

Danielsson J (2013), "An evaluation of Basel III VaR and ES probabilities", posted on www.modelsandrisk.org/basel.

Danielsson J, C de Vries, B Jorgensen, S Mandira, and G Samorodnitsky (2012), “Fat Tails, VaR and Subadditivity", *Journal of Econometrics*.

Danielsson J, H S Shin and J-P Zigrand (2009), "Modelling financial turmoil through endogenous risk", VoxEU.org, 11 March.

Hansen, L and R Hodrick (1980) “Forward Exchange Rates as Optimal Predictors of Future Spot Rates”, *Journal of Political Economy*, 88, 829-853.

Yamai, Yasuhiro and Toshinao Yoshiba (2002), "On the Validity of Value-at-Risk: Comparative Analyses with Expected Shortfall", *Monetary and Economic Studies*, Bank of Japan, January 2002.