
A farewell to VaRms

14 May 2012 By Peter Thal Larsen

JPMorgan may have dealt a fatal blow to Value-at-Risk. The tool for gauging banks’ trading exposures had already been wounded by the credit crunch. Now the U.S. bank’s $2 billion loss gives regulators a further incentive to kill it off for good.

VaR, which uses market movements to estimate the most a bank could lose on any given position in a single day, has become the industry standard for measuring trading risk. But it is deeply flawed. To begin with, VaR doesn’t fully capture the risk that credit instruments will default. It also fails to recognise that securities can suddenly become illiquid, as they did in the summer of 2007. Finally – and most crucially – VaR assumes that daily market movements follow a normal distribution, in which small movements are the norm and violent swings are rare. This makes VaR models vulnerable to so-called “fat tail” scenarios, which lie outside the 99 percent of situations that the models purport to capture.
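To see what a 99 percent one-day VaR actually measures, here is a minimal historical-simulation sketch. All numbers are hypothetical: the daily profit-and-loss series is simulated with fat-tailed Student-t draws rather than taken from any real trading book, and the confidence level and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily P&L history in millions of dollars.
# Student-t draws (3 degrees of freedom) have fatter tails
# than the normal distribution VaR models typically assume.
pnl = rng.standard_t(df=3, size=5000)

confidence = 0.99
# Historical-simulation VaR: the loss exceeded on only
# (1 - confidence) = 1 percent of days in the sample.
var_99 = -np.quantile(pnl, 1 - confidence)

print(f"99% one-day VaR: {var_99:.2f}m")
```

The number says nothing about how bad the remaining 1 percent of days can get, which is precisely the “fat tail” blind spot the article describes.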

These defects were on display in 2007 and 2008, when most big banks suffered losses that far exceeded the sums predicted by their VaR models. Even so, banks have continued to use VaR to measure and report trading risks. And even though regulators have pushed through some improvements, such as requiring banks to base their calculations on periods of past market turmoil, the models remain highly suspect.

Just look at JPMorgan. At the beginning of the year, the bank’s chief investment office, which manages a portfolio worth $361 billion, quietly introduced a new VaR model. Last month, JPMorgan said the average VaR for its CIO in the first quarter was $67 million – in line with the $60 million figure for the same period of 2011. But JPMorgan then reverted to the old model, and ran the calculation again. This time, the CIO’s average VaR was $129 million. Somehow, the new model had managed to cut JPMorgan’s reported risk in half.

Any measure that produces such diverging outcomes deserves to be binned. Fortunately, regulators have some alternatives in mind. Earlier this month, the Basel Committee on Banking Supervision proposed ditching VaR in favour of a measure called Expected Shortfall (ES) when calculating the capital banks have to hold against their securities portfolios.

ES attempts to estimate the size and likelihood of extreme losses. In this sense, ES is the opposite of VaR – it focuses on those large-but-remote risks that VaR does not capture. Switching to ES should therefore force banks to hold more capital – and become more risk-averse.
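Continuing the hypothetical simulation above, the contrast between the two measures can be sketched in a few lines: where VaR reads off a single quantile, Expected Shortfall averages the losses beyond it. Again, the P&L series and parameters are illustrative assumptions, not any bank’s real data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical fat-tailed daily P&L, in millions of dollars.
pnl = rng.standard_t(df=3, size=5000)

confidence = 0.99
# VaR: the threshold loss exceeded on 1 percent of days.
var = -np.quantile(pnl, 1 - confidence)

# Expected Shortfall: the average loss on the days that are
# worse than the VaR threshold - the tail VaR ignores.
tail = pnl[pnl <= -var]
es = -tail.mean()

print(f"99% VaR: {var:.2f}m, 99% ES: {es:.2f}m")
```

Because ES averages only the worst outcomes, it is always at least as large as VaR at the same confidence level, which is why switching to it should push capital requirements up.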

ES is far from flawless. To begin with, it involves more guesswork: by definition, there are far fewer examples of extreme losses on which to base a calculation. Second, it shares VaR’s dependence on historical experience. This could encourage traders to underestimate or deliberately ignore new risks.

However, the Basel Committee is also re-thinking the whole concept of relying on banks’ own risk models when it comes to calculating capital ratios. Under the existing rules, banks have two options when working out how much capital they need: they can plug their numbers into a model supplied by the regulators, or they can use home-grown models approved by supervisors. Most big banks opt for the latter. That’s not entirely surprising: though internal models are expensive to design and maintain, they tend to spit out lower capital requirements than the standardised alternative.

Scarred by the crisis, regulators want to close the gap between the two approaches. The Basel Committee’s proposal is that banks which use internal models will also have to run their numbers through the standardised alternative. If the internal model produces a much lower result, it should be rejected. Indeed, one suggestion is that the standard, regulator-supplied models should provide a floor for capital requirements.

This represents nothing less than a revolution in bank regulation. For most of the past two decades, regulators have assumed that big banks had better and more sophisticated measures of risk. That proved completely wrong. Forcing banks to make greater use of standardised models raises other potential problems – in particular, it could make the industry more homogeneous and prone to making the same mistakes at the same time. But as JPMorgan has once again demonstrated, that could hardly be worse than relying on banks to measure their own risks.

