With the recent hedging loss at JP Morgan swelling into the billions, the idea of Value-at-Risk (VaR) is again in the spotlight. As with any risk measure, there are pros and cons. Without a clear understanding of what is and isn’t being measured and the assumptions being made, VaR can misrepresent the risks of an investment.
The first thing to understand about VaR is that there isn’t a single number for it. Different assumptions about the distribution model, the “worst-case” break-points, and other factors can lead to wildly divergent VaR figures. The second thing to understand is how VaR has evolved from an ultra-short-term snapshot of daily trading positions into a metric used in the analysis of long-term investments like mutual funds and portfolios. This first ZephyrCOVE post addresses the assumptions and evolution of VaR; a follow-up post will cover how VaR is being used in manager and portfolio analysis.
The original iterations of VaR were simple and straightforward. In the early 1990s, JP Morgan applied a normal, Gaussian, bell-shaped curve over their trading positions in order to make estimates about the range of position values under normal conditions. In order to conduct this kind of analysis, certain assumptions need to be made, including:

1. The tails of the return distribution are no fatter than those of a normal distribution.
2. The return distribution is symmetric, not skewed.
3. A cut-off point can be chosen that defines the limits of “normal” markets.
4. What happens beyond that cut-off point, in the tails, is outside the scope of the measure.
5. The historical data used is representative of all future possibilities.
6. Each period’s return is independent of the others, so the timing of returns does not matter.
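The original normal-distribution approach can be sketched in a few lines. The daily P&L figures below are illustrative assumptions, not actual trading data, and 1.645 is the standard z-score for the 5th percentile of a normal distribution:

```python
import statistics

# Hypothetical daily P&L figures (in $ millions) for a trading book.
daily_pnl = [1.2, -0.8, 0.5, -2.1, 0.9, -0.3, 1.7, -1.4, 0.2, -0.6]

mu = statistics.mean(daily_pnl)
sigma = statistics.stdev(daily_pnl)

# Under the normality assumption, the 95% one-day VaR is the loss at
# 1.645 standard deviations below the mean (z-score of the 5th percentile).
z_95 = 1.645
var_95 = -(mu - z_95 * sigma)

print(f"95% one-day VaR: ${var_95:.2f} million")
```

In words: on 95% of trading days, the one-day loss should not exceed the computed figure, provided the normality assumption holds.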
VaR has evolved since its early days to address some of the above issues. Certain variations of VaR incorporate more sophisticated mathematical processes to accommodate the fact that the markets aren’t always distributed normally and have fatter tails or skewed distributions. The move away from the normal, bell-shaped distribution curve was meant to address points #1 and #2 above.
It is still up to the user to determine the cut-off point for what counts as a “normal” market, regardless of the shape of the distribution used (assumption #3 above). Typically the choices are 90%, 95%, or 99% of the time, but this discretionary call can result in very different values for VaR.
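A minimal sketch of how the breakpoint choice alone moves the number, using historical-simulation VaR on simulated returns (the return series is an illustrative assumption):

```python
import random

random.seed(42)
# Hypothetical: 1,000 simulated daily returns (in %) for the same position.
returns = [random.gauss(0.05, 1.0) for _ in range(1000)]

# Historical-simulation VaR: sort losses and read off the chosen percentile.
losses = sorted(-r for r in returns)  # positive = loss, worst last
for conf in (0.90, 0.95, 0.99):
    idx = int(conf * len(losses))
    print(f"{conf:.0%} VaR: {losses[idx]:.2f}% of position value")
```

Same position, same data, three different VaR figures; only the discretionary breakpoint changed.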
Moreover, VaR does not address what happens in the remaining 10%, 5%, or 1% tail. People often misperceive VaR as the maximum possible loss. In reality, VaR is a breakpoint, and it is entirely possible to exceed it. To address point #4, what happens in the tails, a newer metric called Conditional Value-at-Risk (CVaR) has been gaining traction in the market. CVaR is a weighted average of the outcomes in the tail, beyond the chosen breakpoint.
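The distinction can be sketched with simulated losses (the figures are assumptions, not market data): VaR is the breakpoint itself, while CVaR averages everything beyond it:

```python
import random

random.seed(7)
# Hypothetical simulated daily losses (in $ millions); positive = loss.
losses = sorted(random.gauss(0.0, 10.0) for _ in range(10_000))

conf = 0.95
cutoff = int(conf * len(losses))
var_95 = losses[cutoff]            # the breakpoint: 95% of days lose less
tail = losses[cutoff:]             # the worst 5% of outcomes
cvar_95 = sum(tail) / len(tail)    # average loss beyond the breakpoint

print(f"95% VaR:  ${var_95:.1f}M (the breakpoint)")
print(f"95% CVaR: ${cvar_95:.1f}M (average loss in the tail)")
```

By construction CVaR always sits beyond VaR, which is why it gives a more conservative picture of tail risk.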
However, none of these enhancements to the VaR calculation addresses points #5 and #6, which are arguably more important. The major assumption behind any probability distribution fitted to historical data is that the data is representative of all future possibilities (point #5). In plain English, the assumption is that because something has not happened before, it never will. This is a big leap of faith and is one of the main critiques of VaR.
Assumption #6, regarding the timing of returns, is lesser-known but also important. In a probability distribution, each observation is assumed to be independent of the others. For example, one could work through a VaR calculation and determine that the expected maximum one-day loss of a position at the 95% level is $50 million. But that is just for one day. What if four of those days were to happen over the span of a very bad week? At the end of the trading week the loss is $200 million.
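To make the arithmetic concrete, here is a sketch assuming the $50 million one-day VaR above and the common square-root-of-time scaling rule, which itself relies on the independence assumption:

```python
from math import sqrt

one_day_var = 50.0  # $ millions; the hypothetical 95% one-day VaR above

# Under the independence (i.i.d.) assumption, a common rule of thumb scales
# VaR by the square root of the horizon (5 trading days in a week).
five_day_var = one_day_var * sqrt(5)
print(f"Scaled 5-day VaR: ${five_day_var:.0f}M")        # ≈ $112M

# But losses cluster in real markets: four VaR-sized down days in one week
# stack up to far more than the scaled figure suggests.
clustered_loss = 4 * one_day_var
print(f"Clustered weekly loss: ${clustered_loss:.0f}M")  # $200M
```

The gap between the two figures is exactly the risk the independence assumption hides.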
While VaR is useful, it is important to understand the assumptions and limitations of the metric. In a follow-up post I will discuss how VaR is being used in manager and portfolio analysis and some bigger-picture ideas about VaR.