2008/12/23

Calculated Risks

In my professional life I work in a financial institution on systems designed to measure financial risk. This last year, and the last three months especially, has therefore been a very interesting one, as a lesson in the difference between theory and practice. Now, I should begin by saying that it is something of an open secret in the business that our risk models aren't very realistic. Almost everyone I've ever spoken to has an intuitive sense (if nothing more) that the real world doesn't work the way the model says it does. Not everyone is bothered by that fact, but almost everyone has some sense of it.

In finance, the risk on a portfolio of holdings (a mix of cash, stocks, bonds, derivatives) is measured by repricing it under different scenarios. Each scenario varies the price of some or all of the factors that affect the value of the various holdings. After a number of such scenarios have been priced, the results are ordered by the size of the gain or loss in value of the total portfolio. Somewhere toward the bottom of this list we say: Aha! This is our risk! Exactly how far down the list we go varies with the purpose of the measurement, but the principle is always the same: the Nth worst result is what we consider to be our 'realistic' risk in the given context.
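A minimal sketch of that "Nth worst result" idea, with invented numbers (the scenario count and confidence level are my assumptions, not anything from a real system):

```python
# Hypothetical illustration: risk as the Nth worst outcome over a set
# of repricing scenarios. All figures are made up for the example.
import random

random.seed(1)

# Pretend we priced the portfolio under 1,000 scenarios, each yielding
# a profit-or-loss figure for the whole portfolio.
scenario_pnl = [random.gauss(0, 1_000_000) for _ in range(1000)]

# Order the results from worst loss to best gain.
ordered = sorted(scenario_pnl)

# How far down the list we look depends on the chosen confidence level;
# at 99% over 1,000 scenarios, that's roughly the 10th worst result.
confidence = 0.99
n = int(len(ordered) * (1 - confidence))
risk = -ordered[n]  # report the loss as a positive number

print(f"99% 'realistic' risk: {risk:,.0f}")
```

The exact index convention (10th vs 11th worst, interpolation between ranks) varies between shops; the principle of sorting and picking a fixed rank is the point here.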

Several points worth commenting on emerge from this method. I'll stick to the most obvious: the measurement cannot possibly be any better than the set of scenarios on which it is based. Because of this, coming up with a good scenario set is a huge part of the challenge of risk measurement. One relatively simple method of building scenarios is to replicate the variations in prices actually observed in the marketplace in the past, over a given 'window' period. A more complicated approach measures the relationships (or correlations) between all the various factors at work, and then spins a complex web of outcomes into a set of scenarios. Both approaches rely on past observed relationships between factors, either implicitly (by replicating the results of these relationships) or explicitly (by using them as input to the 'spinning machine' that creates the scenario set).
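The simpler of the two approaches, replaying past observed moves against today's positions, can be sketched in a few lines. Everything here is invented for illustration: the factor names, the returns, the holdings.

```python
# Hypothetical sketch of historical-simulation scenario building: each
# scenario shocks today's prices by one past day's observed moves.

# A (tiny) window of hypothetical daily returns for two factors.
historical_returns = [
    {"stock": 0.012, "bond": -0.001},
    {"stock": -0.034, "bond": 0.004},
    {"stock": 0.005, "bond": 0.000},
    # in practice the window would hold a year or more of trading days
]

today_prices = {"stock": 100.0, "bond": 98.0}
holdings = {"stock": 500, "bond": 1000}  # units held

scenario_pnl = []
for day in historical_returns:
    # Reprice the portfolio under this day's moves and record the P&L.
    pnl = sum(holdings[f] * today_prices[f] * day[f] for f in holdings)
    scenario_pnl.append(pnl)

print(sorted(scenario_pnl))  # worst outcome first
```

Note how the past relationships between factors are baked in implicitly: a day on which stocks fell and bonds rose replays both moves together, whether or not that relationship will hold tomorrow.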

The problem with this is a variation on the old statistical saw that 'correlation is not causation'. Basically, the fact that two stock prices (for example) are correlated in a given period doesn't mean that either of the two companies behind those stocks is necessarily dependent on the other, or that the observed correlation will hold up under duress. A fairly easy illustration is that in a strong bull market, stocks rise in a way that tends to create an observable, statistically significant positive correlation between many disparate stocks. The effect is stronger within an economic sector. When the economy sours, well, let's just say those correlations are quickly revealed to be overstated.
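A synthetic demonstration of that illustration (all data invented): two series of pure, independent noise look strongly correlated once they both ride the same rising market, and the correlation evaporates when the shared trend is taken away.

```python
# Synthetic example: a shared bull-market trend manufactures a high
# measured correlation between two otherwise unrelated 'stocks'.
import math
import random

random.seed(7)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two independent noise series...
noise_a = [random.gauss(0, 1) for _ in range(250)]
noise_b = [random.gauss(0, 1) for _ in range(250)]

# ...that both ride the same steadily rising market.
trend = [0.05 * t for t in range(250)]
bull_a = [t + n for t, n in zip(trend, noise_a)]
bull_b = [t + n for t, n in zip(trend, noise_b)]

print(f"correlation with shared trend:  {pearson(bull_a, bull_b):.2f}")
print(f"correlation of the noise alone: {pearson(noise_a, noise_b):.2f}")
```

A scenario generator fed the first number will spin scenarios in which these two stocks move together; it has no way of knowing the relationship was the bull market's doing all along.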

All this means that in times of stress, when financial risk is greatest, our risk models are at their most unreliable. Now how's that for an encouraging thought? So what can be done? The problem is that generating these scenarios is a fiendishly complex process as it is, with tens of thousands of variables. Adding more exponentially increases the complexity and quite possibly* doesn't substantially improve the predictive power of the models. Perhaps we just need to pay a little more heed to that intuition I mentioned at the outset. Perhaps if all our reports contained that little line of boilerplate so common on other financial documents, "past results are no guarantee of future performance", we might not be in quite as bad a situation as we appear to be. Mea culpa.

_________________________
*I hope to continue this thought later.
