One can endlessly criticize risk models, but that is just too nihilistic. So, what are they good for? There are three camps: the model believers, the rejectionists, and the healthy skeptics. I'm going to make the case for the last below.
It is easy to criticize risk models, depressingly easy. I certainly have been guilty of that like many others.
However, taking their accuracy as given, how should they be used in real applications?
Let's classify their uses into four categories.
Risk models were really designed for managing day-to-day risk, say on the trading floor.
Suppose you have Ann, Bill, and Jo trading the same stuff, and all of a sudden Ann's VaR, relative to Bill's and Jo's, shoots up. Well then, you know something is afoot. Perhaps Ann is taking too much risk, or Bill and Jo too little, or perhaps the model messed up.
Regardless, it is a useful signal, and the intelligent risk manager will treat it that way. Of course, if the risk manager is the tick-the-box type (to be discussed later), then all bets are off, but let's set that aside.
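The relative-VaR signal above can be sketched in a few lines. This is a minimal illustration, not anyone's production system: the trader names, P&L numbers, and the two-times-median flagging threshold are all made up for the example, and the VaR is plain one-day historical simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily P&L histories (in $m) for three traders on the same desk.
# Ann is simulated with larger positions, so her losses should stand out.
pnl = {
    "Ann":  rng.normal(0, 3.0, 250),
    "Bill": rng.normal(0, 1.0, 250),
    "Jo":   rng.normal(0, 1.0, 250),
}

def hist_var(pnl_series, level=0.99):
    """One-day historical-simulation VaR: the loss exceeded on (1 - level) of days."""
    return -np.quantile(pnl_series, 1 - level)

vars_ = {name: hist_var(series) for name, series in pnl.items()}

# Flag any trader whose VaR is far above the desk median. The flag is a
# prompt to investigate, not a verdict: it could mean excess risk taking
# by the flagged trader, too little by the others, or a model error.
median_var = np.median(list(vars_.values()))
flagged = [name for name, v in vars_.items() if v > 2 * median_var]
print(vars_, flagged)
```

The point of comparing traders against each other, rather than against a fixed limit, is exactly the one made above: the absolute VaR number may be wrong, but a sudden divergence between people trading the same things is informative regardless.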
Here, the question is how a financial institution allocates capital between asset classes. Does it want to decrease its exposure to European small caps and increase its holdings of US junk bonds? The question is then about allocating the capital meant to be used for risk taking.
Here, risk models become much less useful, as the Swiss FX shock well illustrates. A risk model gives you a signal about day-to-day risk but misses the big events caused by other factors, such as the macroeconomy, the financial system, and the like.
Of course, risk models can help, but they should be only one of many decision factors.
So, to conclude, risk models are mostly useless for internal risk capital allocation.
Micro is a broad church, and risk modeling has little to say about its bread-and-butter issues such as conduct.
And even if much of micro is about risk, it is not risk of the statistical-modeling variety, or, before anyone objects, not of the VaR and ES type discussed here.
Statistical risk modeling can certainly be of help, if only to ensure that financial institutions are properly managing day-to-day risk.
The danger is if the micro regulators start relying on statistical risk models as a signal of the health of the financial institution, or some such.
Also, regulations have a tendency to degenerate into the tick-the-box type, and statistical risk modeling is particularly subject to that.
So overall, risk models can be helpful for micro but can also be useless and even dangerous.
Here, it is all about extreme outcomes, extreme tail risk and systemic risk. Statistical risk models have little or nothing to say about such risk, as discussed here.
Statistical risk models can be outright dangerous for macro pru. If the authorities formulate policy and attempt to control the system based on such a highly inaccurate signal, they risk amplifying risk when they should reduce it, or curtailing risk taking when it is needed.
The costs of both type I and type II errors are significant.
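The error-cost argument can be made concrete with some arithmetic. The probabilities and costs below are entirely made up for illustration; the point is only that with a noisy signal, frequent cheap false alarms and rare but very costly missed crises both feed into the expected cost of acting on it.

```python
def expected_error_cost(p_false_alarm, p_missed_crisis,
                        cost_false_alarm, cost_missed_crisis):
    """Expected cost of policy driven by a noisy signal:
    type I (false alarm: tightening when nothing was wrong) plus
    type II (missed crisis: staying loose when risk was building)."""
    return (p_false_alarm * cost_false_alarm
            + p_missed_crisis * cost_missed_crisis)

# Hypothetical numbers: false alarms are ten times more likely than
# missed crises, but a missed crisis is fifty times more costly.
cost = expected_error_cost(p_false_alarm=0.20, p_missed_crisis=0.02,
                           cost_false_alarm=1.0, cost_missed_crisis=50.0)
print(cost)  # 0.20 * 1.0 + 0.02 * 50.0 = 1.2
```

Under these assumed numbers, the rare type II error dominates the total despite its low probability, which is why an inaccurate signal of extreme tail risk is so dangerous for macro-prudential policy in particular.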
That means that systemic risk forecast methods such as SES, MES, CoVaR, SRISK, Shapley-value measures, and the like not only fail to capture the risk they purport to measure; worse, their use increases systemic risk.
I discussed the reliability of such systemic risk measures in two papers: Can we prove a bank guilty of creating systemic risk? A minority report and Model risk of risk models.
I think many policymakers know this. Fortunately, most of the systemic risk forecast methods have been relegated to academia and low-impact central bank research. Some authorities, like the ESRB, remain keener, but it doesn't matter much.