Risk can be classified into what is predicted by models, known as perceived risk, and the fundamental underlying risk, known as actual risk.
The classification of risk into perceived and actual comes from "Endogenous extreme events and the dual role of prices".
The following picture, where the x-axis shows time and the y-axis prices, shows a hypothetical asset price bubble in blue:
Now, run one of your friendly risk forecast models, perhaps one of these, and plot the resulting risk forecast, labelled perceived risk and shown in red. Note that as we ride the bubble, perceived risk falls. It is as if we get ever higher returns at ever lower risk. It's like Money for Nothing.
Then the bubble bursts and all of a sudden perceived risk shoots up. Why? Well, it depends on the model, but suppose one is using one of the volatility models from here. Then perceived risk will inevitably increase.
The reason is that a volatility model reacts to events in recent history, and if prices change a lot, so inevitably will the risk forecast. This is why one can have very low risk forecasts until the day after a crisis event. Consider the simple EWMA, where $y_t$ is the return on day $t$ and $\sigma_{t+1}$ is the volatility forecast for the next day:

$$\sigma^2_{t+1} = \lambda \sigma^2_t + (1-\lambda) y^2_t.$$
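The EWMA recursion can be sketched in a few lines of Python to show exactly this reaction pattern. This is a hypothetical illustration: the decay factor `lam = 0.94` and the 30-day initialisation window are assumptions, not values from the text.

```python
import random
from statistics import pvariance

def ewma_vol(returns, lam=0.94):
    """One-step-ahead EWMA volatility forecast after observing `returns`.

    Implements sigma^2_{t+1} = lam * sigma^2_t + (1 - lam) * y_t^2.
    The recursion is initialised with the sample variance of the first
    30 observations (an assumption for this sketch).
    """
    var = pvariance(returns[:30])
    for y in returns[30:]:
        var = lam * var + (1 - lam) * y * y
    return var ** 0.5

# A quiet market followed by a single crash day: the forecast stays low
# throughout the calm period and only jumps after the large return.
rng = random.Random(42)
calm = [rng.gauss(0, 0.01) for _ in range(500)]
sigma_before = ewma_vol(calm)
sigma_after = ewma_vol(calm + [-0.10])  # one -10% day appended
print(sigma_before < sigma_after)
```

Note that nothing in the model anticipates the crash; the forecast rises only once the crash is already in the sample, which is the point made above.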
Then VaR is

$$\text{VaR}(p) = -\sigma_{t+1} \, \Phi^{-1}(p),$$

where $\Phi^{-1}$ is the inverse normal distribution and $p$ the probability, assuming the portfolio value is one.
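Given a volatility forecast, the VaR calculation is a one-liner. This sketch uses the standard library's `NormalDist` for the inverse normal; the 1% probability level is an assumed example, not a value from the text.

```python
from statistics import NormalDist

def var_forecast(sigma, p=0.01):
    """VaR(p) = -sigma * Phi^{-1}(p), with portfolio value one."""
    return -sigma * NormalDist().inv_cdf(p)

# A 1% daily volatility forecast at p = 1% gives a VaR of about 2.33%,
# since the 1% quantile of the standard normal is roughly -2.326.
print(round(var_forecast(0.01), 4))
```

Because VaR here is just volatility scaled by a constant, it inherits the EWMA's behaviour exactly: low before the crisis, sharply higher the day after.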
A good example of this is the Swiss FX shock discussion, where the models failed to pick up the probability of the event beforehand, and then went crazy after it.
However, the actual underlying risk in the system, what we call actual risk, shown in green, increases along with the bubble and falls when the bubble deflates.
Actual and perceived risk are usually negatively correlated.
Since almost every risk forecast model is of the perceived type, the end result is that:

Risk forecast models underestimate risk before a crisis and overestimate it after a crisis. The models are systematically wrong in all states of the world.
This has particular implications for financial regulations and macroprudential policy, as discussed elsewhere.