Regulations change behaviour and outcomes. It is seductively attractive to say that someone misbehaves, therefore we need a rule to prevent the misbehaviour. However, human beings, being human, don't just comply; their behaviour changes in response to the rule. That is why regulating the financial system is infinitely more complex than engineering.
Suppose an engineer designs a bridge. She takes on board the laws of nature and the likelihood of earthquakes and severe weather, and figures out with reasonable accuracy the various cost-benefit scenarios presented to those commissioning the bridge. If she makes the supporting pillars one metre thick, the bridge can be expected to collapse only once every 400 years.
What about a bank? If the quantity of concrete in a bridge is directly related to the stability of the bridge, then surely the amount of capital a bank holds is also directly related to its stability. That gives us a simple cost-benefit analysis.
The more capital a bank holds, the safer it is, but at the cost of the higher interest it needs to charge on loans.
So there is a trade-off between financial stability and economic growth. All the policymakers have to do is find the right balance, and all is good. A large number of impact studies aim to do exactly that. The problem is that those from the industry minimise the safety angle and maximise the economic impact, while those from the government agencies do the opposite. We can pick and choose the cost-benefit of bank capital simply based on our political, ideological or financial interests, with minimal reference to reliable facts. It is a post-truth approach to financial policy.
This is quite different from the cost-benefit study for our bridge. When the engineer designs a bridge, she can reliably treat the laws of nature and the environment as exogenous because nature is neutral. In finance, nature is not neutral; it is malevolent, and the risk is endogenous.
Immediately after our regulator comes up with rules for determining bank capital, the bankers look for a way around them. They will try their best to make capital look very high to the outside world while actually keeping it really low.
The technical name for this is capital structure arbitrage. Many of the banks that failed in 2008 had some of the highest capital levels going, but that capital turned out to be illusory. Of course, the regulators now say that they have learned their lesson, and today's capital ratios are much more reliable than they were before 2007.
There is always a cat-and-mouse game going on between the authorities and the banks. While the bankers may dutifully read regulations in order to comply, they are much more enthusiastic about finding loopholes. Regulations are inherently backward-looking and change very slowly, giving fast-moving and forward-looking bankers every incentive to seek out risk-taking where the authorities are not looking. Because the financial system is almost infinitely complex, it is a technical impossibility to regulate anything but a very small part of it, leaving plenty of room for misbehaviour. The bankers know this and take advantage. They spend considerable resources trying to find loopholes because, once found, a loophole can be profitably exploited for a decade or two. That creates at least three problems.
First, regulations may push profitable but risky activities into the shadow banking system or abroad, beyond the authorities' reach.
The second problem is that when we regulate the financial system, we often just drive risk-taking behaviour away from the spotlight into the shadows, where it is much harder to detect.
I can think of more than one financial crisis with that as the main cause, for example the Tequila crisis in Mexico in 1994. It happened because the Mexican banks were borrowing US dollars in New York to lend in pesos to Mexican borrowers. Because the currency risk was borne by the Mexican banks, the authorities were justifiably concerned and forbade the banks from borrowing in New York to fund domestic lending. The Mexican banks found it easy to get around the rules by creating derivative transactions with the New York banks. Because the Banco de Mexico did not see these transactions, it did not realise what was happening. It is much better if risk-taking is visible rather than hidden.
The third, and even more insidious, problem is that precisely because the financial authorities are trying to protect us from the financial system, banks have an extra incentive to take on more risk, and even worse, to take risk in a way that maximises the chance of a bailout if things go wrong. This is an example of Minsky's dictum that "stability is destabilising". If the central banks are successful in reducing risk, they perversely create incentives for risk-taking that eventually lead to a crisis.
This is an example of endogenous risk. The reason is that risk is hard to measure, and all the riskometers can pick up is perceived risk. If the road is smooth and perceived risk is low, there are incentives to take more risk. After all, if everything is safe, what is wrong with a little bit more risk? The problem is that such risk-taking is not immediately visible; it only becomes apparent much later. It was decisions taken in the supposedly low-risk environment at the start of the 2000s that created the conditions for the subsequent crisis. From a statistical point of view, it is usually impossible to detect such a hidden buildup of risk.
More often than not, these latter two problems come together and amplify each other. Because perceived risk is low, we take more risk, but because we are not supposed to, we do it away from the spotlight. We therefore don't know how much actual risk is building up, which further encourages more risk-taking in the shadows.
© All rights reserved, Jon Danielsson, 2019