In early October, I was invited to speak at Second Curve Capital’s 2014 Financial Services CEO Retreat, hosted by Tom Brown. It was a “choose your own topic” type of presentation, so I went with “Time Travel in the Minsky Moment”. I’ll attempt to recap that presentation over the course of a series of three posts.

If you’re not familiar with Hyman Minsky, a quick bio: he was an American economist, a professor of economics, and a distinguished scholar. His work has attracted much more interest since his death than it did in his lifetime, and is now seen as especially important for our understanding of the financial crisis. He took Keynes’s work on depressions and the macroeconomy and extended it, focusing on the etiology (it’s a good word, look it up) of financial crises. My friend Paul McCulley coined the phrase “Minsky Moment”, referring to the sudden collapse of asset prices.

Minsky’s Financial Instability Hypothesis states that financial stability creates instability. The longer a financial system (i.e., our economy) moves along with relatively low volatility and rising asset prices and earnings, the more confident investors become that these conditions will continue. This confidence leads to more borrowing, creating an overleveraged economy and ultimately resulting in a collapse. What follows is the “Reverse Minsky Journey,” when the situation reverses and the economy works on deleveraging. That is the core idea. The modeling of this has been refined and improved by another friend, John Geanakoplos at Yale, in his work on the Leverage Cycle.
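To make that feedback loop concrete, here is a toy numerical sketch of stability breeding instability. It is an illustration only, not Minsky’s or Geanakoplos’s actual model; the growth rates, the leverage rule, and the timing of the shock are all invented for clarity.

```python
# Toy sketch of the stability-breeds-instability feedback loop.
# Illustration only -- not Minsky's or Geanakoplos's actual model;
# all parameter values are made up.

def run_cycle(periods=20, shock_at=12):
    asset_price = 100.0
    confidence = 1.0
    leverage = 2.0              # debt-to-equity the system is comfortable with
    path = []
    for t in range(periods):
        if t < shock_at:
            # Calm period: steady gains raise confidence, and lenders
            # extend more credit against the same collateral.
            asset_price *= 1.03
            confidence *= 1.05
            leverage = 2.0 * confidence
        else:
            # "Reverse Minsky Journey": deleveraging forces selling,
            # and forced selling pushes prices down further.
            leverage = max(1.0, leverage * 0.7)
            asset_price *= 0.92
        path.append((t, asset_price, leverage))
    return path

for t, price, lev in run_cycle():
    print(f"t={t:2d}  price={price:6.1f}  leverage={lev:4.2f}")
```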

Since the financial crisis, there has been an almost obsessive focus on managing risk, whether at the level of banks, portfolios, the financial system in general, or the global economy as a whole. I think much of this effort is misdirected and confused. Risk management is clearly important and needs careful attention. But there is a great deal of difference between managing risk and dealing with what Keynes called “irreducible uncertainty.”

The distinction between risk and uncertainty was delineated by Frank Knight in 1921 in his book Risk, Uncertainty, and Profit. Risk is when we don’t know the outcome, but we do know the probability distribution of the outcome. This is what insurance companies do. They know, for example, roughly how many people will be hit by lightning or choke on a piece of meat. That is, they know the chances of the event happening. Uncertainty, though, is when we don’t know the outcome and we also do not know the underlying distribution. So we don’t know the chances of the event happening. I once heard George Soros say he predicted the financial crisis of 2008, adding that he had also predicted it many times in years past, when it did not happen. With uncertainty, we just don’t know.

While there are many issues with confusing risk and uncertainty, here are two. The first: the widespread adoption of “portfolio insurance” in the 1980s contributed to the Crash of 1987, as it required one to “insure” or “de-risk” a portfolio by selling increasing amounts of stock the more prices fell. The idea was that by selling, you were reducing your exposure to a “risky” asset.

The only thing this insured, as Keynes pointed out to his board in 1937 when they urged him to sell more stocks as the market was falling, was that he would own no stocks at the bottom of the market. To the extent that large amounts of money are managed this way, it is a recipe for a meltdown. This is well covered in Rick Bookstaber’s A Demon of Our Own Design. Many risk management models today still require reducing exposure to an asset that is falling in price. It sure looks like that was at work in the very fast and short-lived drop in the stock market in September and October 2014.
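To see the mechanics, here is a minimal sketch of the kind of sell-on-decline rule portfolio insurance implies. The price path and the response function are hypothetical (actual 1987-era programs replicated a protective put, typically with index futures), but the feedback is the same: the further the market falls, the more the rule sells, so that by the bottom the portfolio holds little or no stock.

```python
# Minimal sketch of a sell-on-decline ("portfolio insurance") rule.
# The price path and the response function are hypothetical;
# real programs replicated a protective put via futures.

def target_equity_weight(drawdown):
    """The deeper the drawdown from the peak, the lower the target equity weight."""
    return max(0.0, 1.0 - 4.0 * drawdown)    # a 25% drop takes equity to zero

peak = 100.0
prices = [100, 97, 93, 88, 82, 75]           # a hypothetical falling market
equity_weight = 1.0                          # start fully invested

for price in prices:
    drawdown = (peak - price) / peak
    target = target_equity_weight(drawdown)
    sold = max(0.0, equity_weight - target)
    equity_weight = min(equity_weight, target)
    print(f"price={price:5.1f}  drawdown={drawdown:6.1%}  "
          f"sell={sold:6.1%}  equity weight={equity_weight:6.1%}")
```

When many portfolios run a rule like this at once, each seller’s trades deepen the drawdown that triggers everyone else’s selling, which is the dynamic Bookstaber describes.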

The second issue with confusing risk and uncertainty is the paradox of risk management. To the extent that financial instruments are created and employed to help manage risk, the perception is also created that the risk in question is now managed and presumably controlled within certain parameters. That, in turn, means more risk can be taken on given that it can be managed and controlled. So the more risk management tools we have, the more risks we and the system take on. Risk management tools create risk!

The weakness in such tools can be seen in the broad adoption of so-called Value at Risk (VaR) models in financial institutions. These models attempt to quantify the loss that should not be exceeded with 95% probability (a bit under two standard deviations, if returns were normally distributed). This is great for what happens 95% of the time (assuming the models are accurate to that degree, which they are not, being models not of risk but of uncertainty). But what we really care about, both as investors and for the system as a whole, is how much damage is going to be inflicted in that 5% tail of the distribution.
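For a sense of what such a number does and does not tell you, here is a minimal parametric VaR calculation under a normality assumption, with made-up portfolio figures. This is the textbook variance-covariance approach, not any particular institution’s model.

```python
# Parametric (variance-covariance) one-day VaR under a normality assumption.
# Portfolio size and volatility are hypothetical numbers for illustration.
from statistics import NormalDist

portfolio_value = 100_000_000        # $100 million, hypothetical
daily_volatility = 0.01              # 1% daily standard deviation, hypothetical
confidence_level = 0.95

z = NormalDist().inv_cdf(confidence_level)   # about 1.645 for one-sided 95%
var_95 = portfolio_value * daily_volatility * z
print(f"95% one-day VaR: ${var_95:,.0f}")
# Roughly $1.6 million: losses should exceed this only about 1 day in 20,
# if the normality assumption holds.  The figure says nothing about how
# large the loss is on that 1 day in 20, which is the tail we actually care about.
```

The calculation itself is trivial; the fragile part is the distributional assumption behind z, which is precisely where risk shades into uncertainty.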

Having been taken to the woodshed myself for failing to get that right in 2008, I find it a topic of keen interest. More to come on that later.

Read the series:
Part II: The Shifting Paradigm
Part III: “We Have Involved Ourselves in a Colossal Muddle…”

For further reading: Risk, Uncertainty, and Profit by Frank Knight