Adam Brinley Codd and Andrew Gimber

Meteorologists and insurers talk about the “1-in-100 year storm”. Should regulators do the same for financial crises? In this post, we argue that false confidence in people’s ability to calculate probabilities of rare events might end up worsening the crises regulators are trying to prevent.
Sir Paul Tucker recently said he thought the public would want regulators to make sure financial crises like the one in 2008–2009 happened less often than once every 75 years. And an approach called GDP (or growth) at risk – which has featured prominently in the IMF’s financial stability assessments since October 2017 – typically focuses on the 5th percentile, i.e. a 1-in-20 bad outcome for GDP growth, as a way of measuring downside economic risks.
Framing risks in terms of 1-in-X outcomes might be helpful as a communication tool, but it comes with risks of its own. That’s because unwarranted faith in the odds of rare events can set the stage for far worse outcomes. No matter how hard we try, we cannot accurately quantify the chances of rare events. The reasons can be summed up in two parts: can’t know and don’t know.
Can’t know
Putting odds on future events is inherently difficult. It’s obvious that we can’t know what will happen – nobody can predict the future. But we can’t even know everything that might happen. All we can be sure of is that there are things that will happen in the future that we cannot even conceive of today. And since there are possible future events that we can’t even list, we can’t assign probabilities to them. This simple fact also means our estimates of the chances of the events we can list are likely to be wrong: a person who has never experienced snow will overestimate the chances of rain on a very cold day.
Even if we can imagine the event, it can be a fool’s errand to try to predict its impact. History is full of examples of discoveries and inventions whose impact was barely conceivable. Heinrich Hertz famously said his discovery of radio waves was “of no use whatsoever”. Even if a few visions of the future end up looking eerily prescient (such as E. M. Forster’s 1909 short story that foreshadowed the darker side of the internet age), most will probably be as wide of the mark as predictions of robot barbers or nuclear-powered vacuum cleaners.
The relevance for financial regulators is that financial crises are rare and unexpected events. Despite regulators’ efforts to make banks more resilient and reduce the chance of a crisis, the risk will never be eliminated. And although most people know a financial crisis when they see one, when stability does break down the trigger is likely to be something that very few people had identified or expected.
Don’t know
Even if we know what can happen, outside of casino games and lotteries it is nearly impossible to get the odds of things happening right. And the less likely an event is, the harder it becomes to say exactly how unlikely it is. Human behaviour is notoriously difficult to predict, especially under extreme circumstances.
Even if we forget about trying to work out the chances of specific events, and instead look at aggregate metrics like GDP growth or oil prices, we still can’t do this with much accuracy. The more you aggregate, the less data you have. To get precise estimates of the severity of rare events, you need lots of data points. Yet we only have reliable annual GDP growth figures for 100 years (100 data points), and reliable quarterly GDP growth figures for 40 years (160 data points).
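To see how little precision a century of observations buys, here is a minimal sketch that bootstraps the 5th-percentile estimate from 100 data points and reports how wide its confidence interval is. The draws below are synthetic stand-ins, not actual GDP growth figures, and the assumed distribution is purely illustrative.

```python
# Illustrative sketch: how precisely can ~100 observations pin down a 5th percentile?
# The "data" are synthetic draws, not actual GDP growth figures.
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=2.0, size=100)   # stand-in for a century of annual growth (%)

point_estimate = np.percentile(sample, 5)

# Bootstrap the 5th percentile to see how much it varies from sample to sample
boot = np.array([
    np.percentile(rng.choice(sample, size=sample.size, replace=True), 5)
    for _ in range(10_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"5th percentile estimate: {point_estimate:.2f}")
print(f"95% bootstrap interval:  [{lo:.2f}, {hi:.2f}]")   # wide: the tail is poorly pinned down
```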
Even where we have lots of data, it is still hard to work out the odds of extreme events. Often, our estimates can be very sensitive to individual data points in our sample.
In the 31 years to August 2001, there were 109 fatal terrorist attacks in the US. If US antiterrorism authorities had focused on the 95th percentile of that sample, they would have been preparing for attacks that killed 3 people. Extend the sample window forward by one month, to include 9/11, and the 95th percentile jumps to 13 deaths – still a tiny fraction of the actual death toll on that awful day.
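A toy calculation makes the point concrete. The fatality counts below are invented for illustration, not the actual attack data; they simply show how a single extreme observation can move a tail percentile sharply while still dwarfing it.

```python
# Toy numbers (not the actual attack data): one extreme observation can shift a
# tail percentile sharply, yet the percentile still vastly understates the extreme.
import numpy as np

fatalities = np.array([0]*60 + [1]*25 + [2]*14 + [3]*5 + [13, 14, 15, 18, 30])  # 109 hypothetical attacks
print(np.percentile(fatalities, 95))        # 3.0 deaths

with_outlier = np.append(fatalities, 3000)  # add a single catastrophic attack
print(np.percentile(with_outlier, 95))      # ~8.5 deaths: it jumps, but remains a tiny fraction of the outlier
```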
We know that our estimates will always be uncertain, but does this matter if they are the best we can do? Surely it’s better than having no estimates at all?
The land of black swans
Things don’t immediately go wrong when people estimate the odds of unlikely events. They go wrong when people place unwarranted faith – or undue emphasis – on those estimates. Doing so leaves them poorly prepared for events they wrongly thought were too unlikely to worry about – Nassim Nicholas Taleb’s famous black swans – and that is what sets the stage for far worse outcomes.
The invention of the Black–Scholes option pricing model in the 1970s led to a boom in the sale of so-called portfolio insurance. This aimed to limit downside risk by buying put options, or automatically shorting index futures when the stock market fell. The Black–Scholes model, it was thought, allowed broker-dealers to accurately price these options and to hedge them with offsetting trades. However, the model’s assumptions about return distributions and arbitrage conditions proved wrong. On Black Monday in 1987, the Dow Jones Industrial Average fell by 22.6% – an event the Black–Scholes model implied was very nearly impossible. The SEC report into the crash found a significant role for the fire-sale dynamics unleashed by portfolio insurance. Similar dynamics seem to have been at play in the Japanese “VaR shock” episode in 2003, when a spike in volatility induced banks using value-at-risk models to sell off Japanese government bonds at the same time. Without widespread reliance on the Black–Scholes and VaR models, it seems unlikely that there would have been the same degree of correlated selling in these episodes.
False comfort from pricing models estimated on past data struck again during the subprime mortgage crisis. In August 2007, Goldman Sachs’ then CFO was quoted as saying “We were seeing things that were 25-standard deviation moves, several days in a row” – about as likely as winning the lottery dozens of times, week after week.
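A rough back-of-the-envelope calculation shows just how “impossible” such moves would be if daily returns really were normally distributed, as models of this kind assume. The 1% daily standard deviation used below is an illustrative assumption, not a calibrated estimate.

```python
# Back-of-the-envelope: how unlikely are such moves if daily returns were normal?
# The 1% daily standard deviation is an illustrative assumption, not a calibrated figure.
from scipy.stats import norm

daily_vol = 0.01
black_monday = -0.226                      # Dow Jones fall on 19 October 1987
z = black_monday / daily_vol               # about -22.6 standard deviations
print(norm.cdf(z))                         # effectively zero (of the order of 1e-113)

p_25_sigma_day = norm.sf(25)               # one "25-standard deviation" day: ~3e-138
print(p_25_sigma_day)
print(p_25_sigma_day ** 3)                 # several in a row: underflows to 0.0 in double precision
```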
Macroprudential policymakers around the world are starting to look at a framework called GDP at risk (also known as growth at risk) to help them gauge the tail risks to the economy. It’s not designed to identify specific near-term risks such as bank failures or cyber attacks. But it’s one useful way of thinking about the dangers posed to future prosperity by rapid credit growth and other variables that have historically been associated with crises. GDP at risk provides a way of adding up those vulnerabilities and tracking their evolution over time.
Still, policymakers should take little comfort from any particular estimate of the 5% tail of the GDP growth distribution – all this says is that, according to the model, there’s a 1-in-20 chance of GDP growth being that bad or worse. Even if the past were a reliable guide to the future, we just don’t have enough historical data to get a precise estimate of how much worse things are likely to be beyond that point.
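For readers who want to see the mechanics, the growth-at-risk literature typically estimates these conditional tails with quantile regression. The sketch below uses synthetic data and illustrative variable names (credit_growth, fci); it is not the IMF’s or the Bank’s actual model.

```python
# Sketch of the growth-at-risk mechanics via quantile regression, on synthetic data.
# Variable names (credit_growth, fci) are illustrative, not the actual model inputs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 160                                               # roughly 40 years of quarterly data
df = pd.DataFrame({
    "credit_growth": rng.normal(5, 3, n),             # % change in credit (synthetic)
    "fci": rng.normal(0, 1, n),                       # financial conditions index (synthetic)
})
# Synthetic future growth: rapid credit growth lowers it on average,
# and financial conditions scale its dispersion (purely illustrative)
vol = 1 + 0.3 * np.clip(df["fci"], -2, None)
df["gdp_growth_ahead"] = 2 - 0.1 * df["credit_growth"] + rng.normal(0, 1, n) * vol

model = smf.quantreg("gdp_growth_ahead ~ credit_growth + fci", df)
print(model.fit(q=0.05).params)                       # the 5th percentile: "GDP at risk"
```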
Where does this leave us?
In search of true, but imperfect, knowledge
The odds people calculate are themselves highly uncertain. But what can be done about this meta-uncertainty?
Policymakers could avoid talking about probabilities altogether. Instead of a 1-in-X event, the Bank of England’s Annual Cyclical Scenario is described as a “coherent ‘tail risk’ scenario”.
Policymakers could avoid some of the cognitive biases that afflict people’s thinking about low-probability events by rephrasing those probabilities in terms of less extreme numbers. A “100-year” flood has a 1% chance of happening in any given year, but anyone who lives into their 70s is more likely than not to see one in their lifetime.
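The arithmetic behind that reframing is simple, assuming the 1% chance is independent across years:

```python
# With a 1% chance in any given year, and assuming independence across years,
# the probability of seeing at least one "100-year" flood over 70 years is:
p_per_year = 0.01
years = 70
print(1 - (1 - p_per_year) ** years)   # ~0.51: more likely than not
```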
Policymakers could be vocal about the fact that there are worse outcomes beyond the 1-in-X point of the distribution. Although it’s subject to the same “can’t know” and “don’t know” problems, expected shortfall estimates the average bad outcome in a 1-in-X tail of the distribution, rather than the least-bad one. And policymakers can supplement risk assessments based on limited historical data with additional “extreme but plausible” scenarios, as happens for stress testing of central counterparties.
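The distinction is easy to illustrate on any sample: value at risk is just a percentile, while expected shortfall averages everything beyond it. The returns below are synthetic and purely illustrative.

```python
# Value at risk vs expected shortfall on synthetic, fat-tailed returns:
# the 5% VaR is the least-bad outcome in the tail; expected shortfall averages the whole tail.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=3, size=100_000)   # illustrative fat-tailed return distribution

var_5 = np.percentile(returns, 5)              # the 1-in-20 cut-off
es_5 = returns[returns <= var_5].mean()        # average of outcomes at least that bad
print(f"5% VaR: {var_5:.2f}   5% expected shortfall: {es_5:.2f}")
```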
Policymakers should learn as much as they can from past events and do the best they can to assess the risk environment with the available historical data. But there are limits to how reliably they can estimate the chances of an extreme event. Nobody can know all the possible future events that policymakers should be concerned about. Even with a narrow focus on the things that can be measured, there isn’t much data with which to estimate the tails of the distribution – the unlikely but potentially disastrous events that macroprudential authorities are supposed to deal with.
Adam Brinley Codd works in the Bank’s Stress Testing Strategy Division and Andrew Gimber works in the Bank’s Macroprudential Strategy and Support Division.
If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.
Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.