
Wir sind die Roboter: can we predict financial crises?

Kristina Bluwstein, Marcus Buckmann, Andreas Joseph, Miao Kang, Sujit Kapadia and Özgür Şimşek

Financial crises are recurrent events in economic history. But they are as rare as a Kraftwerk album, making their prediction challenging. In a recent study, we apply robots — in the form of machine learning — to a long-run dataset spanning 140 years, 17 countries and almost 50 crises, successfully predicting almost all crises up to two years ahead. We identify the key economic drivers of our models using Shapley values. The most important predictors are credit growth and the yield curve slope, both domestically and globally. A flat or inverted yield curve is of most concern when interest rates are low and credit growth is high. In such zones of heightened crisis vulnerability, it may be valuable to deploy macroprudential policies.

History and the costs of financial crises

In the history of modern capitalism, financial crises — roughly defined as periods of severe distress and widespread failures in the financial system with significant macroeconomic costs — have been a recurrent theme. For example, the UK experienced regular crises throughout much of the 19th century. But the list of notable crises and crashes is long, encompassing the Tulip, Mississippi and South Sea bubbles, all accompanied by their own stories of manias.

The economic and social costs of these events are often staggeringly high. The estimated average cost of a financial crisis is around 75% of GDP, equivalent to £21,000 for every person in the UK in current terms. Unemployment increases by seven percentage points, house prices drop by 35%, and life expectancy declines by seven months in the years after the crisis.

Given the frequency and cost of financial crises, decision makers in governments and central banks want to avoid crises altogether or at least mitigate their destructive consequences. Predicting financial crises is hard, however. Even people as smart as Isaac Newton lost their fortunes in such events. So, can we accurately predict crises? And if so, what are the most important predictors and how reliable are they? We try to answer these questions using machine learning on a long-run dataset stretching back to the 1870s and covering 17 developed countries that account for most of world output over that period.

Why machine learning?

While history and economics go in circles, technology goes uphill. In this sense, machine learning, a set of data-driven prediction techniques, has achieved some impressive feats in recent years. Facebook’s “DeepFace” is as accurate as humans at face recognition. A computer is better at identifying skin cancer from images than the average dermatologist. And Google’s AlphaGo beats the world’s strongest players in the highly complex Chinese board game of Go. Machine learning has also been successful in some economic prediction problems, such as forecasting recessions and bond risk premia.

When is machine learning successful? Machine learning methods thrive in situations where many different factors play a role and the relationship between these factors and the outcome we want to predict (e.g. who is in a picture, or whether a lesion is cancerous) is complex. We are in exactly this kind of situation when trying to predict whether or not a financial crisis is likely in the next year or two.

Machine learning horse race for financial crisis prediction

There are many potential machine learning models that might help to predict crises. We perform a horse race across some popular models, including random forests, support vector machines (SVM) and artificial neural networks among others. We aim to predict financial crises one or two years in advance. This would give policy makers time to react — for example by activating macroprudential policies should a model predict a high chance of a crisis.
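As a rough illustration of what such a horse race looks like in code, here is a minimal scikit-learn sketch. The synthetic dataset, feature count and model settings are all stand-ins, so it mirrors the logic rather than our actual setup or results.

```python
# Minimal sketch of a crisis-prediction horse race (illustrative only; the
# synthetic data below stands in for the real long-run macro-financial panel).
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# X = hypothetical predictors (credit growth, yield curve slope, ...);
# y = 1 if a crisis starts within the next two years, 0 otherwise.
X, y = make_classification(n_samples=1500, n_features=10, weights=[0.9, 0.1],
                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "extremely randomised trees": ExtraTreesClassifier(n_estimators=500, random_state=0),
    "support vector machine": make_pipeline(StandardScaler(), SVC(probability=True)),
    "neural network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}

# Score each model out of sample using the area under the ROC curve (see below).
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name:30s} mean AUC = {auc.mean():.3f}")
```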

Chart 1 shows the performance score for all models in the horse race, as captured by the area under the receiver operating characteristic (ROC) curve, a standard metric for binary prediction problems. A model that perfectly separates crisis and non-crisis observations in unseen data gets a score of one, while a model whose predictions are no better than a coin toss scores 0.5, effectively the worst useful score. Logistic regression is the traditional tool for predicting crises and is our reference point. All the machine learning models except the decision tree outperform the logistic model.

Chart 1: Test performance of the different prediction models

The difference in performance between our best model, extremely randomised trees (a type of random forest), and logistic regression may seem relatively small, but it is economically significant. To see this, we calibrate both models to correctly identify 80% of crises, i.e. we set the proportion of crises we aim to predict correctly. We can then compare the false alarm rate, i.e. the proportion of times the model signals a crisis that does not subsequently happen, as a measure of the cost of unnecessary policy interventions. This false alarm rate is 19% for extremely randomised trees compared with 32% for the logistic model, a substantial reduction.
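The calibration step can be sketched in the same toy setting: fit one model, choose the probability threshold that catches 80% of crises, and read off the false alarm rate at that threshold. The numbers it prints come from synthetic data, not from our models.

```python
# Sketch: fit one model, then pick the probability threshold that catches 80%
# of crises and read off the corresponding false alarm rate (toy data again).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = ExtraTreesClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
crisis_prob = model.predict_proba(X_test)[:, 1]        # predicted crisis probabilities

# The ROC curve traces the hit rate against the false alarm rate as the
# decision threshold varies; take the first point reaching an 80% hit rate.
false_alarm_rate, hit_rate, thresholds = roc_curve(y_test, crisis_prob)
idx = np.argmax(hit_rate >= 0.80)
print(f"threshold: {thresholds[idx]:.2f}, hit rate: {hit_rate[idx]:.2f}, "
      f"false alarm rate: {false_alarm_rate[idx]:.2f}")
```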

Opening the black box: which variables matter for crisis prediction?

Machine learning models are often called “black boxes” because it is hard to understand which variables drive a model’s predictions. This conflicts with decision makers’ need to understand the key economic factors related to the build-up of crises and to explain any policy decisions with reference to them. Recent developments like the Shapley value framework help to tackle this issue by assigning each variable a well-defined contribution to a model’s predictions. Concretely, the predicted crisis probability can be decomposed into contributions from individual variables, as measured by their Shapley values. These values can then be used to rank variables according to their importance.
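A minimal sketch of this decomposition, reusing `model`, `X_train` and `X_test` from the calibration snippet above and relying on the open-source `shap` package rather than our exact implementation:

```python
# Sketch of a Shapley-value decomposition of predicted crisis probabilities,
# reusing `model`, `X_train` and `X_test` from the previous snippet.
import numpy as np
import shap

def predict_crisis_prob(data):
    return model.predict_proba(data)[:, 1]

# Model-agnostic (kernel) Shapley values of the predicted crisis probability,
# measured relative to a background sample of observations.
explainer = shap.KernelExplainer(predict_crisis_prob, X_train[:100])
shap_values = explainer.shap_values(X_test[:50])       # shape: (n_obs, n_features)

# Additivity: base rate + sum of contributions recovers each prediction.
reconstructed = explainer.expected_value + shap_values.sum(axis=1)
print(np.allclose(reconstructed, predict_crisis_prob(X_test[:50]), atol=1e-4))

# Rank variables by mean absolute contribution (the quantity shown in Chart 2).
importance = np.abs(shap_values).mean(axis=0)
for j in np.argsort(importance)[::-1]:
    print(f"feature {j}: mean |SHAP| = {importance[j]:.4f}")
```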

Chart 2 shows the mean absolute Shapley values across all predictions for all of the above models apart from the decision tree. The machine learning methods and logistic regression consistently identify the same main predictors of financial crises: domestic and global credit growth, and the domestic and global slope of the yield curve, where a flat or inverted slope means that the cost of short-term borrowing is relatively high compared with the cost of long-term borrowing.

Chart 2: Model explanations using Shapley decompositions

Other work has previously established that strong credit growth, both domestic and global, is an important predictor of financial crises. Our results on the importance of the yield curve are more novel. The yield curve is a well-established leading indicator of recessions, but we find it to be of independent importance in predicting financial crises.

The strong predictive power of our machine learning models may partly stem from the simple and intuitive nonlinear relationships and interactions that they uncover. These help to identify zones of particular vulnerability to future financial crises. For example, we find that crisis probability increases materially at high levels of global credit growth, but this variable has almost no effect at low or medium levels. Interactions are also important, particularly between global and domestic variables. For example, many crises occur in the face of strong domestic credit growth and a flat or inverted global yield curve. We also find that a flat or inverted domestic yield curve is more concerning when nominal interest rates are low. In such environments, financial market participants may take on excessive risk in a ‘search-for-yield’ to try to boost their returns, in line with some descriptions of financial manias and crashes.
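One simple way to eyeball such nonlinearities and interactions in the toy setting above (not necessarily the exact tool behind our charts) is a partial dependence plot of the predicted crisis probability:

```python
# Partial dependence of the toy model's predicted crisis probability on one
# predictor, plus a two-way plot to visualise an interaction between two
# predictors (feature indices 0 and 1 are arbitrary stand-ins).
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

PartialDependenceDisplay.from_estimator(model, X_test, features=[0, (0, 1)])
plt.tight_layout()
plt.show()
```

The one-way panel shows how the predicted probability varies with a single predictor; the two-way panel shows how the effect of one predictor depends on the level of another.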

What does our best model say for the UK?

Chart 3 shows the predicted likelihood of a crisis in the UK since 1980 and the factors driving our best machine learning model. Vertical red shaded bars indicate the observations we would like to predict, one or two years ahead of an actual crisis (grey shaded bars). The black circles indicate the model prediction, which is decomposed into the contributions (Shapley values) of its four most important variables from Chart 2.

Chart 3: Decomposition of the machine learning model prediction for the likelihood of financial crises

The model correctly predicts the small banks crisis in 1991 and the Global Financial Crisis of 2007–08. But the factors driving these predictions differ: the former is driven by rapid domestic credit growth and an inverted domestic and global yield curve, whereas the latter is mostly driven by global credit growth. This shows that the model is flexible enough to account for different types of crises, which is crucial given the long time span our analysis covers and the changes the global economic system underwent during this period.
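For readers who want to build a chart of this kind themselves, here is a rough sketch that stacks Shapley contributions over time, reusing `shap_values` from the earlier snippet; the time axis and feature labels are placeholders and nothing here reproduces Chart 3.

```python
# Stack the Shapley contributions of the four most important variables over
# time and overlay the resulting prediction (relative to the base rate).
# Reuses `shap_values` from the Shapley sketch above.
import numpy as np
import matplotlib.pyplot as plt

top4 = np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:4]
periods = np.arange(shap_values.shape[0])              # placeholder time axis

fig, ax = plt.subplots()
pos, neg = np.zeros(len(periods)), np.zeros(len(periods))
for j in top4:
    contrib = shap_values[:, j]
    # Positive contributions stack upwards, negative ones downwards.
    ax.bar(periods, contrib, bottom=np.where(contrib >= 0, pos, neg),
           label=f"feature {j}")
    pos += np.clip(contrib, 0, None)
    neg += np.clip(contrib, None, 0)

ax.plot(periods, shap_values.sum(axis=1), "ko",
        label="prediction (deviation from base rate)")
ax.legend()
plt.show()
```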

The way forward

While our analysis does not necessarily show that the above factors cause financial crises, it does highlight that they make a country more vulnerable to them. There will always be inherently unforeseeable events, such as the economic fallout caused by Covid-19, which remain very challenging for any model to predict. But identifying a financial system as more vulnerable, and therefore more likely to amplify such an unexpected shock into a fully-fledged financial crisis, is crucial given the enormous economic, political and social consequences that financial crises entail.

Kristina Bluwstein works in the Bank’s Macroprudential Strategy and Support Division, Marcus Buckmann and Andreas Joseph work in the Bank’s Advanced Analytics Division, Miao Kang works in the Bank’s Data and Statistics Division, Sujit Kapadia works for the European Central Bank and Özgür Şimşek works at the University of Bath.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.
