Marcus Buckmann, Galina Potjagailo and Philip Schnattinger
Understanding the origins of the current high inflation is a challenge, since the effects of a range of large shocks are layered on top of each other. The rise of UK service price inflation to 6.9% in April might reflect external shocks propagating to a wider range of prices and into domestic price pressures. In this blog post we disentangle what might have contributed to the rise in service inflation in the UK using a neural network enhanced with some economic intuition. Our analysis suggests that much of the increase stems from spillovers from goods prices and input costs, a build-up of service inflation inertia and wage effects, and a pick-up in inflation expectations.
Data plays a central role in all technical aspects of insurance and actuarial work. However, its use is often still confined to aggregate premium and claims data. Not so in the case of telematics. Say the phrase ‘black box’ and most people will think of the flight recorders fitted to aircraft. But motor insurers also use the millions of data points generated by black boxes, fitted to more than a million cars in the UK, to price risks. What’s more, marine insurers are getting in on the act. In this post we take an actuarial vantage point to explore the use of telematics data and consider whether insurers could be using this ‘gold mine’ of information even more widely.
Arthur Turrell, Eleni Kalamara, Chris Redl, George Kapetanios and Sujit Kapadia
Every day, journalists collate information about the world and, with nimble keystrokes, re-express it succinctly as newspaper copy. Events about the macroeconomy are no exception. So could there be additional valuable information about the economy contained in the news? In a recent research paper, we ask whether newspaper stories could help to predict future macroeconomic developments. We find that news can be used to enhance statistical economic forecasts of growth, inflation and unemployment — but only by using supervised machine learning techniques. We also find that the biggest forecast improvements occur when it matters most — during stressed periods.
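As a loose illustration of the kind of pipeline involved — the headlines, growth figures, and the choice of TF-IDF features with a ridge regression below are all our own assumptions, not the paper's actual setup — a supervised learner can map newspaper text to a macroeconomic target:

```python
# Minimal sketch: supervised ML mapping newspaper text to a macro target.
# All headlines and growth figures are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

headlines = [
    "factory output surges as exports boom",
    "firms cut jobs amid deepening slump",
    "retail sales rise on strong consumer demand",
    "recession fears grow as orders collapse",
]
gdp_growth = [0.6, -0.8, 0.4, -1.1]  # hypothetical next-quarter growth, %

vec = TfidfVectorizer()
X = vec.fit_transform(headlines)            # bag-of-words text features
model = Ridge(alpha=1.0).fit(X, gdp_growth)

# Upbeat news should map to a higher growth forecast than gloomy news.
preds = model.predict(vec.transform([
    "exports boom as factory output surges",
    "jobs cut amid slump",
]))
```

In practice the paper combines such text-derived signals with standard statistical forecasts rather than using text alone, and the gains are largest in stressed periods.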
Kristina Bluwstein, Marcus Buckmann, Andreas Joseph, Miao Kang, Sujit Kapadia and Özgür Şimşek
Financial crises are recurrent events in economic history. But they are as rare as a Kraftwerk album, making their prediction challenging. In a recent study, we apply robots — in the form of machine learning — to a long-run dataset spanning 140 years, 17 countries and almost 50 crises, successfully predicting almost all crises up to two years ahead. We identify the key economic drivers of our models using Shapley values. The most important predictors are credit growth and the yield curve slope, both domestically and globally. A flat or inverted yield curve is of most concern when interest rates are low and credit growth is high. In such zones of heightened crisis vulnerability, it may be valuable to deploy macroprudential policies.
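For intuition, here is a stylised sketch in the spirit of the approach: a random forest trained on synthetic data in which high credit growth combined with a flat or inverted yield curve raises crisis risk. The data-generating rule, sample size and model settings are illustrative assumptions only, not the study's specification.

```python
# Illustrative sketch: a classifier on two of the predictors the post
# highlights (credit growth and yield-curve slope), on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
credit_growth = rng.normal(0.05, 0.05, n)  # hypothetical credit growth
yield_slope = rng.normal(0.01, 0.01, n)    # long minus short rate
# Stylised rule mirroring the post's finding: booming credit plus a
# flat or inverted yield curve puts the economy in a vulnerable zone.
risk = 10 * credit_growth - 50 * yield_slope
crisis = (risk + rng.normal(0, 0.3, n) > 0.5).astype(int)

X = np.column_stack([credit_growth, yield_slope])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, crisis)

# Vulnerable economy (credit boom, inverted curve) vs a calm one.
p_vulnerable, p_calm = clf.predict_proba([[0.15, -0.01], [0.0, 0.02]])[:, 1]
```

The study itself works with a 140-year, 17-country panel and reads the fitted models' drivers off via Shapley values rather than a hand-written risk rule.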
The great American baseball sage, Yogi Berra, is thought to have once remarked: ‘It’s tough to make predictions, especially about the future’. That is certainly true, but thankfully the accelerating development and deployment of machine learning methodologies in recent years is making prediction easier and easier. That is good news for many sectors and activities, including microprudential regulation. In this post, we show how machine learning can be applied to help regulators. In particular, we outline our recent research that develops an early warning system of bank distress, demonstrating the improved performance of machine learning techniques relative to traditional approaches.
Machine learning models are at the forefront of current advances in artificial intelligence (AI) and automation. However, they are routinely, and rightly, criticised for being black boxes. In this post, I present a novel approach to evaluating machine learning models in a form similar to a linear regression – one of the most transparent and widely used modelling techniques. The framework rests on an analogy between game theory and statistical models: a machine learning model is rewritten as a regression model using its Shapley values, a payoff concept from cooperative games. The model output can then be conveniently communicated, eg using a standard regression table. This strengthens the case for the use of machine learning to inform decisions where accuracy and transparency are crucial.
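The core property at work can be shown on a toy example: exact Shapley values are additive, so any model's prediction decomposes into a baseline plus one attribution per feature, which is what allows the output to be read like a regression. The model and data below are invented purely for illustration.

```python
# Exact (interventional) Shapley values for a toy nonlinear "model",
# demonstrating the additive decomposition behind a Shapley regression.
from itertools import combinations
from math import factorial

import numpy as np

rng = np.random.default_rng(1)
X_bg = rng.normal(size=(200, 3))            # background data
f = lambda A: A[:, 0] + 2 * A[:, 1] * A[:, 2]  # toy model with an interaction

def coalition_value(x, S, X_bg):
    # Expected model output with features in S fixed at x and the
    # remaining features sampled from the background data.
    A = X_bg.copy()
    A[:, list(S)] = x[list(S)]
    return f(A).mean()

def shapley(x, X_bg):
    d = X_bg.shape[1]
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += w * (coalition_value(x, S + (i,), X_bg)
                               - coalition_value(x, S, X_bg))
    return phi

x = np.array([1.0, 0.5, -0.5])
phi = shapley(x, X_bg)
baseline = f(X_bg).mean()
# Additivity: baseline + sum of attributions recovers the prediction.
```

Exact enumeration is only feasible for a handful of features; in practice Shapley values for larger models are approximated, but the additive decomposition that makes the regression-table reading possible is the same.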
Jeremy Franklin, Scott Woldum, Oliver Wood and Alex Parsons
How do markets react to the release of economic data? We use a set of machine learning and statistical algorithms to try to find out. In the period since the EU referendum, we find that UK data outturns have generally been more positive than market expectations immediately prior to their release. At the same time, the responsiveness of market interest rates to those data surprises fell below historic averages. The sensitivity of market rates has also been below historic averages in the US and Euro area, suggesting international factors may also have played a role. But there are some signs that the sensitivity has increased over the past year in the UK.
Rapid advances in analytical modelling and information processing capabilities, particularly in machine learning (ML) and artificial intelligence (AI), combined with ever more granular data, are currently transforming many aspects of everyday life and work. In this blog post we give a brief overview of basic ML concepts and potential applications at central banks based on our research. We demonstrate how an artificial neural network (NN) can be used for inflation forecasting, which lies at the heart of modern central banking, and show how its structure can help us understand model reactions. The NN generally outperforms more conventional models. However, it struggles to cope with the unseen post-crisis situation, which highlights the care needed when considering new modelling approaches.
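As a minimal sketch of the idea — using an invented persistent ‘inflation’ series and an off-the-shelf scikit-learn network rather than the post's actual data or specification — a small NN can be trained on lagged values to produce one-step-ahead forecasts:

```python
# Toy NN inflation-forecasting sketch on a synthetic persistent series.
# The series, lag length and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
T = 400
pi = np.zeros(T)
for t in range(1, T):                     # persistent AR(1) "inflation"
    pi[t] = 0.8 * pi[t - 1] + rng.normal(0, 0.2)

lags = 4
X = np.column_stack([pi[k:T - lags + k] for k in range(lags)])
y = pi[lags:]                             # one-step-ahead target

nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
nn.fit(X[:300], y[:300])                  # train on the early sample
rmse_nn = np.sqrt(np.mean((nn.predict(X[300:]) - y[300:]) ** 2))

# Naive no-change benchmark: forecast equals the latest observed value.
rmse_naive = np.sqrt(np.mean((X[300:, -1] - y[300:]) ** 2))
```

The caveat in the post applies here too: a network fitted to one regime can perform poorly on out-of-sample data whose behaviour it has never seen, such as the post-crisis period.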