Jeremy Franklin, Scott Woldum, Oliver Wood and Alex Parsons
How do markets react to the release of economic data? We use a set of machine learning and statistical algorithms to try to find out. In the period since the EU referendum, we find that UK data outturns have generally been more positive than market expectations immediately prior to their release. At the same time, the responsiveness of market interest rates to those data surprises fell below historic averages. The sensitivity of market rates has also been below historic averages in the US and Euro area, suggesting international factors may also have played a role. But there are some signs that the sensitivity has increased over the past year in the UK.
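The two quantities at work here, a data surprise and the market's sensitivity to it, can be sketched in a few lines. This is an illustrative toy with made-up numbers, not the authors' dataset or algorithms:

```python
import numpy as np

def surprise(actual, expected):
    """Data surprise: the outturn minus the market expectation just before release."""
    return actual - expected

def sensitivity(rate_changes, surprises):
    """OLS slope of same-day rate changes on data surprises: a simple
    measure of how responsive market rates are to data news."""
    x = np.asarray(surprises, dtype=float)
    y = np.asarray(rate_changes, dtype=float)
    x = x - x.mean()
    return float(x @ (y - y.mean()) / (x @ x))

# Toy data in which rates move 2 units per unit of surprise:
s = np.array([-1.0, 0.5, 1.5, -0.5, 2.0])
r = 2.0 * s
print(sensitivity(r, s))  # → 2.0
```

Estimating this slope over rolling windows, and comparing it to its historic average, is the kind of exercise that would reveal a fall in responsiveness.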
Smartphone apps and newsfeeds are designed to constantly grab our attention. And research suggests we’re distracted nearly 50% of the time. Could this be weighing on productivity? And why is the crisis of attention particularly concerning in the context of the rise of AI and the need, therefore, to cultivate distinctively human qualities?
Aidan Saggers and Chiranjit Chakraborty
Investment in the Financial Technology (FinTech) industry has increased rapidly post-crisis, and globalisation is apparent, with many investors funding companies far from their own physical locations. Using Crunchbase data, we gathered all venture capital investments in FinTech start-up firms from 2010 to 2014 and created network diagrams for each year.
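The underlying construction can be sketched roughly as follows: each funding round becomes an investor-to-company edge, edges are grouped by year, and cross-border deals are flagged. The records and names below are invented for illustration, not actual Crunchbase fields:

```python
from collections import defaultdict

# Hypothetical rows in the spirit of venture funding records:
# (investor, investor_country, company, company_country, year)
rounds = [
    ("SeqCap",  "US", "PayCoA",  "GB", 2012),
    ("IdxVent", "GB", "PayCoA",  "GB", 2012),
    ("SeqCap",  "US", "LendCoB", "IN", 2013),
]

def edges_by_year(rounds):
    """Group investor -> company funding edges by year, flagging
    cross-border deals (investor and company in different countries)."""
    g = defaultdict(list)
    for inv, inv_c, comp, comp_c, year in rounds:
        g[year].append((inv, comp, inv_c != comp_c))
    return g

for year, edges in sorted(edges_by_year(rounds).items()):
    cross = sum(1 for *_, is_cross in edges if is_cross)
    print(year, f"{cross}/{len(edges)} cross-border")
```

The share of cross-border edges per year is one simple way to see globalisation in such a network.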
Chiranjit Chakraborty and Andreas Joseph
Rapid advances in analytical modelling and information processing capabilities, particularly in machine learning (ML) and artificial intelligence (AI), combined with ever more granular data are currently transforming many aspects of everyday life and work. In this blog post we give a brief overview of basic concepts of ML and potential applications at central banks based on our research. We demonstrate how an artificial neural network (NN) can be used for inflation forecasting, which lies at the heart of modern central banking. We show how its structure can help us understand the model’s reactions. The NN generally outperforms more conventional models. However, it struggles to cope with the unseen post-crisis situation, which highlights the care needed when considering new modelling approaches.
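To give a flavour of the kind of model involved, here is a minimal one-hidden-layer network forecasting a simulated "inflation" series from its own lags, trained by plain gradient descent. The data-generating process and all parameter choices are invented for illustration; this is not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "inflation" series: a noisy, mildly nonlinear function of its own lags.
T = 400
pi = np.zeros(T)
for t in range(2, T):
    pi[t] = 1.3 * pi[t - 1] - 0.4 * np.tanh(pi[t - 2]) + 0.1 * rng.normal()

X = np.column_stack([pi[1:-1], pi[:-2]])   # lags 1 and 2 as inputs
y = pi[2:]
X = (X - X.mean(0)) / X.std(0)             # standardise inputs and target
y = (y - y.mean()) / y.std()

# One hidden layer with tanh activations, full-batch gradient descent.
H, lr, n = 8, 0.1, len(y)
W1 = rng.normal(scale=0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    err = (h @ W2 + b2).ravel() - y          # forecast error
    W2 -= lr * h.T @ err[:, None] / n        # output-layer gradient step
    b2 -= lr * err.mean()
    gh = (err[:, None] @ W2.T) * (1 - h**2)  # back-propagate through tanh
    W1 -= lr * X.T @ gh / n
    b1 -= lr * gh.mean(axis=0)

rmse = float(np.sqrt((err**2).mean()))
print("in-sample RMSE (standardised units):", round(rmse, 2))
```

A naive forecast of the (standardised) mean would score an RMSE of 1.0, so anything well below that indicates the network has learned the lag structure. The point about post-crisis data is visible in this setup too: a network fitted on one regime has no way to extrapolate to inputs it has never seen.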
Ambrogio Cesa-Bianchi, Chris Redl, Andrej Sokol and Gregory Thwaites
Volatile economic data or political events can lead to heightened uncertainty. This can then weigh on households’ and firms’ spending and investment decisions. We revisit the question of how uncertainty affects the UK economy, by constructing new measures of uncertainty and quantifying their effects on economic activity. We find that UK uncertainty depresses domestic activity only insofar as it is driven by developments overseas, and that other changes in uncertainty about the UK real economy have very little effect.
Marco Bardoscia, Paolo Barucca, Adam Brinley Codd and John Hill
The failure of Lehman Brothers on 15 September 2008 sent shockwaves around the world. But the losses at Lehman Brothers were only the start of the problem. The price of their bonds halved, almost overnight. Other institutions that held Lehman’s debt faced huge losses, and markets feared that those losses could trigger further failures. The good news is that our latest research suggests that risks within the UK banking system from one such contagion channel, “solvency contagion”, have declined sharply since 2008. We have developed a new model which quantifies risk from this channel, and helps us understand why it has fallen. Regulators are using the model to monitor this particular source of risk as part of the Bank’s annual concurrent stress test exercise.
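The mechanism can be illustrated in stylised form: one failure wipes out other banks' claims on the failed institution, which can exhaust their equity and trigger further failures. The balance sheets below are made up for illustration and have nothing to do with the Bank's calibrated model:

```python
import numpy as np

# exposures[i, j]: bank i's claim on bank j. Illustrative numbers only.
exposures = np.array([
    [0.0, 4.0, 0.0],
    [0.0, 0.0, 5.0],
    [1.0, 0.0, 0.0],
])
equity = np.array([5.0, 3.0, 6.0])

def cascade(exposures, equity, initial_default):
    """Iterate a simple default cascade: a bank fails when its total
    losses on claims against defaulted banks reach its equity."""
    defaulted = {initial_default}
    while True:
        losses = exposures[:, sorted(defaulted)].sum(axis=1)
        newly = {i for i in range(len(equity))
                 if i not in defaulted and losses[i] >= equity[i]}
        if not newly:
            return sorted(defaulted)
        defaulted |= newly

print(cascade(exposures, equity, 2))  # → [1, 2]: bank 2's failure drags down bank 1
```

Richer solvency-contagion models mark claims down continuously as a counterparty's equity falls, rather than only at outright default, but the cascade logic is the same.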
Alex Haberis, Richard Harrison and Matt Waldron
In textbook models of monetary policy, a promise to hold interest rates lower in the future has very powerful effects on economic activity and inflation today. This result relies on: a) a strong link between expected future policy rates and current activity; b) a belief that the policymaker will make good on the promise. We draw on analysis from our Staff Working Paper and show that there is a tension between (a) and (b) that creates a paradox: the stronger the expectations channel, the less likely it is that people will believe the promise in the first place. As a result, forward guidance promises in these models are much less powerful than standard analysis suggests.
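The strength of channel (a) can be seen in a textbook purely forward-looking IS curve, where today's output gap is the sum of expected future real-rate gaps, so a promised rate cut becomes more powerful the further ahead it extends. The sketch below uses arbitrary illustrative values for sigma and the neutral rate:

```python
# Textbook purely forward-looking IS curve: x_t = x_{t+1} - sigma * (i_t - r_star),
# solved backwards from a terminal output gap of zero. Values are illustrative.
def output_gap_today(rate_path, r_star=0.02, sigma=1.0):
    x = 0.0
    for i in reversed(rate_path):
        x -= sigma * (i - r_star)
    return x

# Holding rates 1pp below neutral: the stimulus grows one-for-one with the
# horizon of the promise. That is the powerful expectations channel, and the
# reason the temptation to renege grows with it.
for horizon in (1, 4, 8):
    print(horizon, round(output_gap_today([0.01] * horizon), 2))
# prints: 1 0.01, then 4 0.04, then 8 0.08
```

The paradox in the post follows directly: the larger the promised stimulus, the larger the ex-post incentive to abandon the promise, and the less credible it is ex ante.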
Marek Raczko, Mo Wazzi and Wen Yan
Economists view the United Kingdom as a small open economy. In economists’ jargon, this means that the UK is susceptible to foreign shocks, but that UK shocks do not influence other countries. That was definitely not the case in 2016. The result of the EU referendum, even though it was a UK-specific policy event, had a global impact. Our analysis shows that the Brexit vote not only had a significant impact on UK bond and equity markets, but also spilled over significantly to other advanced economies. Moreover, this approach suggests that the initial Brexit shock has only partially reversed and still remains a drag on global bond yields and equity prices, though there are wide error bands around that conclusion.
Olga Cielinska, Andreas Joseph, Ujwal Shreyas, John Tanner and Michalis Vasios
The Bank of England now has access to transaction-level data in over-the-counter derivatives (OTCD) markets, which were identified as lying at the centre of the 2007–2009 Global Financial Crisis (GFC). With tens of millions of daily transactions, these data catapult central banks and regulators into the realm of big data. In our recent Financial Stability Paper, we investigate the impact of the de-pegging of the euro-Swiss franc (EURCHF) exchange rate by the Swiss National Bank (SNB) on the morning of 15 January 2015. We reconstruct detailed trading and exposure networks between counterparties and show how these can be used to understand unprecedented intraday price movements, changing liquidity conditions and increased levels of market fragmentation over a longer period.
Sinem Hacioglu Hoke and Kerem Tuzcuoglu
We economists want to have our cake and eat it. We have far more data series at our disposal now than ever before. But using all of them in regressions would lead to wild “over-fitting” – finding random correlations in the data rather than explaining the true underlying relationships. Researchers using large data sets have historically experienced this dilemma – you can either throw away some of the information and retain clean, interpretable models; or keep most of the information but lose interpretability. This trade-off is particularly frustrating in a policy environment where understanding the identified relationships is crucial. However, in a recent working paper we show how to sidestep this trade-off by estimating a factor model with intuitive results.
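One standard way to compress many series into a few common drivers is principal-components factor extraction. The sketch below is a generic illustration of that idea, not the estimator in the paper: it recovers two simulated common factors from 100 noisy series:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 100 series driven by 2 common factors plus idiosyncratic noise.
T, N, K = 300, 100, 2
F = rng.normal(size=(T, K))            # latent common factors
L = rng.normal(size=(N, K))            # factor loadings
X = F @ L.T + 0.5 * rng.normal(size=(T, N))

X = (X - X.mean(0)) / X.std(0)         # standardise each series
U, s, Vt = np.linalg.svd(X, full_matrices=False)
factors = U[:, :K] * s[:K]             # first K principal components

# Share of total variance the two estimated factors explain:
share = float((s[:K]**2).sum() / (s**2).sum())
print(round(share, 2))
```

Plain principal components keep most of the information but are hard to interpret, which is exactly the trade-off the post describes; the contribution of the paper is a factor model whose estimated relationships remain interpretable.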