What do two million accounts tell us about the impact of Covid-19 on small businesses?

James Hurley, Sudipto Karmakar, Elena Markoska, Eryk Walczak and Danny Walker


This post is the second in a series about the Covid-19 pandemic and its impact on business activity.

Covid-19 led to a sharp reduction in economic activity in the UK. As the shock played out, small and medium-sized enterprises (SMEs) were expected to be more exposed than larger businesses. But until now, we have not had the data to analyse the impact on SMEs. In a recent Staff Working Paper we use a new data set containing monthly information on the current accounts of two million UK SMEs. We show that the average SME saw a very large drop in turnover growth and that the crisis played out very differently for different types of SME. The youngest SMEs, those in consumer-facing sectors, and those in Scotland and London were hit hardest.

Continue reading “What do two million accounts tell us about the impact of Covid-19 on small businesses?”

Why fragmentation of the global data supply chain poses risks to financial services

Matthew Osborne and David Bholat

Every minute of the day, Google returns over 3.5 million searches, Instagram users post nearly 50,000 photos, and Tinder matches about 7,000 times. We all produce and consume data, and financial firms are key contributors to this trend. Indeed, the global business models of many firms have amplified the data-intensity of the financial services industry. But potential fragmentation of the global data supply chain now poses a novel risk to financial services. In this blog post, we first discuss the importance of data flows for financial services, and then potential risks from blockages to these flows.

Continue reading “Why fragmentation of the global data supply chain poses risks to financial services”

Making big data work for economics

Arthur Turrell, Bradley Speigner, James Thurgood, Jyldyz Djumalieva, and David Copple

‘Big Data’ present big opportunities for understanding the economy.  They can be cheaper and more detailed than traditional data sources, and on scales undreamt of by survey designers.  But they can be challenging to use because they rarely adhere to the nice neat classifications used in surveys.  We faced just this challenge when trying to understand the relationship between the efficiency with which job vacancies are filled and output and productivity growth in the UK.  In this post, we describe how we analysed text from 15 million job adverts to glean insights into the UK labour market.
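The matching problem can be made concrete with a minimal sketch. This is our illustration of the general idea, not the pipeline used in the research: the category labels and keyword sets below are invented, and the similarity measure is a simple token overlap rather than the method actually applied to the 15 million adverts.

```python
# Toy classifier: score free-form advert text against keyword sets for
# a handful of hypothetical occupational categories, and assign each
# advert to the category with the highest Jaccard (token-overlap) score.

# Illustrative categories only -- real work maps adverts to official
# occupational classifications, which are far larger.
CATEGORIES = {
    "Programmers and software development professionals":
        {"software", "developer", "engineer", "programmer", "python"},
    "Nurses":
        {"nurse", "nursing", "ward", "clinical", "patient"},
    "Chefs":
        {"chef", "kitchen", "cook", "catering", "menu"},
}

def tokenise(text):
    """Lower-case an advert and split it into a set of word tokens."""
    return {t.strip(".,;:!?") for t in text.lower().split()}

def classify(advert):
    """Return the category whose keywords best overlap the advert text."""
    tokens = tokenise(advert)
    def jaccard(keywords):
        return len(tokens & keywords) / len(tokens | keywords)
    return max(CATEGORIES, key=lambda c: jaccard(CATEGORIES[c]))

ad = "Senior Python developer to join our software engineering team"
print(classify(ad))  # -> Programmers and software development professionals
```

The point of the sketch is only that unstructured advert text has to be pushed into some classification before it can be compared with survey-based statistics; adverts rarely use the survey's own vocabulary, which is what makes the mapping hard.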

Continue reading “Making big data work for economics”

Using machine learning to understand the mix of jobs in the economy in real-time

Arthur Turrell, Bradley Speigner, James Thurgood, Jyldyz Djumalieva and David Copple

Recently, economists have been discussing, on the one hand, how artificial intelligence (AI) powered by machine learning might increase unemployment, and, on the other, how AI might create new jobs. Either way, the future of work is set to change. We show in recent research how unsupervised machine learning, driven by data, can capture changes in the type of work demanded.
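To give a flavour of what "unsupervised" means here, the toy sketch below clusters invented adverts with a minimal k-means, with no job categories specified in advance. This is our illustration of the concept, not the algorithm used in the research; the adverts, keyword list and deterministic initialisation are all assumptions made for the example.

```python
# Represent each advert as a vector of keyword counts, then group the
# vectors with a tiny k-means: similar adverts end up in the same
# cluster without anyone defining the job categories up front.
KEYWORDS = ["data", "care", "kitchen", "software", "patient", "menu"]

ADVERTS = [
    "software engineer building data pipelines and data tools",
    "data scientist with software background",
    "nurse providing patient care on the ward, patient focused",
    "care assistant supporting patient wellbeing",
    "chef to design the menu and run the kitchen",
    "kitchen porter supporting the menu team",
]

def vectorise(text):
    """Count how often each keyword appears in the advert."""
    words = text.lower().split()
    return [words.count(k) for k in KEYWORDS]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k, iters=10):
    """Minimal k-means with deterministic init (first k points)."""
    centroids = vectors[:k]
    labels = [0] * len(vectors)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: sq_dist(v, centroids[c]))
                  for v in vectors]
        for c in range(k):
            members = [v for v, l in zip(vectors, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels

labels = kmeans([vectorise(a) for a in ADVERTS], k=3)
print(labels)  # the two tech, two care and two kitchen ads pair up
```

In this miniature example the algorithm recovers three occupation-like groups purely from word counts, which is the sense in which the type of work demanded can be read off the data rather than imposed on it.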

Continue reading “Using machine learning to understand the mix of jobs in the economy in real-time”

Big Data jigsaws for Central Banks – the impact of the Swiss franc de-pegging

Olga Cielinska, Andreas Joseph, Ujwal Shreyas, John Tanner and Michalis Vasios

The Bank of England now has access to transaction-level data on over-the-counter derivatives (OTCD) markets, which were identified as lying at the centre of the 2007-2009 Global Financial Crisis (GFC). With tens of millions of daily transactions, these data catapult central banks and regulators into the realm of big data.  In our recent Financial Stability Paper, we investigate the impact of the de-pegging of the euro-Swiss franc (EURCHF) exchange rate by the Swiss National Bank (SNB) on the morning of 15 January 2015. We reconstruct detailed trading and exposure networks between counterparties and show how these can be used to understand unprecedented intraday price movements, changing liquidity conditions and increased levels of market fragmentation over a longer period.
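The network-reconstruction step can be illustrated with a stylised example. The counterparty names and notionals below are invented for the sketch (real inputs are trade repository reports); the idea is simply that bilateral transactions are netted into a directed, weighted exposure network.

```python
from collections import defaultdict

# Stylised, made-up transactions: (buyer, seller, notional).
trades = [
    ("Bank A", "Bank B", 100),
    ("Bank B", "Bank A", 40),
    ("Bank A", "Dealer C", 70),
    ("Fund D", "Dealer C", 25),
]

# Net the flows so each counterparty pair is keyed in one direction.
net = defaultdict(float)
for buyer, seller, notional in trades:
    if (seller, buyer) in net:
        net[(seller, buyer)] -= notional  # offsetting trade
    else:
        net[(buyer, seller)] += notional

# Flip any edge whose net position came out negative, so that every
# edge weight is a positive net exposure from the first name to the second.
edges = {}
for (a, b), w in net.items():
    if w >= 0:
        edges[(a, b)] = w
    else:
        edges[(b, a)] = -w

print(edges)
# Bank A nets to 60 vs Bank B and 70 vs Dealer C; Fund D holds 25 vs Dealer C.
```

Once transactions are collapsed into a network like this, standard network measures (concentration around dealers, counterparty degree, and so on) can be tracked intraday, which is how unusual price moves and fragmentation become visible.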

Continue reading “Big Data jigsaws for Central Banks – the impact of the Swiss franc de-pegging”

The U Word: What can text analytics tell us about how far uncertainty has risen during 2016?

Alastair Cunningham, David Bradnum and Alastair Firrell

Uncertainty is a hot topic for economists at the moment.  Have business leaders become more uncertain as a result of the EU referendum?  If so, has that uncertainty had any effect on their plans?  The Bank’s analysts look at lots of measures of economic uncertainty, from complex financial market metrics to how often newspaper articles mention it.  But few of those measures are sourced directly from the trading businesses up and down the country whose investment and employment plans affect the UK economy.  This blog reports on recent efforts to draw out what the Bank’s wide network of business contacts are telling us about uncertainty – comparing what we’re hearing now to trends seen in recent years.
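The simplest version of this kind of text analytics is a keyword-frequency measure. The snippets below are invented for illustration (they are not real intelligence from business contacts), and the term list is an assumption; the sketch shows only the mechanic of tracking how often uncertainty-related language appears over time.

```python
import re

# Invented report snippets in the spirit of notes from business contacts.
reports_2015 = [
    "Investment plans are firm and order books are healthy.",
    "Hiring continues; demand is stable.",
]
reports_2016 = [
    "Contacts are uncertain about demand after the referendum.",
    "Uncertainty has led some firms to pause investment.",
    "Plans unclear; decisions postponed amid uncertainty.",
]

# Match 'uncertain', 'uncertainty', 'unclear', etc., case-insensitively.
UNCERTAINTY_TERMS = re.compile(r"\buncertain\w*|\bunclear\b", re.IGNORECASE)

def uncertainty_rate(reports):
    """Share of reports mentioning at least one uncertainty-related term."""
    hits = sum(1 for r in reports if UNCERTAINTY_TERMS.search(r))
    return hits / len(reports)

print(uncertainty_rate(reports_2015), uncertainty_rate(reports_2016))
# -> 0.0 1.0
```

A rate computed this way can be charted period by period, which is essentially how word-count-based uncertainty indices compare "what we're hearing now" with earlier years.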

Continue reading “The U Word: What can text analytics tell us about how far uncertainty has risen during 2016?”

Tweets, Runs and the Minnesota Vikings

David Bradnum, Christopher Lovell, Pedro Santos and Nick Vaughan

Could Twitter help predict a bank run? That was the question a group of us were tasked with answering in the run up to the Scottish independence referendum. To investigate, we built an experimental system in just a few days, to collect and analyse tweets in real time. In the end, fears of a bank run were not realised, so the jury is still out on Twitter. But even so we learnt a lot about social media analysis (and a little about American Football) and argue that text analytics more generally has much potential for central banks.

Continue reading “Tweets, Runs and the Minnesota Vikings”

It’s a model – but is it looking good? When banks’ internal models may be more style than substance.

Tobias Neumann

Most large banks assess the capital they need for regulatory purposes using ‘internal models’.  The idea is that banks are in a better position to judge the risks on their own balance sheets.  But there are two fundamental problems that can arise when it comes to modelling.  The first is complexity.  We live in a complex world, but does that mean a complex model is always the best way of dealing with it? Probably not. The second problem is a lack of ‘events’ (eg defaults).  If we cannot observe an event, it is difficult to model it credibly, so internal models may not work well.
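The second problem can be made concrete with a back-of-the-envelope calculation (our illustration, not taken from the post). If a bank has observed zero defaults among n obligors, the data are consistent with any true default probability up to roughly the value at which seeing no defaults becomes implausible:

```python
def pd_upper_bound(n_obligors, confidence=0.95):
    """One-sided upper confidence bound on the default probability when
    zero defaults have been observed among n obligors: the largest p such
    that observing no defaults is still plausible, i.e. (1-p)**n = 1-conf."""
    return 1 - (1 - confidence) ** (1 / n_obligors)

# With no observed defaults, the data cannot distinguish a true default
# probability near zero from one of about 3% in a 100-obligor portfolio.
for n in (100, 1_000, 10_000):
    print(n, round(pd_upper_bound(n), 5))
```

Even with 10,000 obligors and a spotless record, the bound is still around 0.03%, which is why portfolios with few observable events are so hard to model credibly: the data simply cannot pin the risk down.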

Continue reading “It’s a model – but is it looking good? When banks’ internal models may be more style than substance.”