Category Archives: New Methodologies

The UK’s productivity puzzle is in the top tail of the distribution

Patrick Schneider

UK productivity growth has been puzzlingly slow since the crisis. After averaging 2% a year in the pre-crisis decade, growth in labour productivity (output per hour worked) has slowed to an average of only 0.5%. Extensive research and commentary on the productivity puzzle has suggested myriad causes for the malaise – including ‘zombie’ firms hoarding resources, sluggish investment in the face of uncertainty, mismeasurement and more – and has dismissed others that no longer seem plausible – including temporary labour hoarding. Using firm-level data, I show that the slower aggregate growth is entirely driven by the more productive firms in the economy.

In apparent contrast to my results, recent arguments have focused on the role that the weakest firms play in keeping down aggregate productivity. For example, Andy Haldane highlighted the ‘long tail of low-productivity companies’, which drags on the aggregate, in a speech last year and the OECD has published several papers (e.g. here [pdf] and here) analysing the divergence of the top end of the productivity distribution (‘frontier’ firms) from the rest (‘laggards’).

These ideas have been very influential. But unproductive firms are not responsible for the slowdown. Using a new method that links aggregate productivity to its distribution across workers, I find that the slowdown in productivity growth is isolated in the top tail of that distribution. The most productive firms are failing to improve on each other at the same rate as their predecessors did.

You can see this in Chart 1 – the two lines track the average, annual change in productivity at different parts of the distribution across workers, before and after the crisis. The post-crisis line is well below the pre-crisis one, but only toward the right, the top tail of the distribution. Surprisingly, the bottom end of the distribution appears to have been growing faster in recent years than it was leading up to the crisis.

Chart 1: Av. annual change in productivity, by centile of the productivity level distribution

The beauty of this chart is that the average height of each line is about equal to the change in the aggregate – so we can see which part of the distribution is moving (or not) to cause the aggregate to move. And, again, it’s the top end that’s doing the work. This fact doesn’t explain the puzzle. As with any statistical decomposition, we’re brought no closer to the why of the issue, but the where is a little clearer.

That’s about all I have to say. The rest of this post provides the working behind Chart 1. In the next section, I sketch the method I used, and then I take it to UK data, giving a bit more flavour to the result, ending up right back at Chart 1.

One thing this extra detail does is confirm the headline results in Andy’s speech and the OECD’s papers. You can see the long tail of low-productivity firms in Chart 4, and the divergence of the top end of the distribution from the rest in Chart 1. But in the data we have, these features are always there, so they can’t be to blame for the slowdown in growth. Indeed, if anything, increasing dispersion is where aggregate growth usually comes from. In this light, the UK growth puzzle exists because this increasing dispersion has slowed since the crisis.

The method (for the interested reader)

So how did I get to Chart 1?

Aggregate labour productivity (value-added per worker) is usually measured by taking a labour-weighted average of firm-level productivity. But it can also be approximated by the average of a number (Q) of equally spaced quantiles (as I outline in a forthcoming working paper). So, for example, we could measure aggregate productivity by taking the average of the 1st through 99th centiles of the distribution. More explicitly:

\Pi = \frac{VA}{L}=\sum_{i} \frac{l_i}{L} \pi_i \approx \frac{1}{Q} \sum_{j} q^{\pi}(j)

where i and j index firms and quantiles respectively, \pi is productivity, VA is value-added and l is labour (L being its aggregate). q^{\pi}(j) is the quantile function of productivity across workers: it picks out the productivity of the worker who is more productive than exactly j/Q of the others.

(Aside: we need to be careful to work with the correct distribution. The key is that aggregate productivity is the mean of the distribution across workers. Because we usually measure productivity at the firm level, we need to use labour weights to adjust the calculation to the right distribution.)
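To make the approximation concrete, here is a minimal sketch in Python on simulated data – not the ONS microdata used below – where the column names (va for value-added, l for employment) are illustrative assumptions:

```python
# Minimal sketch of the centile approximation on simulated data
# (illustrative only -- not the ONS microdata used in this post).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
firms = pd.DataFrame({
    "va": rng.lognormal(mean=4.0, sigma=1.2, size=10_000),  # value-added
    "l": rng.integers(1, 500, size=10_000),                 # employment
})
firms["pi"] = firms["va"] / firms["l"]  # firm-level productivity

# Exact aggregate: VA / L, equal to the labour-weighted mean of firm productivity
exact = firms["va"].sum() / firms["l"].sum()

# Approximation: average the 1st-99th centiles of the distribution
# *across workers* -- repeat each firm's productivity once per employee
workers = firms["pi"].repeat(firms["l"])
approx = np.percentile(workers, np.arange(1, 100)).mean()

print(f"exact: {exact:.3f}  approximation: {approx:.3f}")
```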

With this approximation, we can measure changes in the aggregate by averaging over changes in the centiles as well. Using the formula below, we can see which part of the distribution is moving (or not) to cause the aggregate to move.

\Delta \Pi \approx \frac{1}{Q} \sum_{j} \Delta q^{\pi}(j)

Note that we’re tracking growth in the distribution here – not growth in firms, nor the distribution of firm growth – so it could be that parts of the distribution move around or stay still while the firms located there shift around a lot.
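As a sketch of this decomposition, continuing the simulated example above (simulate_year is an assumed stand-in for loading a real firm-year cross-section, and the drift term is an arbitrary illustrative trend):

```python
# Sketch of the decomposition: the change in the aggregate is roughly
# the average of the per-centile changes. Simulated panel, illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

def simulate_year(year: int, n: int = 10_000) -> pd.DataFrame:
    """Synthetic firm data with mild aggregate productivity growth."""
    drift = 0.02 * (year - 2003)                 # assumed trend, not estimated
    va = rng.lognormal(mean=4.0 + drift, sigma=1.2, size=n)
    l = rng.integers(1, 500, size=n)
    return pd.DataFrame({"pi": va / l, "l": l})

def worker_centiles(firms: pd.DataFrame) -> np.ndarray:
    """1st-99th centiles of productivity across workers."""
    workers = firms["pi"].repeat(firms["l"])
    return np.percentile(workers, np.arange(1, 100))

data = {year: simulate_year(year) for year in range(2003, 2016)}

# The vector of per-centile changes shows *where* in the distribution
# the aggregate change happened; its mean approximates the aggregate change.
delta_q = worker_centiles(data[2015]) - worker_centiles(data[2003])
print(f"approximate change in the aggregate, 2003-15: {delta_q.mean():.3f}")
```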

The data (for the still interested reader)

So now that we have the method in hand, let’s apply it to UK data.

I’m using ONS firm-level microdata (combining ARDx with ABS 2015), and measuring productivity as real value-added (using 2-digit sector deflators) per employee.

This dataset doesn’t cover the whole economy – the surveys only aim to cover the non-financial business economy (a shame, given the importance of finance in the growth puzzle) and some other sectors only appear from 2009, so I cut these from the dataset for consistency. Because of these survey limitations, the ‘aggregate’ in the results below is a subset of the overall UK economy.

OK, so let’s first check how close the approximation is. The charts below show actual productivity and its growth as measured from the microdata, compared with the approximation. As you can see in Chart 2, the approximation has a consistent negative bias, because cutting out the top 1% drops some very large outliers, but the growth path is about right (Chart 3).

Chart 2: Aggregate productivity and its centile approximation (level)

Chart 3: Aggregate productivity and its centile approximation (growth)

Now that we know the approximation matches the aggregate patterns pretty well, we can look at the distributions underlying these aggregate figures. And we find:

1. The productivity distribution is highly skewed (Chart 4), so the top tail has a very strong influence on the aggregate.

The distribution has a long tail of workers in unproductive firms at the bottom, and a small group of workers in the ‘happy few’ extremely productive firms at the top. This is a well-known feature of the productivity distribution, whether or not we weight by labour. An implication of this shape is that the top tail has a very strong influence on the level of aggregate productivity in any given year, just as large outliers push up on any average.

Chart 4: Most of the distribution is stable over time

2. The top tail has an even greater influence on changes in aggregate productivity from year to year.

Over 70% of the growth in aggregate productivity between 2003 and 2015 was driven by the top two deciles. This is because the rest of the distribution doesn’t move around much; at least not in magnitudes comparable to movements in the upper tail (Chart 4). Incidentally, this is the same phenomenon as the OECD’s observation that the top tail is diverging from the rest.
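In the simulated example above, the analogous share is easy to compute (purely illustrative – the 70% figure itself comes from the ONS microdata):

```python
# Continuing the simulated sketch: each centile contributes 1/Q of its
# own change to the aggregate change, so the top two deciles' share is
# the last 20 centiles' share of the summed per-centile changes.
top_share = delta_q[-20:].sum() / delta_q.sum()
print(f"share of 2003-15 growth from the top two deciles: {top_share:.0%}")
```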

3. The productivity puzzle (slower aggregate growth after the crisis than before) is located in the top tail of the distribution.

We can locate the growth puzzle by comparing changes in the pre- and post-crisis periods. Chart 5 is a reproduction of Chart 1 with an extra line; it shows the average annual change of each centile over three distinct periods – the pre-crisis years (2004-07), the crisis (2008-09) and the post-crisis period (2010-15).

To read the chart, pick a centile and the three lines show how that part of the distribution changed, on average, over these different periods. For example, the median grew at about the same rate pre- and post-crisis, and had quite a drop during the crisis.

Chart 5: Average annual change in productivity by centile

The growth puzzle is the gap between the pre- and post-crisis lines. The chart shows that lower sections of the distribution have actually grown faster post-crisis than they did before it (pink line above the navy) and so cannot be driving the puzzle. By contrast, the top two deciles grew far slower (pink line below the navy) and therefore this is where the growth puzzle is located.
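For completeness, here is a sketch of the Chart 5 calculation, continuing the simulated example from the method section (worker_centiles and data are the illustrative objects defined there; the period windows mirror the post’s):

```python
# Average annual change of each centile within each period
# (continues the simulated sketch; reuses worker_centiles and data).
import numpy as np

periods = {
    "pre-crisis": range(2004, 2008),   # changes over 2004-07
    "crisis": range(2008, 2010),       # changes over 2008-09
    "post-crisis": range(2010, 2016),  # changes over 2010-15
}

def avg_annual_change(years) -> np.ndarray:
    """Mean year-on-year change of each centile over the given years."""
    changes = [worker_centiles(data[y]) - worker_centiles(data[y - 1])
               for y in years]
    return np.mean(changes, axis=0)

lines = {name: avg_annual_change(yrs) for name, yrs in periods.items()}
# Plotting each 99-vector in `lines` against centiles 1-99 gives a
# chart in the spirit of Chart 5.
```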

This work contains statistical data from ONS which is Crown Copyright. The use of the ONS statistical data in this work does not imply the endorsement of the ONS in relation to the interpretation or analysis of the statistical data. This work uses research datasets which may not exactly reproduce National Statistics aggregates.

Patrick Schneider works in the Bank’s Structural Economic Analysis Division.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied.

Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.


Filed under Macroeconomics, New Methodologies

Open letters: Laying bare linguistic patterns in PRA messages using machine learning

David Bholat and James Brookes

In a recent research paper, we show that the way supervisors write to banks and building societies (hereafter ‘banks’) has changed since the financial crisis. Supervisors now adopt a more directive, forward-looking, complex and formal style than they did before the financial crisis. We also show that their language and linguistic style are related to the nature of the bank. For instance, banks that are closest to failure get letters that have a lot of risk-related language in them. In this blog, we discuss the linguistic features that most sharply distinguish different types of letters, and the machine learning algorithm we used to arrive at our conclusions.

Continue reading


Filed under Banking Supervision, Microprudential Regulation, New Methodologies, Text mining

Can central bankers become Superforecasters?

Aakash Mankodi and Tim Pike

Tetlock and Gardner’s acclaimed work on Superforecasting provides a compelling case for seeing forecasting as a skill that can be improved, and one that is related to the behavioural traits of the forecaster. These so-called Superforecasters have in recent years been pitted against experts ranging from US intelligence analysts to participants in the World Economic Forum, and have performed on a par with them or better, accurately predicting the outcomes of a broad range of questions. Sounds like music to a central banker’s ears? In this post, we examine the traits of these individuals, compare them with economic forecasting and draw some related lessons. We conclude that considering the principles and applications of Superforecasting can enhance the work of central bank forecasting.

Continue reading


Filed under Macroeconomics, Monetary Policy, New Methodologies

The 2016 Sterling Flash Episode

Joseph Noss, Liam Crowley-Reidy and Lucas Pedace

Continue reading


Filed under Currency, Financial Markets, Financial Stability, New Methodologies

Completing Correlation Matrices

Dan Georgescu and Nicholas J. Higham

Correlation matrices arise in many applications to model the dependence between variables. Where there is incomplete or missing information for the variables, this may lead to missing values in the correlation matrix itself, and the problem of how to complete the matrix. We show that some of these practical problems can be solved explicitly, via simple formulae, and we explain how to use mathematical tools to solve the more general problem where explicit solutions may not exist. “Simple” is, of course, a relative term, and the underlying matrix algebra and optimization necessarily make this article more mathematically sophisticated than the typical Bank Underground post.
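As a flavour of the explicit-formula case (a textbook sketch, not the paper’s general method): for a 3×3 correlation matrix missing one entry, the completion that maximises the determinant sets the corresponding partial correlation to zero, giving a simple product formula.

```python
# Hedged sketch: complete a 3x3 correlation matrix with unknown r13.
# Maximising det = 1 - r12^2 - r13^2 - r23^2 + 2*r12*r13*r23 over r13
# gives r13 = r12 * r23 (i.e. zero partial correlation).
import numpy as np

r12, r23 = 0.6, 0.4        # known entries (illustrative values)
r13 = r12 * r23            # determinant-maximising completion

C = np.array([
    [1.0, r12, r13],
    [r12, 1.0, r23],
    [r13, r23, 1.0],
])

# The completion is a valid correlation matrix (positive semidefinite)
assert np.all(np.linalg.eigvalsh(C) >= 0)
print(np.round(C, 3))
```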

Continue reading


Filed under Insurance, New Methodologies

Stirred, not shaken: how market interest rates have been reacting to economic data surprises

Jeremy Franklin, Scott Woldum, Oliver Wood and Alex Parsons

How do markets react to the release of economic data? We use a set of machine learning and statistical algorithms to try to find out.  In the period since the EU referendum, we find that UK data outturns have generally been more positive than market expectations immediately prior to their release. At the same time, the responsiveness of market interest rates to those data surprises fell below historic averages.  The sensitivity of market rates has also been below historic averages in the US and Euro area, suggesting international factors may also have played a role. But there are some signs that the sensitivity has increased over the past year in the UK.

Continue reading


Filed under Financial Markets, Macroeconomics, New Methodologies

Is the economy suffering from the crisis of attention?

Dan Nixon

Smartphone apps and newsfeeds are designed to constantly grab our attention. And research suggests we’re distracted nearly 50% of the time. Could this be weighing on productivity? And why is the crisis of attention particularly concerning in the context of the rise of AI and the need, therefore, to cultivate distinctively human qualities?

Continue reading


Filed under Macroeconomics, New Methodologies

Bitesize: Flourishing FinTech

Aidan Saggers and Chiranjit Chakraborty

Investment in the Financial Technology (FinTech) industry has increased rapidly post-crisis, and globalisation is apparent, with many investors funding companies far from their own physical locations. From Crunchbase data we gathered all the venture capital investments in FinTech start-up firms from 2010 to 2014 and created network diagrams for each year.
Continue reading


Filed under International Economics, Market Infrastructure, New Methodologies

New machines for The Old Lady

Chiranjit Chakraborty and Andreas Joseph

Rapid advances in analytical modelling and information processing capabilities, particularly in machine learning (ML) and artificial intelligence (AI), combined with ever more granular data, are currently transforming many aspects of everyday life and work. In this blog post we give a brief overview of basic concepts of ML and potential applications at central banks based on our research. We demonstrate how an artificial neural network (NN) can be used for inflation forecasting, which lies at the heart of modern central banking. We show how its structure can help to understand model reactions. The NN generally outperforms more conventional models. However, it struggles to cope with the unseen post-crisis situation, which highlights the care needed when considering new modelling approaches.

Continue reading


Filed under Microprudential Regulation, Monetary Policy, New Methodologies

Does domestic uncertainty really matter for the economy?

Ambrogio Cesa-Bianchi, Chris Redl, Andrej Sokol and Gregory Thwaites

Volatile economic data or political events can lead to heightened uncertainty. This can then weigh on households’ and firms’ spending and investment decisions. We revisit the question of how uncertainty affects the UK economy, by constructing new measures of uncertainty and quantifying their effects on economic activity. We find that UK uncertainty depresses domestic activity only insofar as it is driven by developments overseas, and that other changes in uncertainty about the UK real economy have very little effect.

Continue reading


Filed under Financial Markets, International Economics, Macroeconomics, New Methodologies