Machine learning the news for better macroeconomic forecasting

Arthur Turrell, Eleni Kalamara, Chris Redl, George Kapetanios and Sujit Kapadia

Every day, journalists collate information about the world and, with nimble keystrokes, re-express it succinctly as newspaper copy. Macroeconomic events are no exception. So could there be additional valuable information about the economy contained in the news? In a recent research paper, we ask whether newspaper stories could help to predict future macroeconomic developments. We find that news can be used to enhance statistical economic forecasts of growth, inflation and unemployment, but only by using supervised machine learning techniques. We also find that the biggest forecast improvements occur when it matters most: during stressed periods.

Newspaper articles are different from the official data produced by statistical agencies such as the ONS in several respects. Official data, like GDP, have a clear meaning and method of construction, whereas newspaper articles cover everything and anything.

But newspaper articles can potentially augment official statistics in forecasts because of three key properties: they are timely, reflecting developments as they happen; they may affect the economic behaviour of the people reading them; and they cover developments that traditional statistics aren’t designed to tell us about (‘unknown unknowns’ in the words of Donald Rumsfeld). For instance, gathering economic storm clouds could take many forms, but we think that journalists will always write about them if they have the potential to affect the national economy — whether they are captured in national statistics or not.

Policymakers already use a vast range of tools and information, including the official data released by statistical agencies, to make their judgements. But anything that can expand the pool of data further, and so allow them to be better forewarned of what might be ahead, is welcome. Of course, policymakers already read the news and factor it into their judgements — here we are formalising that process using statistical models. However, these models have the advantage that they can ingest more articles than any one person could read.

To test whether newspaper text contains useful information, we took half a million articles from three popular British newspapers — The Daily Mail, The Daily Mirror, and The Guardian — and tried numerous methods to forecast GDP, unemployment, inflation, and more. We chose these three because they broadly reflect the differing styles and readership of UK newspapers, and because they have long back-runs available in digital formats.

To show whether newspaper text contains useful information on its own (for now ignoring the official data), we took some of the most popular ways of turning text into sentiment indices, for example counting positive and negative words using the Loughran and McDonald dictionary (see our working paper for a full list of indices), and applied them to articles in The Daily Mail. In Figure 1, we plot these sentiment indices against indicators often used to gauge sentiment about the economy, for example the Purchasing Managers' Index (PMI). The blue solid line is the mean of the text-based measures; the dotted line shows the mean of indicators often used in policy; and the pink shaded region shows the swathe from the minimum to the maximum of those indicators.
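The dictionary-counting approach can be sketched in a few lines. This is a minimal illustration, not the method from the paper: the word sets below are tiny, made-up stand-ins for the actual Loughran and McDonald lists, and the scoring (net positive words as a share of total words) is just one common convention.

```python
# Illustrative dictionary-based sentiment index. The word lists are
# hypothetical stand-ins for the Loughran and McDonald dictionary.
POSITIVE = {"growth", "recovery", "boom", "gain", "strong"}
NEGATIVE = {"recession", "crisis", "loss", "weak", "slump"}

def sentiment_index(article: str) -> float:
    """(positive count - negative count) / total words; 0.0 for empty text."""
    words = article.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_index("strong recovery despite weak trade"))  # → 0.2
```

Averaging such scores over all articles in a month yields a monthly sentiment series that can be plotted against indicators like the PMI.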

Figure 1

Figure 1 makes it clear that newspaper text-based sentiment closely tracks other measures of economic sentiment. We also see that it often leads other indicators of sentiment: it anticipates the downturn in sentiment during the 2007-08 Global Financial Crisis and the subsequent recovery. This is a hint that text might be useful in economic forecasts.

However, newspaper text must provide additional information relative to the standard economic data that is used by statistical models if it is to be useful. Figure 1 just shows that it contains a signal, but it could be the same signal that’s captured by existing data. And, indeed, when we ran forecasting exercises using the popular existing methods of gauging sentiment and uncertainty, we found that the vast majority did not improve on forecasts that took account of standard economic data, which in our case included real output, labour market indicators, inflation, business and household expectations, and more.

So, to get the best out of text, we came up with an alternative based on machine learning. Rather than dictate how sentiment is determined by text, for instance by assigning 'happy' a score of +1 and 'sad' a score of -1, as most current methods do, we fed the counts of many different words related to the economy into a neural network (a type of machine learning algorithm) and let it decide which words to weight when forecasting the future. We used this trained neural network to forecast economic activity out-of-sample up to 9 months into the future. We found that, across newspapers, forecasting horizons and macroeconomic variables, the combination of text, standard economic data and a neural network was able to improve upon similar forecasts that used only the standard data. The performance of the more sophisticated approach was fairly similar regardless of which newspaper the text came from. We tried numerous other machine learning approaches and not all were able to augment forecasts relative to the benchmark, but those that were put a little weight on a lot of different terms from the text. The neural network provided the best overall forecast improvements.
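The idea of letting a network learn its own word weights can be sketched as follows. Everything here is a hedged illustration on synthetic data: the features, network architecture and sample sizes are placeholders, not the specification used in the paper.

```python
# Sketch: word-count features plus lagged macro data fed to a small neural
# network, which learns which words matter. All data below are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_months, n_words = 200, 30
word_counts = rng.poisson(3, size=(n_months, n_words)).astype(float)
lagged_gdp = rng.normal(0, 1, size=(n_months, 1))
X = np.hstack([word_counts, lagged_gdp])
# Synthetic target: growth loads on a couple of "signal" words and the lag.
y = (0.5 * lagged_gdp[:, 0] + 0.3 * word_counts[:, 0]
     - 0.3 * word_counts[:, 1] + rng.normal(0, 0.1, n_months))

# Train on the first 150 months, then forecast "out of sample" on the rest.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:150], y[:150])
preds = model.predict(X[150:])
print(preds.shape)  # → (50,)
```

The key design choice, mirrored here, is that no word's sign or weight is fixed in advance: the network is free to spread small weights across many terms, which is the pattern the successful approaches in the paper shared.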

Figure 2

As a simple demonstration, Figure 2 shows an out-of-sample forecast by a neural network of GDP growth 3 months ahead; here RMSE stands for root mean squared error, and a smaller RMSE means better forecast performance. The neural net uses text from The Daily Mail and GDP from the previous month (using the ONS' monthly GDP series). The benchmark forecast uses ordinary least squares (OLS) and GDP from the previous period, as there is overwhelming evidence that, on average, and across series and time periods, OLS is tough to beat (the results are the same if the benchmark uses a neural network rather than OLS). The figure shows that adding text to existing data can improve forecast performance.
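The benchmark comparison behind a figure like this can be sketched as below. Again, this is a toy on synthetic data: the "text" features are random stand-ins, and the real exercise uses newspaper word counts and the ONS monthly GDP series.

```python
# Illustrative RMSE comparison: OLS on lagged GDP only (benchmark) versus a
# neural net that also sees text features. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def rmse(actual, forecast):
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

rng = np.random.default_rng(1)
n = 240
lag = rng.normal(0, 1, (n, 1))
text = rng.normal(0, 1, (n, 5))  # stand-in "text sentiment" features
y = 0.4 * lag[:, 0] + 0.6 * text[:, 0] + rng.normal(0, 0.2, n)

train, test = slice(0, 180), slice(180, n)
ols = LinearRegression().fit(lag[train], y[train])
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                  random_state=0).fit(np.hstack([lag, text])[train], y[train])

rmse_ols = rmse(y[test], ols.predict(lag[test]))
rmse_nn = rmse(y[test], nn.predict(np.hstack([lag, text])[test]))
print(rmse_nn < rmse_ols)  # text features should lower the RMSE here
```

By construction the synthetic target depends on a text feature the benchmark cannot see, so the text-augmented model's RMSE comes out lower, which is the qualitative pattern Figure 2 shows for real data.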

Exploring the channels behind the success of forecasts that include text in this way is outside of the scope of the research but there is a plausible story that no news is good news and — conversely — bad economic news that’s brewing is news, and journalists will report on it. And, as noted, it’s also possible that — like Keynes’ animal spirits or Shiller’s ideas about irrational exuberance and viral economic narratives — newspapers themselves play a role in forming expectations and shaping economic behaviour.

However, we did explore when it is that newspaper text adds the most forecasting power, and it seems that it’s most potent at times of economic change, for instance during the Global Financial Crisis. So if text is trying to tell a story about an incoming economic storm, it’s worth taking it seriously — and such periods of change and stress are precisely when good economic forecasts matter the most.

US President Bill Clinton once said, “Follow the trend lines, not the headlines” but, with the help of machine learning, perhaps we can do both?

Arthur Turrell works in the Bank’s Advanced Analytics Division, Eleni Kalamara works at King’s College London, Chris Redl works at the IMF, George Kapetanios works at King’s College London and Sujit Kapadia works at the ECB.

If you want to get in touch, please email us or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge — or support — prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.