How did the Bank’s forecasts perform before, during and after the crisis?

Nicholas Fawcett, Riccardo Masolo, Lena Koerber, Matt Waldron.

Introduction: forecasting and policy-making

Forecasting is difficult, especially when it concerns the future.  If we needed a reminder, the 2008-09 financial crisis demonstrated that macroeconomic forecasts can be highly inaccurate when the economy is buffeted by large shocks (see, for example, Figure 1).  But that is not a good reason to avoid forecasting: monetary policy takes time to work, so forecasts are indispensable in monetary policymaking.  Instead, we need to understand how different models behave in the eye of the storm: do some cope better during breaks and crises than others?  And can we make better forecasts by using information that is not normally included in economic models?

To shed light on these issues, in a recent working paper we assessed the performance of two of the forecasting models used in the Bank relative to the MPC's judgemental forecasts.  All of the forecasts fared badly during the crisis.  But we find that a combination of expert judgement and other information can improve model forecasts, especially up to a year ahead.

Figure 1: Real-time forecasts in 2008Q4

a) Annual inflation

b) Annual GDP growth

ONS data in black; COMPASS density forecast in blue; Statistical Suite forecasts in green; Inflation Report forecasts in red.

The forecasts and models

The MPC have produced forecasts for inflation and GDP growth since monetary policy independence in 1997.  These represent – in the words of the Inflation Report foreword – ‘the MPC’s best collective judgement about the most likely paths for inflation and output’.  As this makes clear, these forecasts are judgemental rather than the product of one or even several models.  But the outputs of models, including model forecasts, form an important input to the MPC’s deliberations.

Chief among the staff forecasting models is the Central Organising Model for Projection Analysis and Scenario Simulation (COMPASS).  It is a fairly standard New Keynesian Dynamic Stochastic General Equilibrium (DSGE) model, in which monetary policy can influence output and inflation in the short run because prices are assumed to be sticky.  As its name suggests, COMPASS is used as the 'base' for constructing the Committee's judgemental forecasts, with judgements layered on top of the model's own forecast.  Like all models within this class, COMPASS has the advantage of articulating a clear economic structure (which permits shock identification and storytelling), but that comes at the (potential) cost of less accurate forecasting than other model classes might deliver.
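For readers unfamiliar with the class, the log-linearised core of a textbook three-equation New Keynesian model – far smaller than COMPASS itself, but sharing its structure – looks like this:

```latex
\begin{aligned}
x_t   &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1} - r^n_t\right) && \text{(IS curve)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t && \text{(Phillips curve)} \\
i_t   &= \phi_\pi \pi_t + \phi_x x_t && \text{(policy rule)}
\end{aligned}
```

Here x_t is the output gap, π_t inflation, i_t the policy rate and r^n_t the natural real rate.  The Phillips curve slope κ shrinks as prices become stickier, which is precisely what gives monetary policy its short-run traction over output and inflation in this class of model.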

Reflecting that, an important part of the suite of models used alongside COMPASS is a set of atheoretic models, the Suite of Statistical Forecast Models (see Kapetanios, Labhard and Price, 2008, Economic Modelling).  These models are designed to produce forecasts of inflation and GDP growth without a particular economic structure in mind.  All models in the set are estimated and contain 'reduced-form' relationships between macroeconomic variables.  The forecasts from these models are combined to produce one central forecast, in a way that maximises forecast accuracy.  This provides a degree of robustness to forecast failure, as first suggested by Bates and Granger (1969) (although if all the models in the Suite are susceptible to failing at the same time, combining them will not alleviate this, as pointed out by Reade and Hendry in a 2009 VoxEU post).
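As a rough illustration of the idea – not the Suite's actual weighting scheme, and with invented numbers – the sketch below combines forecasts from a handful of models, weighting each by the inverse of its historical mean squared error so that more accurate models count for more:

```python
import numpy as np

def combine_forecasts(forecasts, past_errors):
    """Combine model forecasts with inverse-MSE weights (Bates-Granger style).

    forecasts   : (n_models,) array of current forecasts, one per model
    past_errors : (n_models, n_past) array of each model's past forecast errors
    """
    mse = np.mean(past_errors**2, axis=1)       # historical mean squared error
    weights = (1.0 / mse) / np.sum(1.0 / mse)   # accurate models get more weight
    return weights @ forecasts

# toy example: three models forecasting annual GDP growth
past_errors = np.array([[0.2, -0.3, 0.1],
                        [0.5,  0.4, -0.6],
                        [0.1,  0.2, -0.1]])
print(combine_forecasts(np.array([2.1, 1.8, 2.4]), past_errors))
```

Because the weights depend only on past errors, the combination adapts automatically as models' relative performance shifts over time.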

Evaluating the forecasts

In order to make the comparison between the models and the MPC's forecasts fair, the model forecasts were constructed (and the models re-estimated) as if in 'real time': using only information that was available at the time each of the MPC forecasts was made.  This entailed constructing an archive of data for each of the variables in COMPASS, which we have released alongside the working paper for researchers interested in estimating their own models.
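For illustration, a minimal sketch of how such a real-time archive can be queried – the vintage labels and figures below are invented, not the published dataset:

```python
import pandas as pd

# Hypothetical real-time archive: one column of GDP growth per data vintage,
# indexed by observation quarter; later vintages revise and extend the data.
archive = pd.DataFrame(
    {"2008Q3": [0.6, 0.0, None], "2008Q4": [0.5, -0.6, -1.5]},
    index=pd.PeriodIndex(["2008Q1", "2008Q2", "2008Q3"], freq="Q"),
)

def real_time_series(archive, origin):
    """Return the data as it looked at a forecast origin: the latest
    vintage published on or before that date, later revisions excluded."""
    available = [v for v in archive.columns if v <= origin]
    return archive[max(available)].dropna()

print(real_time_series(archive, "2008Q3"))  # what a forecaster saw in 2008Q3
```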

The evaluation covers forecasts made between 2000Q1 and 2013Q1, focusing on forecasts for inflation and GDP growth.  Since the Inflation Report includes the central forecast of both variables – the modes – as well as the uncertainty surrounding them, we can evaluate both point forecasts and complete probability densities.  This enables us to assess whether the forecasts were correct on average, and whether they correctly captured the underlying uncertainty.
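As a concrete sketch of the two criteria – root mean squared error for point forecasts, and the average log predictive score for densities (higher is better) – the toy below assumes Gaussian densities for simplicity, although the fan-chart densities are in fact two-piece normal; all numbers are made up:

```python
import numpy as np
from scipy.stats import norm

def rmse(forecasts, outturns):
    """Root mean squared forecast error for point forecasts."""
    return np.sqrt(np.mean((np.asarray(forecasts) - np.asarray(outturns))**2))

def avg_log_score(means, stds, outturns):
    """Average log predictive score for Gaussian density forecasts:
    rewards densities that put high probability on the outturn."""
    return np.mean(norm.logpdf(outturns, loc=means, scale=stds))

outturns = [2.0, 1.5, -1.8]
print(rmse([1.8, 1.6, 0.9], outturns))
print(avg_log_score([1.8, 1.6, 0.9], [0.5, 0.5, 0.5], outturns))
```

A forecaster can score well on RMSE but badly on the log score if the central forecast is accurate while the stated uncertainty bands are too narrow or too wide.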

A summary of the main results (some of which can be seen from Figure 2) is as follows:

  1. There is no unambiguous ‘winner’: none of the three forecasting methods produces more accurate forecasts for both GDP and inflation over all forecast horizons, and consistently over the sample.
  2. The MPC’s Inflation Report forecasts are the most accurate over the first year of the forecast.
  3. Beyond the first year, the model forecasts tend to do better: the statistical suite is most accurate for GDP growth and COMPASS for inflation.
  4. The accuracy of forecasts from all three sources deteriorates following the financial crisis.
  5. This deterioration is particularly marked for COMPASS’ GDP growth forecasts.

It should be noted that not all of these differences are statistically significant.  It should also be noted that forecast accuracy in and of itself is not the only relevant criterion for assessing the utility of a structural model like COMPASS, which also has a role to play, for example, in providing scenario analysis.  Nevertheless, there are some general lessons that can be drawn from this exercise, which we discuss below.

Figure 2: Root mean squared forecast errors at different forecast horizons

a) Annual inflation

b) Annual GDP growth

COMPASS forecasts in blue with triangles; COMPASS augmented with a time-varying productivity trend in blue with crosses; COMPASS augmented with survey growth expectations in blue with circles; statistical suite in green; Inflation Report in red. The sample period starts in 2000Q1 and ends in 2013Q1.

The role of information and judgement in forecasting

One of the advantages that judgemental forecasts have over model forecasts is that they can take into account timely data (like survey indicators) and expert judgement, which may be particularly important in improving the accuracy of short-term forecasts.  We find evidence that this is indeed the case: COMPASS' forecasts of inflation up to one year ahead are competitive with the MPC's Inflation Report forecasts if they are conditioned on the one and two quarter ahead staff Short Term Inflation Forecast (not shown in Figure 2; see the working paper for details).
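In COMPASS itself, conditioning works through the model's structural shocks, but the mechanics are easiest to see in a toy: the sketch below pins the first two quarters of a simple AR(1) inflation forecast to a judgemental short-term forecast and lets the model take over thereafter.  The AR(1) and all numbers are illustrative assumptions, not COMPASS:

```python
import numpy as np

def conditional_forecast(history, stif, horizon, rho, mu):
    """Forecast inflation from an AR(1) around mean mu, treating the
    judgemental short-term forecasts (stif) as if they were data for
    the first quarters -- a crude analogue of conditioning COMPASS on
    the staff Short Term Inflation Forecast."""
    path = list(stif)                        # near-term quarters pinned to the STIF
    last = path[-1] if path else history[-1]
    for _ in range(horizon - len(path)):     # the model takes over beyond that
        last = mu + rho * (last - mu)
        path.append(last)
    return np.array(path)

print(conditional_forecast(history=[3.0], stif=[4.5, 4.0],
                           horizon=8, rho=0.8, mu=2.0))
```

The conditioned path inherits the judgemental near-term profile, then decays back towards the model's long-run mean at the model-implied speed.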

None of the models we evaluated coped well during the financial crisis.  This underscores the role that large structural breaks can play in forecast failure, even if they turn out to be temporary (see also this Bank Underground post).  It is probably too soon to tell whether the financial crisis depressed the UK's trend growth rate.  But the drop in accuracy of the GDP growth forecasts – in particular those from COMPASS – was clear soon after the crisis.  This forecast failure could have been caused by a failure to recognise a break in the trend rate of growth.  We explored this by allowing the productivity growth trend to vary over time (as a moving average) in a variant of the COMPASS model.  This somewhat improves the accuracy of COMPASS' GDP growth forecasts (but does not overturn the result that the drop in GDP forecast accuracy is more marked for COMPASS).
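A stylised version of that variant – not the actual COMPASS implementation, and with an assumed 20-quarter window and invented growth rates – replaces a fixed trend with a trailing moving average, so the trend estimate drifts down after a persistent slowdown:

```python
import numpy as np

def moving_average_trend(productivity_growth, window=20):
    """Time-varying trend as a trailing moving average of quarterly
    productivity growth; the window length is an assumption."""
    g = np.asarray(productivity_growth, dtype=float)
    return np.array([g[max(0, t - window + 1): t + 1].mean()
                     for t in range(len(g))])

# trend estimate drifts down after a persistent post-crisis slowdown
pre, post = np.full(40, 0.6), np.full(20, 0.1)
print(moving_average_trend(np.concatenate([pre, post]))[-5:])
```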

Another hypothesis for the deterioration in the accuracy of COMPASS' GDP growth forecasts following the financial crisis is the limited size and nature of the information set from which the model forecasts are produced.  In particular, COMPASS does not contain financial frictions, so its information set does not include indicators of credit availability.  While this does not seem to matter in normal times, it may be particularly important during a financial crisis, when financial indicators and/or summary indicators (like surveys) may respond in a much more timely manner to developments in the economy (see, for example, Forecasting with the FRBNY DSGE model).  Moreover, survey expectations may have reflected – at least to some extent – the exceptional character of the Great Recession, which is not well explained by COMPASS, whose properties are determined by parameters estimated using data over the Great Stability (when shocks were small and their effects were very short-lived).  For these reasons we explored the extent to which we could improve forecast accuracy by incorporating information from professional surveys (available at the forecast origin).  We found strong evidence that additional information can improve forecast accuracy: the COMPASS GDP growth forecasts are materially improved (and become competitive with the MPC's forecasts) when the model includes a survey measure of growth expectations, at the cost of a minor deterioration in the accuracy of the model's inflation forecasts.
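As a reduced-form illustration of the gain from survey information – not the approach in the paper, where the survey enters the model's observation equations, and with made-up numbers throughout – one can regress past outturns on past model and survey forecasts and use the fitted weights to combine the latest pair:

```python
import numpy as np

# past one-year-ahead forecasts and the realised outturns they targeted
model_past  = np.array([2.5, 2.3, 2.0, 1.2])
survey_past = np.array([2.2, 2.0, 1.0, 0.2])
outturns    = np.array([2.1, 1.9, 0.5, -0.4])

# least-squares weights on a constant, the model forecast and the survey
X = np.column_stack([np.ones_like(outturns), model_past, survey_past])
beta, *_ = np.linalg.lstsq(X, outturns, rcond=None)

# combine the latest pair of forecasts with the fitted weights
model_now, survey_now = 1.8, 0.6
print(beta @ np.array([1.0, model_now, survey_now]))
```

If the survey genuinely carries timely information the model lacks, it receives a non-trivial weight and the combined forecast tracks turning points better than the model alone.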

Conclusions

We draw the following conclusions (some of which are themes in the literature).  First, short-term forecasting using a combination of timely data, statistical methods and expert judgement is worth the effort because it can materially outperform standard time series model forecasts (including those from DSGE models).  Second, model-based forecasts perform relatively better (and can add value to judgemental forecasts) as the forecast horizon is extended, say beyond one year.  Third, forecasts of UK GDP growth from standard DSGE models fare particularly badly during and after financial crises, and possibly in the presence of structural breaks more broadly.  An obvious conclusion to draw is that these models might perform better if they contained meaningful financial frictions.  But even without those, their GDP growth forecasts can be materially improved by incorporating summary data like surveys in the information sets from which their forecasts are produced.

Nicholas Fawcett works in the Bank’s External Monetary Policy Committee Unit, Riccardo Masolo works in the Bank’s Conjunctural Assessment and Projections Division, Lena Koerber works in the Bank’s Conjunctural Assessment and Projections Division and Matt Waldron works in the Bank’s Monetary Assessment and Strategy Division.

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk

Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.