Enthusiasts use the tiny Raspberry Pi computers for many things. Fun ones include garage door opening, retro gaming, a voice-activated tea maker, live images from near-space and even a GPS kitten tracker. These computers are primarily educational but do anything a normal computer does, so users also send email, play Minecraft, program and (it turns out) do macroeconomic modelling.
Of all of these things, this last one might seem on the fringes of the far-fetched (the kitten thing notwithstanding). There was a time when macro-modelling was an expensive business. You needed economic theorists, at least a couple of tame econometricians, someone who could actually program, research assistants to delve into the long-abandoned stacks of library basements to source data, and alchemists well-versed in the dark arts of forecasting and simulation. Nowadays: not so much. Of course, expertise in each of these areas doesn’t come cheap – dark arts need practising after all – but the economics of macroeconomic modelling has undergone a revolution. Many of the world’s leading models are given away free via easily-accessible working papers. Now no-one need hire an army of clever people to build a model when these good folk have already provided much of what is required. Reading a book like Herbst and Schorfheide (2016) will make it easier still.
But in the old days, even assuming you had a model, the really hard stuff to get right – because it needed skills that aren’t exactly part of the core economics curriculum – was the software. Perhaps even more remarkably this has moved from being proprietary and expensive to being given away for free too. Building and using a model – or rather re-building and adapting someone else’s model – should be pretty easy, and pretty cheap. And data too has undergone its own accessibility revolution.
All of this, of course, is in principle; doing it might be a bit trickier. A nascent modeller might see a lot of metaphorical hills to climb. So I aim to show you just how far we’ve come, building a modern macroeconomic model from freely-available sources then estimating and simulating it using freely-available software on freely-available data. We’ve actually come so far that you can do all this on the $5 Raspberry Pi Zero. And as anything running Debian Linux will do, maybe you could just dust off that forgotten grey box under a plant in the corner.
I need a suitable model and COMPASS, the Bank of England’s main forecasting and policy simulation model, fits the bill. Although there is no public-domain code available there is a properly documented equation listing. It just needs typing in. My version of the published model can be found here. The supplied code is annotated to highlight some minor mistakes within the original working paper. It turns out that this model really is available to anyone with the necessary patience to type it in.
Next, I should validate the model. A feature of COMPASS is that it is linearised, implying that the impact of an additive change doesn’t depend on the underlying data. So if the steady state is known (and it is) the model can be analysed using impulse response analysis. I need to check that my code replicates the impulse responses in the documentation. Doing this needs software. Of the available alternatives Dynare is particularly attractive. Dynare facilitates all kinds of economic modelling, and is at once comprehensive, cutting edge and easy to get to grips with. It is a fantastic resource (PhD students love it), provided entirely for free by a dedicated and talented team of researchers centred on CEPREMAP whose community service puts us all to shame. It has become something of the lingua franca of macro-modelling. It runs under Octave, a free multi-platform computational package, and can both simulate a model and (together with suitable data) estimate the parameters. Both Octave and Dynare take a couple of minutes to install using apt-get. The supplied model code should then just run in Dynare (provided – geek alert – you have an internet connection and Octave is switched to use gnuplot rather than the default OpenGL, but this is easy to do).
Running the code produces lots of impulse responses, including Figure 1 below which shows the effects of a monetary policy hike on the policy rate, GDP, exports, CPI and investment imports. This should be compared with Chart B.10 in the COMPASS Appendix. If you take a look I hope you’ll agree things look pretty good so far.
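To see what impulse response analysis involves, here is a minimal sketch in Python. The three-variable system and its coefficients are entirely made up for illustration – this is not COMPASS – but the mechanics are the same: a linearised model is a difference equation in the state vector, and an impulse response just iterates it forward from a one-off unit shock.

```python
import numpy as np

# Toy linearised model (NOT COMPASS): x_t = A x_{t-1} + B e_t, with
# state x = [output gap, inflation, policy rate]. The coefficients are
# illustrative, chosen only so the system is stable and the responses
# look qualitatively sensible.
A = np.array([[0.70, -0.10, -0.20],
              [0.10,  0.60,  0.00],
              [0.05,  0.15,  0.50]])
B = np.array([0.0, 0.0, 1.0])   # a one-period monetary policy shock

def impulse_response(A, B, horizon=20):
    """Response of each state variable to a unit shock, periods 0..horizon-1."""
    x = B.copy()
    path = [x]
    for _ in range(horizon - 1):
        x = A @ x
        path.append(x)
    return np.array(path)

irf = impulse_response(A, B)
```

In period 1 the output gap response is `A[0, 2] = -0.2`: a contractionary rate shock pushes activity down, the qualitative pattern you should see in Figure 1 and Chart B.10.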
Figure 1: Impulse responses to a monetary policy shock
The next stage is estimation, which requires data. Getting suitable data isn’t always easy. Herbst and Schorfheide, who use US data, list the FRED codes for the models they analyse, making it a snap. Hopefully this marks a trend and more modellers will give online data sources. Here in the UK our statistical office has a brand new website making this easier than ever before. But even better (for this blog anyway) the Bank’s evaluation of the forecast performance of COMPASS is accompanied by a real-time data set. This can be used to estimate the model, using the exact vintage of data available for a range of dates, although the most recent data available is for 2012, quarter 3.
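The idea of a real-time data set is worth a small sketch. Each “vintage” is the published history as it stood on a given date, so estimating “as of” November 2009 means taking the latest vintage released on or before then. The dates and figures below are invented purely to show the mechanics:

```python
# Hypothetical real-time dataset: each vintage is quarterly GDP growth
# as published at that date. Earlier quarters get revised between
# vintages, which is exactly why the vintage matters. Numbers made up.
vintages = {
    "2009-08": [0.6, 0.4, -0.9, -1.6, -2.4, -0.7],
    "2009-11": [0.6, 0.4, -0.9, -1.8, -2.5, -0.6, 0.0],
    "2010-02": [0.6, 0.5, -0.9, -1.8, -2.3, -0.6, 0.1, 0.4],
}

def data_as_of(vintages, date):
    """Return the most recent vintage published on or before `date`."""
    available = [v for v in vintages if v <= date]
    return vintages[max(available)]

series = data_as_of(vintages, "2009-11")
```

Asking for data as of December 2009 would return the same November vintage, since nothing newer had yet been published.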
Estimating this kind of model used to be a nightmare. Bayesian methods are used for COMPASS, where expert priors are combined with data to produce posterior estimates. This isn’t the place for a Bayesian estimation tutorial, so see Herbst and Schorfheide if you are interested. After a couple of trial runs and some tweaks (noted in the code) it worked just fine. Table 1 shows selected parameter estimates (the column marked ‘Posterior mean’) and estimation intervals produced by Dynare, together with the priors, on the November 2009 vintage dataset (attached with the model code). As this is not the same data used in the original working paper (and there are some differences in the estimation procedure) we shouldn’t expect to get exactly the same answers, but a comparison with Table 3 in the working paper shows substantial qualitative agreement and plausible posterior estimates.
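For a flavour of what “combining priors with data” means in practice, here is a deliberately tiny example: a random-walk Metropolis sampler for the persistence of an AR(1) process. This is a toy, not COMPASS and not Dynare’s implementation, but it is the same family of algorithm Dynare uses to trace out the posterior.

```python
import numpy as np

# Toy Bayesian estimation: y_t = rho*y_{t-1} + e_t, with a Beta(2,2)
# prior on rho, sampled by random-walk Metropolis. All settings
# (seed, data length, step size) are illustrative.
rng = np.random.default_rng(0)
true_rho = 0.8
y = np.zeros(300)
for t in range(1, 300):
    y[t] = true_rho * y[t - 1] + rng.normal()

def log_post(rho):
    if not 0 < rho < 1:
        return -np.inf
    log_prior = np.log(rho) + np.log(1 - rho)   # Beta(2,2), up to a constant
    resid = y[1:] - rho * y[:-1]
    log_lik = -0.5 * np.sum(resid ** 2)         # unit-variance Gaussian errors
    return log_prior + log_lik

rho, draws = 0.5, []
for _ in range(5000):
    prop = rho + 0.05 * rng.normal()            # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(rho):
        rho = prop                              # accept
    draws.append(rho)

posterior_mean = np.mean(draws[1000:])          # drop burn-in draws
```

The posterior mean lands near the true persistence of 0.8; Dynare’s ‘Posterior mean’ column in Table 1 is the same object computed for dozens of parameters at once.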
Table 1: Selected parameter estimates
Figure 2: GDP growth forecast and uncertainty bands, November 2009 data set

Finally, forecasting. Actually Dynare will simultaneously estimate parameters and perform out-of-sample forecasts of specified variables. Of course, any ‘forecast’ using the data used for the estimates can only be of the past as the published data aren’t up to date. Figure 2 shows a forecast fanchart of quarterly GDP growth, using the same November 2009 data as for the estimates. Of course this isn’t the forecast that the Bank of England made then; for one thing it is completely free of judgement and a sensible forecaster would make use of lots of extra data and alternative forecast models, and for another COMPASS didn’t exist. But it does tell you what the model left to its own devices (and with the necessary time machine) could have produced.
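A fanchart’s bands are simple to construct once you have a model: simulate many future paths and take quantiles at each horizon. The sketch below does this for a stand-in AR(1) for quarterly GDP growth – the parameter values are made up, and this is only a loose analogue of how Dynare produces its forecast bands.

```python
import numpy as np

# Fan chart mechanics: simulate many future paths from an estimated
# model (here a stand-in AR(1), coefficients purely illustrative)
# and take quantiles at each horizon.
rng = np.random.default_rng(1)
rho, sigma, last_obs = 0.6, 0.5, -0.3   # made-up parameter values
horizons, n_paths = 8, 10_000

paths = np.empty((n_paths, horizons))
g = np.full(n_paths, last_obs)
for h in range(horizons):
    g = rho * g + sigma * rng.normal(size=n_paths)
    paths[:, h] = g

# Central projection plus symmetric bands, one row per quantile
bands = np.quantile(paths, [0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95], axis=0)
```

The median path (the middle row of `bands`) decays back towards the model’s steady state while the bands widen with the horizon, which is exactly the shape of the fan in Figure 2.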
And there you have it. Although practically quite a bit of expertise is still needed, the bottom line is how easy (and cheap) this is to do. All the simulations, parameter estimates and forecasts shown here really were done on a Raspberry Pi Zero, although Figure 2 was drawn on another computer. A Zero isn’t exactly quick, so you could splash out and get a Raspberry Pi 3 where all the numbers shown take under two hours to compute – which frankly I’d have bitten your arm off for in 2004. Even then it’s slow by conventional desktop standards but is, unequivocally, fab and crazy good fun.
Macro-modelling has undergone a complete revolution. I built COMPASS, but there are dozens of models you could try out if you look around, some already coded up in Dynare. Everything needed to build, simulate and estimate many of the models used in economics departments and central banks the world over is freely available. There are no barriers to entering the macro-modelling world left, not even the cost of the computer. You don’t have to run it on a Raspberry Pi but if you do it is remarkably straightforward, ridiculously entertaining and properly educational.
So go on, have a go – you know you want to.
Andrew Blake works in the Bank’s Centre for Central Banking Studies.
If you want to get in touch, please email us at email@example.com. You are also welcome to leave a comment below. Comments are moderated and will not appear until they have been approved.
Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.