All shocks are different: insights from sentiment and topic analysis using LLMs

Iulia Bucur and Ed Hill

Modern language models – think OpenAI’s GPTs, Google’s Gemini or DeepSeek – are powerful tools. But how can we use them in economic policymaking? Economic analysis often relies on decompositions to understand macroeconomic data and inform counterfactuals. But these decompositions are typically obtained from numerical data or macroeconomic models, and so may overlook nuanced insights embedded in unstructured text. We propose decomposing the metrics that Large Language Models (LLMs) can derive from text data to offer insights from large collections of documents in a highly interpretable format. This approach aims to bridge the gap between natural language processing (NLP) techniques and economic decision-making, offering a richer, more context-aware understanding of complex economic phenomena.
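To illustrate the kind of decomposition we have in mind, here is a minimal sketch (with hypothetical topic names and numbers, not results from our work): if aggregate document sentiment is a topic-share-weighted average of per-topic sentiment, a change in the aggregate can be attributed topic by topic. In practice the shares and sentiment scores would come from an LLM applied to a document collection.

```python
# Hypothetical sketch of a topic-level sentiment decomposition.
# Aggregate sentiment = sum over topics of (topic share * topic sentiment),
# so the change between two periods splits exactly into per-topic terms.

def aggregate_sentiment(topics):
    """Share-weighted average sentiment across topics."""
    return sum(t["share"] * t["sentiment"] for t in topics.values())

def decompose_change(before, after):
    """Per-topic contribution to the change in aggregate sentiment."""
    return {
        name: after[name]["share"] * after[name]["sentiment"]
        - before[name]["share"] * before[name]["sentiment"]
        for name in before
    }

# Illustrative numbers only: shares sum to one in each period.
before = {
    "inflation": {"share": 0.5, "sentiment": -0.2},
    "labour":    {"share": 0.5, "sentiment": 0.1},
}
after = {
    "inflation": {"share": 0.7, "sentiment": -0.4},
    "labour":    {"share": 0.3, "sentiment": 0.1},
}

contrib = decompose_change(before, after)
total = aggregate_sentiment(after) - aggregate_sentiment(before)
# The per-topic contributions sum exactly to the aggregate change.
```

The per-topic contributions add up to the total change by construction, which is what makes the decomposition interpretable: each shock can be traced to the topics that drove it.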

Continue reading “All shocks are different: insights from sentiment and topic analysis using LLMs”

Open letters: Laying bare linguistic patterns in PRA messages using machine learning

David Bholat and James Brookes

In a recent research paper, we show that the way supervisors write to banks and building societies (hereafter ‘banks’) has changed since the financial crisis. Supervisors now adopt a more directive, forward-looking, complex and formal style than they did before the crisis. We also show that their language and linguistic style are related to the nature of the bank: for instance, banks closest to failure receive letters containing more risk-related language. In this blog, we discuss the linguistic features that most sharply distinguish different types of letters, and the machine learning algorithm we used to arrive at our conclusions.
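As a rough illustration of what a linguistic feature looks like in this setting (a hypothetical sketch, not the code or word lists from the paper): one can score each letter on the density of risk-related and directive terms, and feed such features to a classifier.

```python
# Hypothetical sketch: turn a supervisory letter into a small feature
# vector of term densities. The word lists here are invented examples,
# not the features used in the paper.

import re

RISK_TERMS = {"risk", "exposure", "breach", "capital", "liquidity"}
DIRECTIVE_TERMS = {"must", "should", "require", "expect"}

def feature_vector(text):
    """Densities of risk-related and directive terms in a letter."""
    words = re.findall(r"[a-z]+", text.lower())
    n = max(len(words), 1)  # avoid division by zero on empty text
    return {
        "risk_density": sum(w in RISK_TERMS for w in words) / n,
        "directive_density": sum(w in DIRECTIVE_TERMS for w in words) / n,
    }

letter = "The firm must address its liquidity risk exposure."
fv = feature_vector(letter)
```

Feature vectors like this, computed over a corpus of letters, are the kind of input a machine learning algorithm can use to separate, say, pre-crisis from post-crisis letters, or letters to weaker banks from letters to stronger ones.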

Continue reading “Open letters: Laying bare linguistic patterns in PRA messages using machine learning”