All shocks are different: insights from sentiment and topic analysis using LLMs

Iulia Bucur and Ed Hill

Modern language models – think OpenAI’s GPTs, Google’s Gemini or DeepSeek – are powerful tools. But how can we use them in economic policymaking? Economic analysis often relies on decompositions to understand macroeconomic data and inform counterfactuals. But these decompositions are typically obtained from numerical data or macroeconomic models, and so may overlook nuanced insights embedded in unstructured text. We propose decomposing the metrics that Large Language Models (LLMs) can derive from text data to offer insights from large collections of documents in a highly interpretable format. This approach aims to bridge the gap between natural language processing (NLP) techniques and economic decision-making, offering a richer, more context-aware understanding of complex economic phenomena.
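The decomposition idea could be sketched in a stylised form. Suppose an upstream LLM step (not shown, and purely hypothetical here) has tagged each document with a topic label and a sentiment score; the overall average sentiment then splits exactly into per-topic contributions, each the topic's document share times its mean sentiment. The data, topic names and function are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import defaultdict

def topic_sentiment_decomposition(docs):
    """Decompose average sentiment across documents into per-topic
    contributions: overall average = sum over topics of
    (share of docs in topic) * (mean sentiment within topic).
    Each doc is a (topic, sentiment) pair; labels and scores are
    assumed to come from a hypothetical upstream LLM step."""
    by_topic = defaultdict(list)
    for topic, sentiment in docs:
        by_topic[topic].append(sentiment)
    n = len(docs)
    return {
        topic: (len(scores) / n) * (sum(scores) / len(scores))
        for topic, scores in by_topic.items()
    }

# Illustrative data only: sentiment in [-1, 1], topics invented.
docs = [("inflation", -0.6), ("inflation", -0.2),
        ("labour", 0.4), ("energy", -0.8)]
contrib = topic_sentiment_decomposition(docs)
overall = sum(contrib.values())  # equals the plain average sentiment
```

The interpretability claim rests on the identity holding exactly: the per-topic terms sum to the headline sentiment figure, so a move in the aggregate can be attributed line by line to topics.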

Continue reading “All shocks are different: insights from sentiment and topic analysis using LLMs”

Do large and small banks need different prudential rules?

Austen Saunders and Matthew Willison

Banks come in different shapes and sizes. Do prudential regulations that work well for big banks work as well for small ones? To help us find out, we measure the effectiveness of some key regulatory ratios as predictors of bank failure. We do so using ‘receiver operating characteristic’ – or ‘ROC’ – analysis of simple threshold rules. When we do this, we find that we can use the ratios we test to make better predictions for large banks than for small ones. This provides evidence that an efficient set of regulations for large banks might not be as efficient for small ones.
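The ROC analysis of a simple threshold rule can be sketched as follows. A bank is "predicted to fail" when its regulatory ratio falls below a threshold; sweeping the threshold traces out (false positive rate, true positive rate) pairs, and the area under that curve summarises predictive power (1.0 = perfect ranking, 0.5 = no better than chance). The ratios, failure flags and thresholds below are invented for illustration, not the authors' data.

```python
def roc_points(ratios, failed, thresholds):
    """For each threshold, flag banks with ratio below it as predicted
    failures and return (false positive rate, true positive rate) pairs."""
    n_fail = sum(failed)
    n_ok = len(failed) - n_fail
    points = []
    for thr in thresholds:
        tp = sum(1 for r, f in zip(ratios, failed) if r < thr and f)
        fp = sum(1 for r, f in zip(ratios, failed) if r < thr and not f)
        points.append((fp / n_ok, tp / n_fail))
    return points

def auc(points):
    """Area under the ROC curve by the trapezoid rule."""
    pts = sorted(points)
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

# Hypothetical capital ratios (%) and failure outcomes.
ratios = [3, 4, 6, 7, 9, 12]
failed = [True, True, False, True, False, False]
pts = roc_points(ratios, failed, [0, 3.5, 4.5, 6.5, 7.5, 13])
score = auc(pts)
```

Comparing this AUC across size buckets is one way to operationalise the post's question: if the same ratio yields a higher area for large banks than for small ones, the threshold rule discriminates failures better for the former.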

Continue reading “Do large and small banks need different prudential rules?”