Iulia Bucur and Ed Hill

Modern language models – think OpenAI’s GPTs, Google’s Gemini or DeepSeek – are powerful tools. But how can we use them in economic policymaking? Economic analysis often relies on decompositions to understand macroeconomic data and inform counterfactuals. These decompositions, however, are typically obtained from numerical data or macroeconomic models, and so may overlook nuanced insights embedded in unstructured text. We propose decomposing the metrics that Large Language Models (LLMs) can derive from text data, offering insights from large collections of documents in a highly interpretable format. This approach aims to bridge the gap between natural language processing (NLP) techniques and economic decision-making, offering a richer, more context-aware understanding of complex economic phenomena.
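To make the idea concrete, here is a minimal sketch of what such a decomposition might look like: an aggregate sentiment index, built from per-document scores, is split into per-topic contributions that sum exactly to the aggregate. The topic labels and sentiment scores below are illustrative stand-ins for LLM outputs, not real data or the authors' actual method.

```python
# Hypothetical per-document LLM outputs: a sentiment score in [-1, 1]
# and a topic label for each document in the collection.
docs = [
    {"topic": "inflation", "sentiment": -0.6},
    {"topic": "inflation", "sentiment": -0.2},
    {"topic": "labour market", "sentiment": 0.4},
    {"topic": "supply chains", "sentiment": -0.8},
]

n = len(docs)
# Aggregate index: mean sentiment across all documents.
aggregate = sum(d["sentiment"] for d in docs) / n

# Decomposition: each topic's contribution is the sum of its documents'
# sentiment divided by the total document count, so the contributions
# add up to the aggregate exactly.
contributions = {}
for d in docs:
    contributions[d["topic"]] = (
        contributions.get(d["topic"], 0.0) + d["sentiment"] / n
    )

print(f"aggregate: {aggregate:+.3f}")
for topic, c in sorted(contributions.items()):
    print(f"  {topic}: {c:+.3f}")
```

Because the contributions sum to the headline number, a move in the index can be attributed to specific topics, which is what makes a text-based decomposition interpretable in the same way as a conventional macroeconomic one.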
Continue reading “All shocks are different: insights from sentiment and topic analysis using LLMs”