Large Language Models (LLMs) exhibit diverse and stable risk preferences in economic decision tasks, yet the drivers of this variation are unclear. Studying 50 LLMs, we show that alignment tuning for harmlessness, helpfulness, and honesty systematically increases risk aversion: a ten percent increase in ethics scores reduces risk appetite by two to eight percent. This induced caution is robust to prompting and carries over into economic forecasts. Alignment therefore promotes safety but can dampen valuable risk taking, revealing a tradeoff that risks suboptimal economic outcomes. Our framework provides an adaptable and enduring benchmark for tracking model risk preferences and this emerging tradeoff.
This review article synthesizes the burgeoning literature at the intersection of (generative) artificial intelligence (AI) and financial economics. We organize our review around six key areas: (1) the emergent roles of generative AI as an analytic tool, an external shock to the economy, and an autonomous economic agent; (2) corporate finance, focusing on how firms respond to and benefit from AI; (3) asset pricing, examining how AI brings novel methodologies for return predictability, stochastic discount factor estimation, and investment; (4) household finance, investigating how AI promotes financial inclusion and improves financial services; (5) labor economics, analyzing AI’s impact on labor market dynamics; and (6) the risks and challenges associated with AI in financial markets. We conclude by identifying unanswered questions and discussing promising avenues for future research.