Artificial Intelligence · Peer-Certified

AI as Decision-Maker: Ethics and Risk Preferences of Large Language Models (Updated)

February 3, 2026
Cite as: OxSci:390001

Abstract

Large Language Models (LLMs) exhibit diverse and stable risk preferences in economic decision tasks, yet the drivers of this variation are unclear. Studying 50 LLMs, we show that alignment tuning for harmlessness, helpfulness, and honesty systematically increases risk aversion: a ten percent increase in ethics scores reduces risk appetite by two to eight percent. This induced caution persists across prompt variations and carries over into economic forecasts. Alignment therefore promotes safety but can dampen valuable risk taking, revealing a tradeoff that risks suboptimal economic outcomes. Our framework provides an adaptable and enduring benchmark for tracking model risk preferences and this emerging tradeoff.

Authors

Shumiao Ouyang*, Hayong Yun, Xingjian Zheng
* Corresponding author
Submitted by: S O

Metrics

Views: 57
Downloads: 3

Version History

Current Version (Latest)
February 6, 2026

Changes in this version:

  • Title updated
  • Abstract revised
  • References updated
Original Submission
February 3, 2026

Publication Info

Submitted:
February 3, 2026
Last Updated:
February 6, 2026
Status:
Peer-Certified
Field:
Artificial Intelligence