This paper investigates the impact of post-training alignment on language models' ability to predict human behavior in strategic games. Comparing 120 base-aligned model pairs on over 10,000 real human decisions, the authors find that base models are significantly better at predicting human choices in multi-round strategic games, while aligned models excel in one-shot textbook games and non-strategic lottery choices. This suggests that alignment induces a normative bias: it improves performance in settings where human behavior tracks normative solutions but hinders it where descriptive dynamics dominate.
Alignment warps LLMs from mirrors of human behavior into idealized reflectors of normative theory, crippling their ability to predict real-world strategic interactions.
Post-training alignment optimizes language models to match human preference signals, but this objective is not equivalent to modeling observed human behavior. We compare 120 base-aligned model pairs on more than 10,000 real human decisions in multi-round strategic games: bargaining, persuasion, negotiation, and repeated matrix games. In these settings, base models outperform their aligned counterparts at predicting human choices by nearly 10:1, a result that holds robustly across model families, prompt formulations, and game configurations. The pattern reverses, however, in settings where human behavior is more likely to follow normative predictions: aligned models dominate on one-shot textbook games (all 12 types tested), on non-strategic lottery choices, and even within the multi-round games themselves at round one, before interaction history develops. This boundary-condition pattern suggests that alignment induces a normative bias: it improves prediction when human behavior is relatively well captured by normative solutions, but hurts prediction in multi-round strategic settings, where behavior is shaped by descriptive dynamics such as reciprocity, retaliation, and history-dependent adaptation. These results reveal a fundamental trade-off between optimizing models for human use and using them as proxies for human behavior.
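The abstract does not specify how prediction quality is scored. As a minimal sketch, assuming each human decision is scored by the log-probability a model assigns to the observed action given the interaction history, the comparison for one base-aligned pair could look like the following. Everything here is hypothetical: the `Model` interface, the function names, and the toy stand-in models and data are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the comparison the abstract describes: score how
# well each model in a base/aligned pair predicts real human choices,
# decision by decision, in a multi-round game. All names and data are
# illustrative; this is not the paper's implementation.

import math
from typing import Callable, Sequence

# A "model" here is any function mapping a game history (the sequence of
# prior moves) to a probability distribution over the next action.
Model = Callable[[Sequence[str]], dict[str, float]]

def mean_log_likelihood(model: Model,
                        decisions: Sequence[tuple[Sequence[str], str]]) -> float:
    """Average log-probability the model assigns to the humans' actual choices.

    `decisions` is a list of (history, human_choice) pairs, one per recorded
    human decision. Higher (closer to 0) is better.
    """
    total = 0.0
    for history, choice in decisions:
        probs = model(history)
        # Floor avoids log(0) when the model puts ~no mass on the choice.
        total += math.log(max(probs.get(choice, 0.0), 1e-12))
    return total / len(decisions)

def compare_pair(base: Model, aligned: Model,
                 decisions: Sequence[tuple[Sequence[str], str]]) -> str:
    """Report which member of a base/aligned pair better predicts humans."""
    b = mean_log_likelihood(base, decisions)
    a = mean_log_likelihood(aligned, decisions)
    winner = "base" if b > a else "aligned"
    return f"base={b:.3f}  aligned={a:.3f}  -> {winner} predicts humans better"

# Toy illustration in a repeated cooperate/defect game.
if __name__ == "__main__":
    def history_sensitive(history):   # stand-in "base" model: leans on history
        if history and history[-1] == "defect":
            return {"cooperate": 0.2, "defect": 0.8}
        return {"cooperate": 0.8, "defect": 0.2}

    def equilibrium_prior(history):   # stand-in "aligned" model: normative prior
        return {"cooperate": 0.1, "defect": 0.9}

    # (history, actual human choice) pairs; these toy humans reciprocate.
    data = [((), "cooperate"),
            (("cooperate",), "cooperate"),
            (("defect",), "defect"),
            (("cooperate", "defect"), "defect")]
    print(compare_pair(history_sensitive, equilibrium_prior, data))
```

With real models, the stand-in functions would be replaced by per-action log-probabilities queried from each LLM, but the two-number summary per pair stays the same.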