Moral Preferences of LLMs Under Directed Contextual Influence
P. Blandfort, T. Karayil, U. Pawar, R. Graham, A. McKenzie, D. Krasheninnikov
Abstract
Moral benchmarks for LLMs typically use context-free prompts, implicitly assuming stable preferences. In deployment, however, prompts routinely include contextual signals, such as user requests or cues about social norms, that may steer decisions. We study how directed contextual influences reshape decisions in trolley-problem-style moral triage settings and find that contextual influences often significantly shift decisions, that baseline preferences are a poor predictor of directional steerability, that influences can backfire, and that reasoning reduces average sensitivity while amplifying the effect of biased few-shot examples.
Approach
We introduce a pilot evaluation harness for directed contextual influence in trolley-problem-style moral triage: for each demographic factor, we apply matched, direction-flipped contextual influences that differ only in which group they favor, enabling systematic measurement of directional response.
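The matched, direction-flipped construction above can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual harness: the scenario wording, the influence template, and the function name `make_prompt_pair` are all assumptions introduced for illustration.

```python
# Illustrative sketch of matched, direction-flipped prompt construction.
# All templates and names here are hypothetical, not the paper's harness.

BASE_SCENARIO = (
    "A runaway trolley will hit one of two groups unless you divert it. "
    "Group A: a {group_a}. Group B: a {group_b}. Which group do you save?"
)

# A single influence template; the two variants differ ONLY in which
# group they favor, so any choice difference isolates directional effect.
INFLUENCE = " Note: a recent community poll strongly favored saving the {favored}."

def make_prompt_pair(group_a: str, group_b: str) -> dict:
    """Return a baseline prompt plus two direction-flipped influenced variants."""
    scenario = BASE_SCENARIO.format(group_a=group_a, group_b=group_b)
    return {
        "baseline": scenario,
        "favor_a": scenario + INFLUENCE.format(favored=group_a),
        "favor_b": scenario + INFLUENCE.format(favored=group_b),
    }

pair = make_prompt_pair("young doctor", "elderly teacher")
```

Because the two influenced prompts share every token except the favored group, comparing model choices across them measures directional response for that demographic factor.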
Key Findings
We find that:
- Contextual influences often significantly shift decisions, even when only superficially relevant
- Baseline preferences are a poor predictor of directional steerability, as models can appear baseline-neutral yet exhibit systematic steerability asymmetry under influence
- Influences can backfire: models may explicitly claim neutrality or discount the contextual cue, yet their choices still shift, sometimes in the opposite direction
- Reasoning reduces average sensitivity but amplifies the effect of biased few-shot examples
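The distinction between baseline preference and directional steerability can be made concrete with a simple metric. The sketch below is an assumed formulation for illustration, not the paper's exact definition: `p_*` denotes the rate at which a model saves Group A under each condition.

```python
# Illustrative (assumed) steerability metric: a model can be baseline-neutral
# (p_baseline near 0.5) yet much easier to push in one direction than the other.

def directional_shift(p_baseline: float, p_favor_a: float, p_favor_b: float):
    """Each argument is the rate of saving Group A under that condition."""
    shift_toward_a = p_favor_a - p_baseline   # effect of the pro-A influence
    shift_toward_b = p_baseline - p_favor_b   # effect of the pro-B influence
    # Nonzero asymmetry with p_baseline == 0.5 is exactly the case where
    # baseline neutrality masks systematic directional steerability.
    asymmetry = shift_toward_a - shift_toward_b
    return shift_toward_a, shift_toward_b, asymmetry

# Example: a baseline-neutral model that is far more steerable toward Group A.
a_shift, b_shift, asym = directional_shift(0.50, 0.80, 0.45)
```

In this hypothetical example the model looks neutral at baseline, yet the pro-A influence moves it six times as far as the pro-B influence, which a context-free benchmark would never detect.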
Conclusion
Our findings motivate extending moral evaluations with controlled, direction-flipped context manipulations to better characterize model behavior.