Surprised to kill: quantifying LLM uncertainty in morally-charged triadic dialogues
- Abstract:
- Multi-party dialogues on ethically and socially challenging (morally charged) topics pose a challenge for large language models (LLMs) trained on massive text corpora. Nevertheless, LLMs can illuminate features of interaction in such dialogues and serve as evaluation proxies. We propose using LLM surprisal as an indicator of points in dialogue that address or relate to the discussion of social norms, on a corpus of triadic text conversations from the Balloon Task, in which three participants collaboratively resolve a moral dilemma. We hypothesise that (1) turns featuring indirect reference and implicit moral justification will exhibit higher surprisal than turns with direct reference or explicit justification, and (2) including dialogue-act or reference-type annotations in the prompt will reduce model uncertainty. By presenting our planned experiments, we aim to inform the design of socially aware dialogue systems able to reliably interpret nuanced ethical discourse.
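The surprisal measure referred to above can be sketched minimally as follows, assuming per-token log-probabilities for a turn are already available from a causal LM (the function name and the toy numbers are illustrative, not from the paper):

```python
import math

def turn_surprisal(token_logprobs):
    """Mean surprisal of a turn in bits, i.e. the average negative
    log2-probability of its tokens under a causal language model."""
    return -sum(token_logprobs) / (len(token_logprobs) * math.log(2))

# Toy example: natural-log probabilities an LM might assign to the tokens
# of a direct-reference turn vs. an indirect-reference turn.
direct = [math.log(0.5), math.log(0.4), math.log(0.6)]
indirect = [math.log(0.1), math.log(0.05), math.log(0.2)]

# Hypothesis (1) predicts the indirect turn scores higher:
assert turn_surprisal(indirect) > turn_surprisal(direct)
```

In practice the log-probabilities would come from scoring each turn (with or without annotations in the prompt, as in hypothesis (2)) under the LLM being evaluated; averaging per token keeps turns of different lengths comparable.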
- Research areas:
- Year:
- 2025
- Type of Publication:
- In Proceedings
- Book title:
- Proceedings of the 29th Workshop on the Semantics and Pragmatics of Dialogue - Poster Abstract.