Theirs not to reason why: Dialogical reasoning for conversational artificial agents.
- Conversational artificial intelligence (AI) systems are notoriously bad at conversing with humans in a natural way. One major reason is that interacting with others frequently involves making common-sense inferences that link context, background knowledge and beliefs to utterances in the dialogue. These inferences are often enthymematic, that is, the stated premises do not necessarily entail the conclusion. We present an approach to dialogue modelling in which enthymemes play a key role in the interpretation and production of conversational moves, and suggest how these may be formalised. This work is intended to provide a basis for building useful context-aware dialogue agents.
- Type of Publication: In Proceedings
- Book title: Do Robots Talk? Philosophical Implications of Describing Human-Machine Communication (DoRoTa)