Informing Design and Research Concerning Conversationally Explainable AI Systems by Collecting and Distilling Human Explanatory Dialogues

Abstract:
Research into conversationally explainable artificial intelligence (CXAI) aims to emulate the interactive and co-constructive nature of explanations. From the perspective of human-centredness, previous work has shown that AI users prefer conversational explanations over static ones. Various approaches for modelling and implementing CXAI solutions have also been proposed. However, when it comes to the concrete dialogue capabilities of such systems, previous approaches have not been properly grounded in analogous dialogue patterns in human–human interaction. The present study bridges this gap by experimentally collecting human dialogues revolving around AI predictions concerning personality estimation. By distilling the collected interactions into the kind of interactions that would occur if the explainer were a dialogue system, the study identifies dialogue strategies which might be important for CXAI to support. The study reveals that some of the observed strategies—explaining predictions with reference to general rules or patterns and signalling presupposition violations in questions raised by explainees—have received very limited attention in previous work on CXAI. Overall, the study contributes a methodology for empirically identifying CXAI desiderata in human dialogues, as well as concrete results with implications for future work.
Year:
2026
Type of Publication:
Article
Journal:
Information
Volume:
17
Number:
2
Pages:
123
DOI:
https://doi.org/10.3390/info17020123