When breakdowns occur during a human-chatbot conversation, the lack of transparency and the "black-box" nature of task-oriented chatbots can make it difficult for end users to understand what went wrong and why. Inspired by recent HCI research on explainable AI, we explored the design space of explainable chatbot interfaces through ChatrEx. Following an iterative design and prototyping approach, we designed two novel in-application chatbot interfaces, ChatrEx-VINC and ChatrEx-VST, that provide visual, example-based, step-by-step explanations of a chatbot's underlying operation during a breakdown: ChatrEx-VINC presents these explanations in-context within the chat window, whereas ChatrEx-VST presents them as a visual tour overlaid on the application interface. A formative study with 11 participants elicited informal user feedback that helped us iterate on our design ideas at each design and ideation phase, and we implemented our final designs as web-based interactive chatbots for complex spreadsheet tasks. We then conducted an observational study with 14 participants to compare our designs with current state-of-the-art chatbot interfaces and assess their strengths and weaknesses. We found that the visual explanations in both ChatrEx-VINC and ChatrEx-VST enhanced users' understanding of the reasons for a conversational breakdown and improved users' perceptions of usefulness, transparency, and trust. We identify several opportunities for future HCI research to leverage explainable chatbot interfaces and better support human-chatbot interaction.
Copyright is held by the author(s).
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Thesis advisor: Chilana, Parmit