Title: BoTest: a Framework to Test the Quality of Conversational Agents Using Divergent Input Examples
Authors: Elayne Ruane, Théo Faure, Ross Smith, Dan Bean, Julie Carson-Berndsen, Anthony Ventresque
Type: Conference Publication
Conference: ACM IUI (Intelligent User Interfaces), Tokyo, Japan, 07-11 March 2018
Dates: 2018-03-01; 2018-03-11; 2018-04-09
ISBN: 978-1-4503-5571-1/18/03
DOI: 10.1145/3180308.3180373
Handle: http://hdl.handle.net/10197/9304
Language: en
Keywords: Conversational agent testing; Conversational agent quality assessment; Chatbot
License: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/

Abstract: The quality of conversational agents is important because users have high expectations; consequently, poor interactions may lead users to abandon the system. In this paper, we propose a framework to test the quality of conversational agents. Our solution transforms working input that the conversational agent accurately recognises to generate divergent input examples that introduce complexity and stress the agent. Because the divergent inputs are based on known utterances for which we have the 'normal' outputs, we can assess how robust the conversational agent is to variations in the input. To demonstrate our framework, we built ChitChatBot, a simple conversational agent capable of making casual conversation.

Rights: © ACM, 2018. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in IUI'18 Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion (2018), http://doi.acm.org/10.1145/3180308.3180373
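The abstract describes generating divergent input examples by perturbing utterances the agent already handles correctly. A minimal sketch of that idea is below; the specific transformations (character transposition, word deletion, case noise) and the function name `divergent_inputs` are illustrative assumptions, not the transformations the paper actually defines.

```python
import random


def divergent_inputs(utterance, seed=0):
    """Generate perturbed variants of a known-good utterance.

    Illustrative transformations only; BoTest's actual divergent
    input generation may differ.
    """
    rng = random.Random(seed)  # fixed seed keeps test runs reproducible
    words = utterance.split()
    variants = []

    # 1. Character transposition in one word (simulates a typo).
    w = rng.randrange(len(words))
    word = words[w]
    if len(word) > 2:
        i = rng.randrange(len(word) - 1)
        typo = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        variants.append(" ".join(words[:w] + [typo] + words[w + 1:]))

    # 2. Drop one word (simulates terse or truncated input).
    if len(words) > 1:
        d = rng.randrange(len(words))
        variants.append(" ".join(words[:d] + words[d + 1:]))

    # 3. Case noise (simulates shouting / inconsistent casing).
    variants.append(utterance.upper())
    return variants
```

Each variant can then be sent to the agent, and the response compared against the 'normal' output recorded for the original utterance to measure robustness.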