Assessing the Robustness of Conversational Agents using Paraphrases

Files in This Item:
File: Artificial_Intelligence_Testing__2019_IEEE_AITest_ (12).pdf
Size: 201.74 kB
Format: Adobe PDF
Title: Assessing the Robustness of Conversational Agents using Paraphrases
Authors: Guichard, Jonathan; Ruane, Elayne; Smith, Ross; Bean, Dan; Ventresque, Anthony
Permanent link:
Date: 9-Apr-2019
Online since: 2019-04-24T13:18:29Z
Abstract: Assessing a conversational agent’s understanding capabilities is critical, as poor user interactions could seal the agent’s fate at the very beginning of its lifecycle, with users abandoning the system. In this paper we explore the use of paraphrases as a testing tool for conversational agents. Paraphrases, which are different ways of expressing the same intent, are generated based on known working input by performing lexical substitutions. As the expected outcome for this newly generated data is known, we can use it to assess the agent’s robustness to language variation and detect potential understanding weaknesses. As demonstrated by a case study, we obtain encouraging results: this approach can help anticipate potential understanding shortcomings, and those shortcomings can be addressed by the generated paraphrases.
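The testing idea the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synonym table and the `intent_of` classifier stub are invented stand-ins, and a real system would draw substitutions from a lexical resource and query an actual agent.

```python
# Sketch of paraphrase-based robustness testing: start from a known working
# input with a known intent, generate paraphrases via lexical substitution,
# and check that the agent still returns the expected intent.
# SYNONYMS and intent_of() are illustrative assumptions, not from the paper.

SYNONYMS = {
    "book": ["reserve", "schedule"],
    "flight": ["plane ticket"],
}

def paraphrases(utterance):
    """Yield variants of `utterance` with one word replaced by a synonym."""
    words = utterance.split()
    for i, word in enumerate(words):
        for alternative in SYNONYMS.get(word, []):
            yield " ".join(words[:i] + [alternative] + words[i + 1:])

def intent_of(utterance):
    # Stand-in for the conversational agent's intent classifier.
    if "flight" in utterance or "ticket" in utterance:
        return "book_flight"
    return "unknown"

def robustness_failures(utterance, expected_intent):
    """Return the generated paraphrases the agent misclassifies."""
    return [p for p in paraphrases(utterance)
            if intent_of(p) != expected_intent]

failures = robustness_failures("book a flight", "book_flight")
```

Because the expected intent of every paraphrase is known in advance, any entry in `failures` points to an understanding weakness, and the failing paraphrases themselves can be added to the agent's training data to address it.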
Funding Details: Science Foundation Ireland
Type of material: Conference Publication
Publisher: IEEE
Start page: 55
End page: 62
Copyright (published version): 2019 IEEE
DOI: 10.1109/AITest.2019.000-7
Language: en
Status of Item: Peer reviewed
Conference Details: IEEE International Conference on Artificial Intelligence Testing (AITest 2019), San Francisco, USA
Appears in Collections:Computer Science Research Collection

This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland licence. No item may be reproduced for commercial purposes. For other possible restrictions on use, please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply.