Assessing the Robustness of Conversational Agents using Paraphrases
Date Issued
2019-04-09
Date Available
2019-04-24T13:18:29Z
Abstract
Assessing a conversational agent’s understanding capabilities is critical, as poor early interactions can seal the agent’s fate, with users abandoning the system at the very beginning of its lifecycle. In this paper we explore the use of paraphrases as a testing tool for conversational agents. Paraphrases, which are different ways of expressing the same intent, are generated from known working input by performing lexical substitutions. Because the expected outcome for this newly generated data is known, it can be used to assess the agent’s robustness to language variation and to detect potential understanding weaknesses. A case study yields encouraging results: the approach helps anticipate potential understanding shortcomings, and the generated paraphrases can be used to address them.
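The sketch below illustrates the kind of paraphrase-based robustness test the abstract describes: take an utterance whose intent is known, generate variants by lexical substitution, and check whether the agent still predicts the expected intent. It is a minimal illustration, not the paper’s implementation; WordNet synonym swapping is one possible substitution strategy, and agent_predict_intent is a hypothetical stand-in for the agent’s real NLU endpoint.

```python
# Minimal sketch of paraphrase-based robustness testing, assuming:
#  - WordNet synonym substitution as the lexical-substitution strategy
#    (the paper's exact generation method may differ), and
#  - agent_predict_intent() as a hypothetical stand-in for the agent's NLU.
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)


def lexical_paraphrases(utterance: str) -> list[str]:
    """Generate paraphrases by swapping one word at a time for a WordNet synonym."""
    tokens = utterance.split()
    paraphrases: set[str] = set()
    for i, token in enumerate(tokens):
        for synset in wordnet.synsets(token):
            for lemma in synset.lemmas():
                synonym = lemma.name().replace("_", " ")
                if synonym.lower() != token.lower():
                    paraphrases.add(" ".join(tokens[:i] + [synonym] + tokens[i + 1:]))
    return sorted(paraphrases)


def agent_predict_intent(utterance: str) -> str:
    """Toy stand-in classifier; a real test would call the agent's NLU service here."""
    return "book_flight" if "flight" in utterance.lower() else "unknown"


def robustness_report(seed_utterance: str, expected_intent: str) -> float:
    """Fraction of generated paraphrases the agent still maps to the expected intent."""
    paraphrases = lexical_paraphrases(seed_utterance)
    if not paraphrases:
        return 1.0  # nothing to test against
    hits = sum(agent_predict_intent(p) == expected_intent for p in paraphrases)
    return hits / len(paraphrases)


if __name__ == "__main__":
    score = robustness_report("book a cheap flight", "book_flight")
    print(f"{score:.0%} of paraphrases preserved the expected intent")
```

Because the expected intent is fixed in advance, any paraphrase the agent misclassifies is immediately flagged as an understanding weakness, and the failing paraphrases themselves can be added as training examples, mirroring the remediation step the abstract mentions.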
Sponsorship
Science Foundation Ireland
Other Sponsorship
Microsoft Corporation
Type of Material
Conference Publication
Publisher
IEEE
Copyright (Published Version)
2019 IEEE
Language
English
Status of Item
Peer reviewed
Conference Details
The 2019 IEEE International Conference on Artificial Intelligence Testing (AITest), San Francisco, United States of America, 4-9 April 2019
This item is made available under a Creative Commons License
File(s)
Name
Artificial_Intelligence_Testing__2019_IEEE_AITest_ (12).pdf
Size
201.74 KB
Format
Adobe PDF
Checksum (MD5)
7c57daf2eb1f525deb7c788602527e53