  • Publication
    Assessing the Robustness of Conversational Agents using Paraphrases
    Assessing a conversational agent’s understanding capabilities is critical, as poor user interactions could seal the agent’s fate at the very beginning of its lifecycle, with users abandoning the system. In this paper we explore the use of paraphrases as a testing tool for conversational agents. Paraphrases, which are different ways of expressing the same intent, are generated from known working input by performing lexical substitutions. As the expected outcome for this newly generated data is known, we can use it to assess the agent’s robustness to language variation and detect potential understanding weaknesses. As demonstrated by a case study, we obtain encouraging results: this approach can help anticipate potential understanding shortcomings, and these shortcomings can be addressed using the generated paraphrases.
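    The abstract's core idea, generating paraphrases from known working input via lexical substitution, can be sketched as follows. The synonym table and function names are illustrative assumptions; the paper does not specify its substitution source.

    ```python
    # Illustrative synonym table; a real system might draw substitutions
    # from a thesaurus or embedding-based lexical resource.
    SYNONYMS = {
        "book": ["reserve", "schedule"],
        "cheap": ["inexpensive", "low-cost"],
        "flight": ["plane ticket"],
    }

    def generate_paraphrases(utterance):
        """Yield paraphrases of a known working utterance by substituting
        one word at a time with a listed synonym. Because the original
        utterance's intent is known, each paraphrase inherits an expected
        outcome that the agent's response can be checked against."""
        words = utterance.split()
        for i, word in enumerate(words):
            for synonym in SYNONYMS.get(word.lower(), []):
                yield " ".join(words[:i] + [synonym] + words[i + 1:])

    paraphrases = list(generate_paraphrases("book a cheap flight"))
    ```

    Each paraphrase would then be sent to the agent, and any that fail to trigger the expected intent point to an understanding weakness.
    
    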
  • Publication
    Towards a Gamified System to Improve Translation for Online Meetings
    Translation of online meetings (e.g., Skype conversations) is a useful feature that can help users understand each other. However, translations can sometimes be inaccurate, or they can miss the context of the discussion. This is, for instance, the case in corporate environments where some words are used with special meanings that can be obscure to other people. This paper presents the prototype of a gamified application that aims at improving translations of and for online meetings. In our system, users play to earn points and rewards by proposing and voting for the most accurate translations in context. Our system uses various techniques to split conversations into semantically coherent segments and label them with relevant keyphrases. This is how we extract a description of the context of a sentence, and we use this context to: (i) weight users' expertise and their translations (e.g., an AI specialist is more likely than a lay person to give a correct translation for a sentence about deep learning); and (ii) map the various translations of words and phrases to their context, so that we can use them during online meetings.
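    The segmentation step described above, splitting a conversation into semantically coherent segments, could be approximated with a lexical-overlap heuristic in the spirit of TextTiling. This is a minimal sketch under that assumption; the paper's actual segmentation techniques are not detailed here, and the threshold is arbitrary.

    ```python
    def segment_by_overlap(sentences, threshold=0.2):
        """Split a conversation into segments by starting a new segment
        whenever the Jaccard overlap between consecutive sentences drops
        below a threshold (a simplified TextTiling-style heuristic)."""
        segments = [[sentences[0]]]
        prev = set(sentences[0].lower().split())
        for sent in sentences[1:]:
            words = set(sent.lower().split())
            overlap = len(prev & words) / max(len(prev | words), 1)
            if overlap < threshold:
                segments.append([])  # topic shift: open a new segment
            segments[-1].append(sent)
            prev = words
        return segments

    conversation = [
        "the deep learning model trains fast",
        "the deep learning model overfits",
        "lunch menu today has pasta",
    ]
    segments = segment_by_overlap(conversation)
    ```

    Each resulting segment could then be labelled with keyphrases and used as the context against which user expertise and candidate translations are weighted.
    
    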
  • Publication
    BoTest: a Framework to Test the Quality of Conversational Agents Using Divergent Input Examples
    The quality of conversational agents is important, as users have high expectations. Consequently, poor interactions may lead to the user abandoning the system. In this paper, we propose a framework to test the quality of conversational agents. Our solution transforms working input that the conversational agent accurately recognises to generate divergent input examples that introduce complexity and stress the agent. As the divergent inputs are based on known utterances for which we have the 'normal' outputs, we can assess how robust the conversational agent is to variations in the input. To demonstrate our framework, we built ChitChatBot, a simple conversational agent capable of making casual conversation.
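    One way to transform working input into divergent examples, as the abstract describes, is character-level noise such as adjacent-character swaps (simulating typos). This sketch shows only that one transformation and is an assumption about the framework's internals, not a description of its full set of transformations.

    ```python
    import random

    def divergent_inputs(utterance, n=3, seed=0):
        """Generate divergent variants of a working utterance by swapping
        one pair of adjacent characters per variant (a typo-style stress
        transformation). The agent's response to each variant can then be
        compared against the known 'normal' output for the original."""
        rng = random.Random(seed)  # seeded for reproducible test runs
        variants = []
        for _ in range(n):
            chars = list(utterance)
            i = rng.randrange(len(chars) - 1)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            variants.append("".join(chars))
        return variants

    variants = divergent_inputs("book a flight to dublin")
    ```

    Because each variant keeps the original's known expected output, a mismatch in the agent's response directly flags a robustness gap.
    
    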