  • Publication
    The Dimensions and Adaptation of Partner Models in Human-Machine Dialogue
    (University College Dublin. School of Information and Communication Studies, 2022)
    While speech interfaces have been widely adopted in recent years, people appear to use them in a limited fashion, rarely exploring their full functionality. Research suggests this is largely due to systems being perceived as having limited communicative abilities. Yet progress in developing a nuanced understanding of user perceptions in human-machine dialogue (HMD) interaction has been hampered by a lack of theory building and measure development. This gap is addressed here through a series of studies aimed at developing a theoretical concept known as partner modelling. A partner model is a cognitive representation of a dialogue partner that reflects their perceived communicative ability and social relevance. Research suggests partner models influence interaction behaviour in both human-machine and human-human dialogue (HHD). However, the concept is currently under-defined, and theoretical accounts contain a number of untested assumptions, including: that people adapt their partner models to reflect interaction experiences, that this adaptation requires substantial cognitive resources, and that adaptation may be reflected in a language phenomenon known as lexical alignment. The work presented here investigates the dimensionality and adaptation of partner models in the context of HMD interaction. Study 1 uses a mixed-methods approach to explore the breadth of terms people use to describe the communicative ability of speech interface technologies. In Study 2, these data were complemented by a review of relevant literature and used to develop a standardized subjective measure of partner models in the context of HMD. The scale highlights three salient dimensions of partner models in HMD: perceived communicative competence and dependability; perceived human-likeness in communication; and perceived communicative flexibility. Study 3 then validates the scale's factor structure, convergent and divergent validity, and test-retest reliability. Finally, Study 4 uses the measure to examine the impact of interaction experience (i.e., system errors) on partner models, lexical alignment, and cognitive load. Findings suggest that experiencing system errors in interaction can provoke adaptation of partner models, that this adaptation does not require substantial cognitive resources, and that it is not reflected in lexical alignment behaviour. The implications of these findings and the limitations of the work are discussed in the closing sections.
  • Publication
    Transparency in Language Generation: Levels of Automation
    Language models and conversational systems are growing increasingly advanced, creating outputs that may be mistaken for those of humans. Consumers may thus be misled by advertising, media reports, or vagueness regarding the role of automation in the production of language. We propose a taxonomy of language automation, based on the SAE levels of driving automation, to establish a shared set of terms for describing automated language. It is our hope that the proposed taxonomy can increase transparency in this rapidly advancing field.