Computational Storytelling as an Embodied Robot Performance with Gesture and Spatial Metaphor

Files in This Item:
File: 8139351.pdf (15.44 MB, Adobe PDF)
Title: Computational Storytelling as an Embodied Robot Performance with Gesture and Spatial Metaphor
Authors: Wicke, Philipp
Permanent link: http://hdl.handle.net/10197/12891
Date: 2021
Online since: 2022-06-02T12:01:00Z
Abstract: A story comes to life when it is turned into a performance. Computational approaches to storytelling have primarily focused on stories as textual artifacts and not as performances. But stories can become much more when they are augmented with actors, dialogue, movements and gestures. Where artificial intelligence research has previously investigated these individual layers, this thesis presents an overarching framework of computational storytelling as an embodied robot performance with a focus on gesture and spatial metaphor. This work regards storytelling as a performative act, one that combines linguistic (spoken) and physical (embodied) actions to communicate concepts from performer to audience. The performances can feature multiple robotic agents that distribute the different storytelling tasks among themselves. The robots narrate the story, move across the stage, use appropriate gestures, interpret the actions of the story, present dialogue or give the audience an opportunity to interact with verbal or non-verbal cues, while an underlying system provides the story in an act of computational creativity. The performances are used to evaluate the links between concepts, words and embodied actions. In particular, the robots connect two movement types with the underlying plot: gestures to enhance theatricality, and spatial movements to mirror character relations in the plot. For both types, we present a comprehensive taxonomy of robotic movement. Moreover, we argue that image schemas play a profound role in the understanding of movement and that, based on this claim, the coherent use of schematic movement is beneficial for our performances and for researchers in the field of robotic performances. To test these claims, the thesis outlines the Scéalability framework for turning generated stories into performances, which are then evaluated in a series of studies. In particular, we show that audiences are sensitive to the coherent use of space, and appreciate the schematic use of spatial movements as much as gestures.
Type of material: Doctoral Thesis
Publisher: University College Dublin. School of Computer Science
Qualification Name: Ph.D.
Copyright (published version): © 2021 the Author
Keywords: Computational creativity; Automated storytelling; Story generation; Gesture
Language: en
Status of Item: Peer reviewed
This item is made available under a Creative Commons License: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/
Appears in Collections: Computer Science Theses


