Improving speech recognition on a mobile robot platform through the use of top-down visual cues
Date Issued
2003-08-09
Date Available
2013-08-14T12:11:10Z
Abstract
In many real-world environments, Automatic Speech Recognition (ASR) technologies fail to provide adequate performance for applications such as human-robot dialog. Despite substantial evidence that speech recognition in humans is performed in a top-down as well as a bottom-up manner, ASR systems typically fail to capitalize on this, relying instead on a purely statistical, bottom-up methodology. In this paper we advocate a knowledge-based approach to improving ASR in domains such as mobile robotics. A simple implementation is presented, which uses the visual recognition of objects in a robot's environment to increase the probability that words and sentences related to these objects will be recognized.
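The mechanism sketched in the abstract can be illustrated briefly. The Python fragment below is an assumption-laden sketch, not the paper's implementation: it rescores an N-best list from a recogniser by boosting hypotheses whose words relate to objects the vision system currently reports. The object-to-word dictionary, the boost factor, and the function names are all hypothetical.

```python
# Minimal sketch (not the paper's implementation): re-weighting ASR hypothesis
# scores using objects detected by the robot's vision system.
# VISUAL_BOOST, OBJECT_VOCABULARY and rescore_hypotheses are hypothetical names.

from math import log

VISUAL_BOOST = 2.0  # assumed multiplicative boost for visually grounded words

# Words associated with each object the vision system can recognise (assumed).
OBJECT_VOCABULARY = {
    "cup": {"cup", "mug", "drink", "coffee"},
    "door": {"door", "open", "close", "exit"},
}


def rescore_hypotheses(hypotheses, detected_objects):
    """Boost ASR hypotheses whose words relate to currently visible objects.

    hypotheses: list of (sentence, log_probability) pairs from the recogniser.
    detected_objects: set of object labels reported by the vision system.
    """
    boosted_words = set()
    for obj in detected_objects:
        boosted_words |= OBJECT_VOCABULARY.get(obj, set())

    rescored = []
    for sentence, log_prob in hypotheses:
        # Add a log-space bonus for every word grounded in a visible object.
        bonus = sum(log(VISUAL_BOOST)
                    for w in sentence.lower().split() if w in boosted_words)
        rescored.append((sentence, log_prob + bonus))

    # Return hypotheses ordered best-first under the combined score.
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # With a door in view, the door-related hypothesis overtakes the other.
    n_best = [("please open the drawer", -11.5), ("please open the door", -12.0)]
    print(rescore_hypotheses(n_best, {"door"}))
```

In this sketch the visual context acts as a simple language-model bias: acoustically competitive hypotheses that mention visible objects are promoted, which is the effect the abstract attributes to top-down visual cues.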
Type of Material
Conference Publication
Publisher
IJCAI Workshop
Language
English
Status of Item
Peer reviewed
Journal
Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), 9-15 August 2003, Acapulco, Mexico
Conference Details
The 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 9-15 August 2003
This item is made available under a Creative Commons License
File(s)
Name
P181-Ross,O'Donoghue,O'Hare-03.pdf
Size
12.14 KB
Format
Adobe PDF
Checksum (MD5)
ff5319e9f96ff48ade4a3390b1b47119