Improving speech recognition on a mobile robot platform through the use of top-down visual queues
Date Issued
2003-08-09
Date Available
2013-08-14T12:11:10Z
Abstract
In many real-world environments, Automatic Speech Recognition (ASR) technologies fail to provide adequate performance for applications such as human-robot dialog. Despite substantial evidence that speech recognition in humans is performed in a top-down as well as a bottom-up manner, ASR systems typically fail to capitalize on this, instead relying on a purely statistical, bottom-up methodology. In this paper we advocate the use of a knowledge-based approach to improving ASR in domains such as mobile robotics. A simple implementation is presented, which uses the visual recognition of objects in a robot's environment to increase the probability that words and sentences related to these objects will be recognized.
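The core idea described in the abstract — letting visually detected objects raise the prior probability of related words — can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual implementation: the object-to-word vocabulary, the multiplicative boost factor, and all names below are illustrative assumptions.

```python
# Hypothetical sketch: bias ASR word priors toward words related to
# objects the robot currently sees. OBJECT_VOCAB and the boost factor
# are assumptions for illustration, not taken from the paper.

OBJECT_VOCAB = {
    "cup": ["cup", "drink", "coffee", "mug"],
    "door": ["door", "open", "close", "exit"],
}

def boost_word_priors(priors, detected_objects, boost=2.0):
    """Return a copy of `priors` in which words related to visually
    detected objects are multiplied by `boost`, then renormalized so
    the distribution still sums to one."""
    related = {w for obj in detected_objects for w in OBJECT_VOCAB.get(obj, [])}
    scaled = {w: p * (boost if w in related else 1.0) for w, p in priors.items()}
    total = sum(scaled.values())
    return {w: p / total for w, p in scaled.items()}

# Example: seeing a cup raises the prior of "coffee" relative to "exit".
priors = {"coffee": 0.25, "exit": 0.25, "hello": 0.5}
biased = boost_word_priors(priors, ["cup"])
```

In a full system, the rescaled priors would feed into the recognizer's language model so that, for instance, an ambiguous utterance is more likely to be decoded as "coffee" than "exit" when a cup is in view.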
Type of Material
Conference Publication
Publisher
IJCAI Workshop
Language
English
Status of Item
Peer reviewed
Part of
Proceedings of 18th International Joint Conference on Artificial Intelligence (IJCAI-03), 9th-15th August, Acapulco, Mexico
Conference Details
The 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 9-15 August 2003
This item is made available under a Creative Commons License
Views
1386
Acquisition Date
Mar 28, 2024
Downloads
280
Last Week
2
Last Month
7