Speech recognition experiments with audiobooks

Bibliographic Details
Authors: Tóth László
Tarján Balázs
Sárosi Gellért
Mihajlik Péter
Corporate author: Conference on Hungarian Computational Linguistics (7.) (2010) (Szeged)
Document type: Article
Published: 2010
Series: Acta Cybernetica 19 No. 4
Keywords: Computer science, Linguistics - computer applications
Online Access: http://acta.bibl.u-szeged.hu/12889
Summary: Under real-life conditions several factors may be present that make the automatic recognition of speech difficult. The most obvious examples are background noise, peculiarities of the speaker's voice, sloppy articulation and strong emotional load. These all pose difficult problems for robust speech recognition, but it is not exactly clear how much each contributes to the difficulty of the task. In this paper we examine the abilities of our best recognition technologies under near-ideal conditions. The optimal conditions are simulated by working with the sound material of an audiobook, in which most of the disturbing factors mentioned above are absent. First, pure phone recognition experiments are performed, in which neural net-based technologies are tried alongside conventional Hidden Markov Models. We then move on to large vocabulary recognition, where morph-based language models are applied to improve the performance of the standard word-based technology. The tests clearly justify our assertion that audiobooks pose a much easier recognition task than real-life databases. In both types of tasks we report the lowest error rates we have achieved so far in Hungarian continuous speech recognition.
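The morph-based language modeling mentioned in the abstract can be illustrated with a minimal Python sketch. This is not the authors' implementation: the fixed suffix list and the segment() rule below are hypothetical toy stand-ins for a real, data-driven morph segmenter. The idea is that words of an agglutinative language such as Hungarian are split into stem and ending units, and the n-gram language model is trained over those units instead of whole words.

from collections import Counter
from itertools import chain

# Toy suffix inventory; a real system would learn segmentations
# statistically from a large corpus rather than use a fixed list.
SUFFIXES = ("akban", "ban", "ben", "nak", "nek")

def segment(word):
    # Split off one hypothetical Hungarian-like ending, if present,
    # keeping at least a three-character stem.
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return [word[:-len(suffix)], "+" + suffix]
    return [word]

corpus = [
    "a házban ülnek",
    "a házakban ülnek",
    "a háznak adja",
]

word_tokens = [w for line in corpus for w in line.split()]
morph_tokens = list(chain.from_iterable(segment(w) for w in word_tokens))

# The three inflected forms of "ház" collapse onto one shared stem;
# on a large corpus this makes the morph inventory grow much more
# slowly than the word vocabulary.
print("word types: ", sorted(set(word_tokens)))
print("morph types:", sorted(set(morph_tokens)))

# An n-gram model is then estimated over morph sequences; here we just
# count bigrams to show the changed event space.
print("morph bigrams:", Counter(zip(morph_tokens, morph_tokens[1:])).most_common(3))

The design point is that inflected variants share units, which shrinks the effective vocabulary and lets the recognizer compose word forms it has never seen, reducing out-of-vocabulary errors.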
Pages: 695-713
ISSN: 0324-721X