Improving speech recognition performance via phone-dependent VQ codebooks and adaptive language models in SPHINX-II

  • Mei-Yuh Hwang,
  • R. Rosenfeld,
  • E. Thayer,
  • Ravi Mosur,
  • Lin Lawrence Chase,
  • Robert Weide,
  • X. Huang,
  • Fil Alleva

1994 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-94)


This paper presents improvements in acoustic and language modeling for automatic speech recognition. Specifically, semi-continuous HMMs (SCHMMs) with phone-dependent VQ codebooks are presented and incorporated into the SPHINX-II speech recognition system. The phone-dependent VQ codebooks relax the density-tying constraint in SCHMMs to obtain more detailed models. A 6% error rate reduction was achieved on the speaker-independent 20,000-word Wall Street Journal (WSJ) task. Dynamic adaptation of the language model in the context of long documents is also explored. A maximum entropy framework is used to exploit long-distance trigrams and trigger effects. A 10%-15% word error rate reduction is reported on the same WSJ task using the adaptive language modeling technique.
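The core acoustic-modeling idea can be sketched concretely. In a conventional SCHMM, every state's output density is a mixture over one codebook of Gaussians shared by the whole system; the phone-dependent variant instead gives each phone its own codebook, so a state mixes densities tuned to its phone. The sketch below is illustrative only, under assumed data layouts (the phone labels, dictionary structure, and diagonal-covariance Gaussians are assumptions, not the paper's actual parameterization):

```python
import numpy as np

def diag_gaussian_logpdf(x, mean, var):
    """Log-density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def state_output_logprob(x, codebook, weights):
    """SCHMM state output log-probability: a weighted mixture whose
    Gaussians come from a (possibly phone-dependent) VQ codebook."""
    log_dens = np.array([diag_gaussian_logpdf(x, m, v) for m, v in codebook])
    a = np.log(weights) + log_dens
    m = a.max()
    return m + np.log(np.exp(a - m).sum())  # numerically stable log-sum-exp

# Hypothetical layout: one codebook per phone instead of one global codebook.
# Each codebook entry is a (mean, variance) pair; states of phone "AA" only
# mix densities from phone_codebooks["AA"], relaxing the global density tying.
phone_codebooks = {
    "AA": [(np.zeros(2), np.ones(2)), (np.full(2, 2.0), np.ones(2))],
    "IY": [(np.full(2, -1.0), np.ones(2)), (np.full(2, 1.0), np.ones(2))],
}
state_weights = np.array([0.7, 0.3])  # per-state mixture weights

x = np.zeros(2)  # a toy 2-D feature vector
lp = state_output_logprob(x, phone_codebooks["AA"], state_weights)
```

The trade-off this illustrates: per-phone codebooks give more detailed densities at the cost of fewer training frames per codebook, which is why the relaxation is done per phone rather than per state.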