Extended low-rank plus diagonal adaptation for deep and recurrent neural networks

ICASSP | Published by IEEE

Recently, the low-rank plus diagonal (LRPD) adaptation was proposed for speaker adaptation of deep neural network (DNN) models. The LRPD restructures the adaptation matrix as a superposition of a diagonal matrix and a product of two low-rank matrices. In this paper, we extend the LRPD adaptation into the subspace-based approach to further reduce the speaker-dependent (SD) footprint. We apply the extended LRPD (eLRPD) adaptation to the DNN and LSTM models, with emphasis placed on the applicability of the adaptation to large-scale speech recognition systems. To speed up adaptation at test time, we propose the bottleneck (BN) caching approach to eliminate redundant computations during multiple sweeps of the development data. Experimental results on the short message dictation (SMD) task show that the eLRPD adaptation can reduce the SD footprint by 82% for the SVD DNN and 96% for the LSTM-RNN over the linear adaptation, while maintaining comparable accuracy. The BN caching achieves up to a 3.5 times speedup in adaptation with no loss of recognition accuracy.
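As an illustrative sketch only (not code from the paper), the LRPD restructuring can be written in NumPy as a diagonal matrix plus the product of two low-rank factors; the layer size n and rank k below are hypothetical values chosen for the example:

```python
import numpy as np

# Hypothetical layer size and rank; not values from the paper.
n, k = 512, 16

# Full speaker-dependent linear adaptation: an n x n matrix (n**2 parameters).
W_linear = np.eye(n) + 0.01 * np.random.randn(n, n)

# LRPD restructuring: diagonal matrix plus a product of two low-rank matrices,
# reducing the SD footprint from n**2 to n + 2*n*k parameters.
d = np.ones(n)                      # diagonal part, initialized to identity
U = 0.01 * np.random.randn(n, k)    # n x k low-rank factor
V = 0.01 * np.random.randn(k, n)    # k x n low-rank factor

W_lrpd = np.diag(d) + U @ V

print("linear SD params:", n * n)           # 262144
print("LRPD SD params:  ", n + 2 * n * k)   # 16896, roughly a 94% reduction here
```

At this hypothetical rank the SD parameter count drops by roughly an order of magnitude; the 82% and 96% reductions reported in the abstract refer to the extended (eLRPD) variant evaluated in the paper, not to this toy setting.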