AI Heap
Deep CLAS: Deep Contextual Listen, Attend and Spell

arXiv:2409.17603
Authors
  • Mengzhi Wang
  • Shifu Xiong
  • Genshun Wan
  • Hang Chen
  • Jianqing Gao
  • Lirong Dai
  • Affiliation
    University of Science and Technology
Contextual-LAS (CLAS) has been shown to be effective in improving Automatic Speech Recognition (ASR) of rare words. It relies on phrase-level contextual modeling and attention-based relevance scoring without explicit contextual constraints, which leads to insufficient use of contextual information. In this work, we propose deep CLAS to make better use of contextual information. We introduce a bias loss that forces the model to focus on contextual information. The query of the bias attention is also enriched to improve the accuracy of the bias attention score. To obtain fine-grained contextual information, we replace phrase-level encoding with character-level encoding and encode contextual information with a Conformer rather than an LSTM. Moreover, we directly use the bias attention score to correct the output probability distribution of the model. Experiments are conducted on the public AISHELL-1 and AISHELL-NER datasets. On AISHELL-1, compared to CLAS baselines, deep CLAS obtains a 65.78% relative increase in recall and a 53.49% relative increase in F1-score in the named entity recognition scene.
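To make the biasing idea concrete, below is a minimal PyTorch-style sketch of how a bias attention score computed over character-level context embeddings could be used to adjust the decoder's output distribution. The module name ContextBiasing, the tensor names, and the exact way the score is merged into the vocabulary distribution are illustrative assumptions for this sketch, not the authors' implementation.

# Minimal sketch (assumption): bias attention over character-level context
# embeddings, whose scores are scattered onto the corresponding vocabulary
# entries to correct the decoder's output distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextBiasing(nn.Module):
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)   # projects the (enriched) decoder query
        self.key_proj = nn.Linear(d_model, d_model)     # projects character-level context keys
        self.out_proj = nn.Linear(d_model, vocab_size)  # standard output layer
        self.bias_weight = nn.Parameter(torch.tensor(1.0))  # learned merge weight (assumption)

    def forward(self, dec_state, context_embs, context_char_ids):
        # dec_state:        (B, d_model)    decoder state at the current step
        # context_embs:     (B, N, d_model) character-level context encodings
        #                   (e.g. from a Conformer context encoder)
        # context_char_ids: (B, N)          vocabulary id of each context character
        q = self.query_proj(dec_state).unsqueeze(1)              # (B, 1, d)
        k = self.key_proj(context_embs)                          # (B, N, d)
        scores = torch.matmul(q, k.transpose(1, 2)).squeeze(1)   # (B, N)
        bias_attn = F.softmax(scores / k.size(-1) ** 0.5, dim=-1)

        # Base distribution from the decoder state.
        log_probs = F.log_softmax(self.out_proj(dec_state), dim=-1)  # (B, V)

        # Scatter the bias attention mass onto the vocabulary entries of the
        # context characters and merge it with the base distribution.
        vocab_bias = torch.zeros_like(log_probs)
        vocab_bias.scatter_add_(1, context_char_ids, bias_attn)
        corrected = log_probs + self.bias_weight * vocab_bias
        return F.log_softmax(corrected, dim=-1), bias_attn

The returned bias_attn could, in a similar spirit to the bias loss described above, be supervised against the positions of context characters that actually appear in the reference, although the precise loss formulation here is an assumption rather than the paper's definition.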