Hi, I use a pretrained QuartzNet checkpoint and fine-tune it on custom data. After some training, the model reaches around 27% WER. However, when I add KenLM-based beam search rescoring, the WER increases by approximately 5%. I tried both a KenLM model pretrained on LibriSpeech and one trained on my own data; the former was slightly worse than the latter, but both increased the WER. What could be the possible causes? I checked that the acoustic model and the language model use the same alphabet, etc., but could not find a reasonable explanation.