Replies: 5 comments
-
Hi, I am not sure that we have such functionality. We decided not to add it because perplexity is simply derived from the cross-entropy observed on the model.
-
Hi, yes, that's right, it is related to the cross-entropy. However, if I just have two pre-trained LM models, how could I easily compare them? I guess your point is that I could use the evaluate function used in speechbrain/recipes/LibriSpeech/LM/train.py (line 209, commit a9b373a) and compare their test losses, is that right? Thanks for your help. Pablo Peso.
-
Hi, that's right! Or you could develop a function to obtain the perplexity and we could add it to the toolkit as well.
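Since perplexity is just the exponential of the mean cross-entropy (per-token negative log-likelihood in nats), such a function can be sketched in a few lines. The function name and input format below are illustrative assumptions, not SpeechBrain API:

```python
import math

def perplexity_from_nll(nll_losses):
    """Perplexity = exp(mean token-level negative log-likelihood).

    Assumes `nll_losses` holds per-token cross-entropy values in
    natural-log base, e.g. the test losses returned by evaluate().
    """
    mean_nll = sum(nll_losses) / len(nll_losses)
    return math.exp(mean_nll)

# Sanity check: a model that predicts uniformly over a 4-word
# vocabulary has per-token NLL of ln(4), so its perplexity is ~4.
losses = [math.log(4)] * 10
print(perplexity_from_nll(losses))
```

Comparing two pre-trained LMs then reduces to computing this quantity on the same test set for each model: the lower perplexity wins.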
-
Computing the perplexity should be straightforward from the current loss defined in the config file. On the other hand, what would be the easiest way to load a checkpoint into the lm_brain object (created from the LM class, which inherits from sb.core.Brain)? I was checking the … Thanks
-
If it's to evaluate the model (i.e., call evaluate on it), then you need to use the standard training recipe + checkpointer. The CKPT.yaml is given in the GDrive folder linked from https://github.com/speechbrain/speechbrain/tree/develop/recipes/LibriSpeech/ASR/transformer. HuggingFace is only here for inference :-)
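For intuition, the recover-then-evaluate pattern that the recipe's checkpointer automates looks roughly like the stdlib sketch below. All names here (TinyLM, save_state, recover) are hypothetical illustrations, not SpeechBrain API:

```python
import json
import os
import tempfile

class TinyLM:
    """Stand-in for a model whose state we checkpoint and restore."""

    def __init__(self):
        self.weights = {"embedding_scale": 1.0}

    def save_state(self, path):
        # Persist trainable state under the save directory.
        with open(path, "w") as f:
            json.dump(self.weights, f)

    def recover(self, path):
        # Restore the persisted state before evaluation.
        with open(path) as f:
            self.weights = json.load(f)

save_dir = tempfile.mkdtemp()
ckpt = os.path.join(save_dir, "CKPT.json")

trained = TinyLM()
trained.weights["embedding_scale"] = 0.5  # pretend training happened
trained.save_state(ckpt)

fresh = TinyLM()
fresh.recover(ckpt)  # load the checkpoint before calling evaluate()
print(fresh.weights["embedding_scale"])  # prints 0.5
```

In the actual recipe, pointing the checkpointer at the save directory containing CKPT.yaml plays the role of `recover` here, and the Brain's evaluate step then runs with the restored parameters.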
-
Hi,
I was wondering: are there SpeechBrain built-in tools to compute the perplexity of an LM?
I can see there is a
ngram_perplexity()
function, but I am not sure how to use it, especially with the transformer-based LM. Thanks,
Pablo Peso.