Hi @QRGomez, thanks for opening this interesting question. The key idea of pretraining lies in our `Pretrainer`:

```yaml
# The pretrainer allows a mapping between pretrained files and instances that
# are declared in the YAML. E.g. here, we will download the file lm.ckpt
# and it will be loaded into "lm", which points to the <lm_model> defined
# before.
pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
    collect_in: !ref <save_folder>
    loadables:
        lm: !ref <lm_model>
        tokenizer: !ref <tokenizer>
    paths:
        lm: !ref <pretrained_lm_tokenizer_path>/lm.ckpt
        tokenizer: !ref <pretrained_lm_tokenizer_path>/tokenizer.ckpt
```

In your Python code, you'll only need to call these lines:

```python
# We download the pretrained models (depending on
# the path given in the YAML file).
run_on_main(hparams["pretrainer"].collect_files)
hparams["pretrainer"].load_collected()
```

What you'll get is that your new wav2vec2 instance will be loaded with the actual weights of your pretrained model. Make sure that both the loadable and the loader share the same architecture, otherwise you'll be in trouble :) You should take a look at YAMLs that use the pretrainer so that you have more working examples. Hope it helped a bit. Best.
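To make the mapping concrete, here is a toy sketch in plain Python of what the `loadables`/`paths` pairing does. This is NOT SpeechBrain code: `ToyPretrainer`, its methods, and the instances below are illustrative stand-ins only. Each key names an instance declared in the YAML, and the matching path tells the pretrainer which checkpoint to fetch and load into it.

```python
import os

class ToyPretrainer:
    """Illustrative stand-in (not the real speechbrain Pretrainer):
    maps names to instances and to checkpoint paths, "collects" the
    files, then "loads" each one into its matching instance."""

    def __init__(self, collect_in, loadables, paths):
        self.collect_in = collect_in  # local folder where files are gathered
        self.loadables = loadables    # name -> instance to load into
        self.paths = paths            # name -> source checkpoint path
        self.collected = {}           # name -> local path after collection

    def collect_files(self):
        # The real Pretrainer may download or symlink checkpoints here;
        # this toy version just records where each file would end up.
        for name, src in self.paths.items():
            self.collected[name] = os.path.join(
                self.collect_in, os.path.basename(src)
            )

    def load_collected(self):
        # Load each collected checkpoint into its matching instance.
        for name, instance in self.loadables.items():
            instance["weights_from"] = self.collected[name]

# Hypothetical instances standing in for <lm_model> and <tokenizer>.
lm_model = {}
tokenizer = {}

pretrainer = ToyPretrainer(
    collect_in="save",
    loadables={"lm": lm_model, "tokenizer": tokenizer},
    paths={"lm": "pretrained/lm.ckpt",
           "tokenizer": "pretrained/tokenizer.ckpt"},
)
pretrainer.collect_files()
pretrainer.load_collected()
# lm_model and tokenizer now record which checkpoint they were
# "loaded" from (save/lm.ckpt and save/tokenizer.ckpt respectively).
```

The point of the indirection is that the YAML stays declarative: you only swap the `paths` (e.g. to your own fine-tuned checkpoints) while the `loadables` keep pointing at the instances already defined in the file.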
-
I've been trying to understand SpeechBrain's fine-tuning guide, and I still haven't understood how to use and modify it. I want to save the fine-tuned model after training, and load it as an ASR model later.
I also don't understand how to change the hyperparameters for the SpeechBrain model. Is there a video, tutorial, or a more detailed guide on how to perform fine-tuning? I really need help and I'm so confused. Thank you