I noticed that when I ran the example for using ECAPA-TDNN's pretrained model after downloading the files, SpeechBrain reads the first two files from the filesystem and then starts requesting the remaining files from https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb
This behaviour may be seamless for most users, but if you are in an environment without internet access and just want to load the files from the filesystem, you cannot do so with the from_params method.
I figured out what was making SpeechBrain fall back to network requests, and I am opening a PR to fix this issue.
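To make the problem concrete, here is a minimal sketch of the local-first behaviour the issue asks for. The names (`fetch_file`, `HF_BASE`) are hypothetical and not SpeechBrain's actual API; the point is simply that a file already present on disk should never trigger a request to the hub:

```python
import pathlib
import urllib.request

# Illustrative only: SpeechBrain's real fetching logic lives elsewhere.
HF_BASE = "https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb/resolve/main"

def fetch_file(filename: str, local_dir: str) -> str:
    """Return a local path for `filename`, downloading only as a last resort."""
    local_path = pathlib.Path(local_dir) / filename
    if local_path.exists():
        # File already on disk: no network access needed.
        return str(local_path)
    # Fall back to the hub only when the file is genuinely missing locally.
    url = f"{HF_BASE}/{filename}"
    urllib.request.urlretrieve(url, str(local_path))
    return str(local_path)
```

With this check in place, an offline environment that already holds all the checkpoint files never touches the network.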
The hyperparams.yaml there explicitly specifies the location of the files, which overrides any default_source given to the Pretrainer. There is no need to specify the locations explicitly; it would be more flexible if the paths weren't specified. @mravanelli, can you remove those lines?
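For illustration, the kind of entry being discussed looks roughly like the following (a hedged sketch of a SpeechBrain `Pretrainer` section in hyperparams.yaml, not the exact file contents); when the `paths:` block pins a file to a hub location, it wins over whatever `default_source` the caller passes:

```yaml
pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
    loadables:
        embedding_model: !ref <embedding_model>
    # Removing this block lets default_source (e.g. a local directory) decide
    # where each loadable comes from, instead of hard-coding the hub path.
    paths:
        embedding_model: speechbrain/spkrec-ecapa-voxceleb/embedding_model.ckpt
```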
When I load a local file, I get: OSError: [Errno 62] Too many levels of symbolic links: 'pretrained_models/EncoderClassifier-3077685261458028297/embedding_model.ckpt'