Cannot load tokenizer from_pretrained through http_proxy since 0.14.0 #1373
Comments
Indeed. Could you try with the latest release? Otherwise I'll have a look at what I can do!
Just try the version.
Hi @ArthurZucker,
Ah! Yeah, most probably because we now use the hf-hub API to load files, so if the proxy is an issue there, it will affect us.
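A possible workaround sketch (an assumption, not a fix confirmed in this thread): download `tokenizer.json` with `huggingface_hub`, whose `requests`-based client generally honors the `HTTP_PROXY`/`HTTPS_PROXY` environment variables, and then load the file locally so `tokenizers` itself never has to reach the Hub. The model id below is only an example.

```python
# Hedged workaround sketch: fetch the tokenizer file through a proxy-aware
# client, then load it from disk with tokenizers.
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# "bert-base-uncased" is only an example model id.
local_path = hf_hub_download(repo_id="bert-base-uncased", filename="tokenizer.json")
tokenizer = Tokenizer.from_file(local_path)
print(tokenizer.encode("proxy test").tokens)
```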
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Hi hf,
I encountered an issue where I couldn't load the tokenizer using from_pretrained via the http_proxy in version 0.14.0, while it worked successfully in version 0.13.3.
This caused the fast tokenizer initialization issue in TGI 1.1.0.
huggingface/text-generation-inference#1108
Here is the code snippet that I used for testing.
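(The reporter's original snippet is not reproduced in this excerpt; the block below is only a minimal sketch of that kind of test, assuming the proxy is configured through environment variables. The proxy URL and model id are placeholders.)

```python
import os

# Placeholder proxy address; in the real setup this points at the actual proxy.
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

from tokenizers import Tokenizer

# Succeeds on tokenizers 0.13.3; on 0.14.0 the download reportedly fails
# when the Hub can only be reached through the proxy.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.encode("Hello, world!").tokens)
```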
Error output
I suspect that this is related to the client refactoring here.
Thanks, and I appreciate any help!