Model "t5-base" on the Hub doesn't have a tokenizer #20
It turns out to be a network-blocking problem, and I solved it by using a proxy. Then I tried the command again and the program continued its execution until it failed with the following error: **Traceback (most recent call last): Am I using the wrong checkpoint file, or does it not match the code? Thanks
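The proxy workaround mentioned above amounts to pointing the HTTP(S) client at a proxy before re-running the script. A minimal sketch; the proxy address below is a hypothetical placeholder, substitute your own:

```shell
# Hypothetical proxy address -- replace with your actual proxy.
# Most HTTP clients (including the Rust cached_path crate used by
# the tokenizers library) honor these environment variables.
export HTTPS_PROXY=http://127.0.0.1:7890
export HTTP_PROXY=http://127.0.0.1:7890
```

After exporting these, re-run the example script in the same shell.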
Yes, I also encountered the same problem. Similar errors occur with other models as well.
In the file `__init__.py` in the vima folder, I changed line 13 from:
Thank you very much, I can now run it successfully using the method you provided. Before that, I had changed `__init__.py` in the vima folder to the following:

```python
from .policy import *

def create_policy_from_ckpt(ckpt_path, device):
```
This method can also run the demo, but it is obviously not very elegant. Thanks again.
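One generic way to tolerate this kind of checkpoint/model key mismatch (a sketch with a toy module, not VIMA's actual code) is to load the state dict with `strict=False`, which reports missing keys instead of raising:

```python
import torch
import torch.nn as nn

# Toy module standing in for the real policy network.
net = nn.Linear(2, 2)

# Simulate a checkpoint that lacks one of the model's keys,
# analogous to a checkpoint missing the causal-mask buffers.
ckpt = net.state_dict()
del ckpt["bias"]

# strict=False loads the overlapping keys and returns a report
# of what was missing or unexpected, rather than raising an error.
result = net.load_state_dict(ckpt, strict=False)
print(result.missing_keys)
```

The returned `missing_keys`/`unexpected_keys` lists make it easy to verify that only the expected buffers are absent before trusting the loaded model.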
You are welcome. Closing this; I hope it helps people who encounter the same problem.
Hi @lqiang2003cn and @liuqinglong110 , Thanks for your interest in our project and for bringing this to our attention. It turns out that HF Transformers recently changed the causal mask in the attention block to a non-persistent buffer, which means it is no longer included in the state dict. For compatibility I forced it to be included in the state dict in commit d165e53. Feel free to let me know if there are further questions. Thanks.
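The persistent/non-persistent distinction described above can be illustrated with a toy module (not the actual HF Transformers attention code): only buffers registered with the default `persistent=True` appear in `state_dict()`, so a checkpoint saved before the change carries mask keys that the new code no longer expects.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Toy module showing how buffer persistence affects the state dict."""
    def __init__(self):
        super().__init__()
        mask = torch.tril(torch.ones(4, 4))
        # persistent=True (the default): included in state_dict()
        self.register_buffer("causal_mask", mask)
        # persistent=False: excluded from state_dict(), so a checkpoint
        # that contains this key will mismatch a model that omits it
        self.register_buffer("causal_mask_np", mask, persistent=False)

keys = set(Block().state_dict().keys())
print(keys)
```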
I used Tokenizer.from_file('tokenizer.json') instead of Tokenizer.from_pretrained('t5-base') to solve the problem, where tokenizer.json is the tokenizer config of the "t5-base" model, downloaded from its Hugging Face repository.
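A minimal round-trip sketch of that workaround using the `tokenizers` library, with a toy word-level vocabulary standing in for the real `tokenizer.json` downloaded from the t5-base repository:

```python
import os
import tempfile

from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

# Build a toy tokenizer and save it to a local tokenizer.json,
# standing in for the file downloaded from the t5-base repo.
vocab = {"[UNK]": 0, "pick": 1, "the": 2, "block": 3}
tok = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tok.pre_tokenizer = Whitespace()

path = os.path.join(tempfile.mkdtemp(), "tokenizer.json")
tok.save(path)

# Load from the local file -- no network access needed,
# unlike Tokenizer.from_pretrained("t5-base").
tokenizer = Tokenizer.from_file(path)
ids = tokenizer.encode("pick the block").ids
print(ids)
```

Because `from_file` reads a local JSON file, it sidesteps the ETAG fetch against huggingface.co entirely.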
Hi, thanks for sharing this work.
I followed the instructions and built VIMA and VIMA-Bench successfully, but when I ran a command like this:

```shell
python3 scripts/example.py --ckpt=2M.ckpt --device=cuda --partition=novel_object_generalization --task=pick_in_order_then_restore
```

I got the following errors:
```
pybullet build time: May 20 2022 19:45:31
[INFO] 17 tasks loaded
[2023-07-26T12:45:49Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/t5-base/resolve/main/tokenizer.json failed with fatal error
Traceback (most recent call last):
  File "/home/lq/ws_vima/VIMA/scripts/example.py", line 74, in
    tokenizer = Tokenizer.from_pretrained("t5-base")
Exception: Model "t5-base" on the Hub doesn't have a tokenizer
```
Any ideas? Thanks in advance.