pretrained-model loading with errors #29
Comments
I just tried. Everything seems fine. Both on cpu and gpu inference.
Thanks. I can update torch from 1.8 to 1.11.0 and try again.
I would kindly like to ask if requirements.txt can be updated to match your current environment settings, please?
All problems are solved with PyTorch 1.11.0, many thanks. Please close this issue.
How can you create
In deep-text-recognition-benchmark/test.py Line 237 in ea0d077
It is triggered by
Thank you so much! My problem is solved |
Should probably update requirements.txt to torch==1.11.0
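Based on the versions discussed in this thread, a minimal sketch of the suggested requirements.txt pins might look like the fragment below. The torchvision pin is an assumption (0.12.0 is the release paired with torch 1.11.0); any other entries in the repo's existing requirements.txt would stay as they are:

```
torch==1.11.0
torchvision==0.12.0
```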
Hello,
I used a single-GPU environment with python==3.8, torch==1.8.1 and torchvision==0.9.1.
I followed the GitHub hint with the following command:
It returned an error with
It seems that the call model = torch.load(checkpoint) in infer.py returns an OrderedDict instead of a model object.
One way to solve the problem is:
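A minimal sketch of that workaround: a checkpoint saved with `torch.save(model.state_dict(), ...)` loads back as an `OrderedDict` of tensors, so the architecture has to be rebuilt first and the weights loaded into it with `load_state_dict`. The `TinyModel` class below is a hypothetical stand-in for the real ViTSTR model, whose constructor hyperparameters are exactly what this issue asks to be made public:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real ViTSTR architecture; the actual
# model needs the training-time hyperparameters to be reconstructed.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

model = TinyModel()
torch.save(model.state_dict(), "checkpoint.pth")

# torch.load on a state_dict checkpoint returns an OrderedDict of
# tensors, not an nn.Module -- this is the error seen in infer.py.
ckpt = torch.load("checkpoint.pth")
print(type(ckpt))

# Rebuild the architecture (requires knowing the hyperparameters),
# then load the saved weights into it.
restored = TinyModel()
restored.load_state_dict(ckpt)
restored.eval()
```

This only works when the model object is constructed with the same hyperparameters used at training time, which is why the thread asks for them to be published.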
But I do not know the hyperparameters used when vitstr_small_patch16_224.pth was trained, so it is very hard for me to initialize the model object with the correct hyperparameters.
I would like to ask: would it be possible to make the hyperparameters of the pretrained models public?
I also tried the .pt models.
They give the following error:
Is there any way to load the model correctly, please?
Many thanks