[Help] Using Pretrained Model #16
Comments
Hi, that should be it. |
I don't have a CUDA-compatible GPU; I'm trying to run the model on a MacBook Pro (2018). How can I run the pretrained model on the CPU? I'm getting this runtime error.
|
It should work on the CPU. Could you provide the exact command you used, and also the complete output of the script? |
I ran the following command (the pretrained model files are in the model folder in the project's root directory):

python run_eval_on_all_datasets.py config.cfg 0 -b 1 --snapshot-dir model --render

And the error I got is this:

Testing cute80
stripping non alpha
Traceback (most recent call last):
File "evaluate.py", line 310, in <module>
evaluator = Evaluator(args)
File "evaluate.py", line 105, in __init__
self.localizer.to_device(args.gpu)
File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/device_resident.py", line 196, in to_device
device = chainer.get_device(device)
File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/backend.py", line 149, in get_device
return _get_device_cupy_or_numpy(int_device_spec)
File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/backend.py", line 188, in _get_device_cupy_or_numpy
return cuda.GpuDevice.from_device_id(device_spec)
File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/backends/cuda.py", line 228, in from_device_id
check_cuda_available()
File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/backends/cuda.py", line 142, in check_cuda_available
raise RuntimeError(msg)
RuntimeError: CUDA environment is not correctly set up
(see https://github.com/chainer/chainer#installation).
No module named 'cupy'
Traceback (most recent call last):
File "run_eval_on_all_datasets.py", line 114, in <module>
subprocess.run([command, file] + process_args, check=True)
File "/opt/anaconda3/envs/env/lib/python3.8/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['python', 'evaluate.py', '--gpu', '0', 'Eval_Datasets/CUTE80/gt.npz', 'model', 'LSTMTextLocalizer_', '--recognizer-name', 'TransformerTextRecognizer_', '--char-map', 'train_utils/char-map-bos.json', '--results-path', 'cute80_eval_results.json', '--dataset-name', 'cute80', '--strip-non-alpha', '--save-predictions', '--do-not-cut-bboxes', '--render-all-results', '-b', '1']' returned non-zero exit status 1. |
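For context on the outer traceback: run_eval_on_all_datasets.py launches evaluate.py as a child process with check=True, so any non-zero exit in the child surfaces as a CalledProcessError in the wrapper. A minimal sketch of that behavior (the run_checked helper is hypothetical, purely for illustration):

```python
import subprocess
import sys

def run_checked(cmd):
    """Run a command; return None on success, or the child's exit code.

    subprocess.run(..., check=True) raises CalledProcessError whenever
    the child exits non-zero -- this is why the evaluate.py failure
    propagates up into run_eval_on_all_datasets.py.
    """
    try:
        subprocess.run(cmd, check=True)
        return None
    except subprocess.CalledProcessError as e:
        return e.returncode

print(run_checked([sys.executable, "-c", "raise SystemExit(1)"]))  # 1
print(run_checked([sys.executable, "-c", "pass"]))                 # None
```

So the second traceback is not a separate bug: fixing the CUDA error in evaluate.py makes the wrapper succeed as well.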
Ah yes, I see. The script expects you to supply a GPU id (the argument following the config file). You supplied 0 there, which selects the first CUDA GPU. Exchange it for a CPU device id instead. |
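As a note for other readers: in Chainer, a non-negative integer device id selects a CUDA GPU (and therefore requires cupy and a working CUDA setup), while -1 selects the NumPy-backed CPU device. An illustrative sketch of that convention (pick_backend is a hypothetical helper mirroring the dispatch seen in the backend.py traceback, not Chainer's actual code):

```python
# Hypothetical helper mirroring Chainer's integer-device convention:
# a non-negative id means "CUDA GPU n" (needs cupy + CUDA), while a
# negative id falls back to the NumPy CPU backend -- so the fix for
# the MacBook case above is passing -1 instead of 0.
def pick_backend(device_id: int) -> str:
    if device_id >= 0:
        return "@cupy:{}".format(device_id)  # CUDA path; fails without cupy
    return "@numpy"                          # plain CPU execution

print(pick_backend(0))    # @cupy:0
print(pick_backend(-1))   # @numpy
```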
It worked.... Thanks.... |
Thanks for sharing the model. I just want to test the pretrained model that you provided. Do I still need to download the image data (SynthText/MjSynth) if I'm using the pretrained model? And if not, how can I run the pretrained model on testing datasets like CUTE80, ICDAR, etc.? I have already downloaded the datasets (cute80, icdar2013, icdar2015, iiit5k, svt, svtp) and their respective npz files. How can I run the evaluation on these datasets?