[Help] Using Pretrained Model #16

Closed
haaks1998 opened this issue Jan 11, 2021 · 6 comments

@haaks1998

haaks1998 commented Jan 11, 2021

Thanks for sharing the model. I just want to test the pretrained model that you provided. Do I still need to download the image data (SynthText/MJSynth) if I'm using the pretrained model? And if not, how can I run the pretrained model on test datasets like CUTE80, ICDAR, etc.? I have already downloaded the datasets (cute80, icdar2013, icdar2015, iiit5k, svt, svtp) and their respective npz files. How can I run the evaluation on these datasets?

@Bartzi
Owner

Bartzi commented Jan 12, 2021

Hi,
First, you'll need to tell our code where your evaluation files are in the config file.
Then, you need to unpack the model, place the files in any directory (e.g. trained_models/pre_trained), and then you can run the script run_eval_on_all_datasets.py, as described in the Evaluation section of our README.

That should be it.
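
For reference, assuming the model archive is unpacked into trained_models/pre_trained as in the example above, the full invocation could look roughly like this (GPU id, batch size, and snapshot directory are placeholders you need to adapt):

 python run_eval_on_all_datasets.py config.cfg 0 -b 1 --snapshot-dir trained_models/pre_trained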

@haaks1998
Author

I don't have a CUDA-compatible GPU... I'm trying to run the model on a 2018 MacBook Pro... How can I run the pretrained model on the CPU? I'm getting this runtime error:

RuntimeError: CUDA environment is not correctly set up
(see https://github.com/chainer/chainer#installation).No module named 'cupy'

@Bartzi
Owner

Bartzi commented Jan 13, 2021

It should work on CPU. Could you provide the exact command you used and also the complete output of the script?

@haaks1998
Author

haaks1998 commented Jan 14, 2021

I ran the following command (the pretrained model files are in the model folder in the project's root directory):

 python run_eval_on_all_datasets.py config.cfg 0 -b 1 --snapshot-dir model --render

And the error I got is this:

Testing cute80
stripping non alpha
Traceback (most recent call last):
  File "evaluate.py", line 310, in <module>
    evaluator = Evaluator(args)
  File "evaluate.py", line 105, in __init__
    self.localizer.to_device(args.gpu)
  File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/device_resident.py", line 196, in to_device
    device = chainer.get_device(device)
  File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/backend.py", line 149, in get_device
    return _get_device_cupy_or_numpy(int_device_spec)
  File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/backend.py", line 188, in _get_device_cupy_or_numpy
    return cuda.GpuDevice.from_device_id(device_spec)
  File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/backends/cuda.py", line 228, in from_device_id
    check_cuda_available()
  File "/opt/anaconda3/envs/env/lib/python3.8/site-packages/chainer/backends/cuda.py", line 142, in check_cuda_available
    raise RuntimeError(msg)
RuntimeError: CUDA environment is not correctly set up
(see https://github.com/chainer/chainer#installation).No module named 'cupy'
Traceback (most recent call last):
  File "run_eval_on_all_datasets.py", line 114, in <module>
    subprocess.run([command, file] + process_args, check=True)
  File "/opt/anaconda3/envs/env/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['python', 'evaluate.py', '--gpu', '0', 'Eval_Datasets/CUTE80/gt.npz', 'model', 'LSTMTextLocalizer_', '--recognizer-name', 'TransformerTextRecognizer_', '--char-map', 'train_utils/char-map-bos.json', '--results-path', 'cute80_eval_results.json', '--dataset-name', 'cute80', '--strip-non-alpha', '--save-predictions', '--do-not-cut-bboxes', '--render-all-results', '-b', '1']' returned non-zero exit status 1.

@Bartzi
Owner

Bartzi commented Jan 14, 2021

Ah yes, I see.

The script expects you to supply a GPU id (the argument following the config file). You supplied 0, so the script thinks it should take the GPU with ID 0. If you want to use the CPU, just supply -1 and it should work. If not, I might need to adapt the code, which is quite simple, though.

Just exchange this line with the statement parser.add_argument("--gpu", default='cpu', help="GPU Id to use") and it should work.
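
For background, here is a minimal illustrative Chainer sketch (not code from this repository) of how the device id is resolved, which is why 0 fails without CuPy while -1 works on the CPU:

    import chainer

    # -1 is Chainer's conventional id for the CPU: it resolves to the NumPy backend.
    cpu_device = chainer.get_device(-1)
    print(cpu_device)  # <CpuDevice (numpy)>

    # A non-negative id resolves to a CUDA GPU and therefore needs CuPy;
    # without CuPy installed, the next line would raise the same
    # "CUDA environment is not correctly set up" RuntimeError as above.
    # gpu_device = chainer.get_device(0)

    # evaluate.py then moves the model to the selected device,
    # e.g. self.localizer.to_device(args.gpu) in Evaluator.__init__.

With the CPU, the command from above becomes:

 python run_eval_on_all_datasets.py config.cfg -1 -b 1 --snapshot-dir model --render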

@haaks1998
Author

It worked.... Thanks....
