running code without CUDA #26
Thank you for your interest in this work. It should be possible to run on CPU, although it might be quite slow. The changes you have made, i.e. removing net.cuda() and loading the checkpoint onto the CPU, would have been my suggestion, but they appear not to have worked for you. Can you please provide more detail on the batch size 0 issue? Does this mean you are getting back no data from the data loader?
Sure. The error is this one:
I don't think this is an issue with running on the CPU. I believe it is to do with the structure of the inference directory. Can you please show the directory structure you have?
Thanks, can you also share the command you enter at the command line?
python3 main.py --inference_img_dirpath=./adobe5k_dpe/ --checkpoint_filepath=./pretrained_models/adobe_dpe/curl_validpsnr_23.073045286204017_validloss_0.0701291635632515_testpsnr_23.584083321292365_testloss_0.061363041400909424_epoch_510_model.pt
The difference from your suggestion is the ./pretrained_models/adobe_dpe part, because the models are not included inside pretrained_models when you clone the repository.
Got it, so I suspect I know the problem here. The code will look in curl_example_test_input for the images. It does this because, at line 298 of data.py, it looks for a directory with "input" in the name - see here. The images listed in images_inference.txt are not those in curl_example_test_input. To get this to work, you should add to images_inference.txt the image file names that are in curl_example_test_input, with the extensions removed.
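If it helps, here is a small sketch (not code from the repository; the exact directory layout is an assumption based on this thread) that rebuilds images_inference.txt from whatever files are present in curl_example_test_input, with the extensions stripped as described above:

```python
# Illustrative helper, not part of the repository. It lists the files in
# curl_example_test_input and writes their names, without extensions, into
# images_inference.txt. The paths below are assumptions based on this thread.
import os

input_dir = "./adobe5k_dpe/curl_example_test_input"
list_file = "./adobe5k_dpe/images_inference.txt"

image_ids = sorted(
    os.path.splitext(name)[0]       # "a0001.tif" -> "a0001"
    for name in os.listdir(input_dir)
    if not name.startswith(".")     # ignore hidden files such as .DS_Store
)

with open(list_file, "w") as f:
    f.write("\n".join(image_ids) + "\n")

print("Wrote %d image ids to %s" % (len(image_ids), list_file))
```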
Good, I will try your suggestions, thank you so much. Another question, please: from your point of view, is it possible to take your model and apply it in a C++ implementation? Do you know of any issues that could arise when moving from Python to C++ for your implementation?
Change this to img_id = file.split(".")[0]
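A quick illustration of what that change does (the surrounding loop is hypothetical, not the actual code in data.py):

```python
# Hypothetical loop to illustrate the suggested fix; not the actual data.py code.
import os

for file in os.listdir("./adobe5k_dpe/curl_example_test_input"):
    img_id = file.split(".")[0]   # suggested fix: "a0001.tif" -> "a0001"
    print(file, "->", img_id)
```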
What are the contents of img_filepath? Try printing it out to debug: print(img_filepath)
input_img_filepath has no root directory or path attached to it, so the loading function is unable to find the image.
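In other words, the bare filename needs to be joined with its directory before it can be opened. A minimal illustration (the variable names and paths here are assumptions from this thread, not the repository's actual code):

```python
# Illustration only: join the image directory with the bare filename so the
# loader can find the file on disk.
import os

img_dirpath = "./adobe5k_dpe/curl_example_test_input"
input_img_filepath = "a0001.tif"   # bare name, no directory attached

full_path = os.path.join(img_dirpath, input_img_filepath)
print(full_path)   # ./adobe5k_dpe/curl_example_test_input/a0001.tif
```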
Yes, my suggested approach is to export the model to ONNX, which will allow you to run inference on it from a C++ application. More detail here.
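For reference, a minimal sketch of such an export (the input size and the stand-in model are assumptions for illustration, not taken from this repository); the resulting .onnx file can then be loaded from C++ with, for example, ONNX Runtime:

```python
# Sketch of exporting a trained PyTorch model to ONNX so it can be run from C++.
import torch

# Stand-in for the loaded CURL network; in practice `net` would be the model
# loaded from the checkpoint as in main.py. The placeholder keeps this runnable.
net = torch.nn.Sequential(torch.nn.Conv2d(3, 3, 3, padding=1))
net.eval()

dummy_input = torch.randn(1, 3, 512, 512)   # one RGB image; the size is an assumption

torch.onnx.export(
    net,
    dummy_input,
    "curl_model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch", 2: "height", 3: "width"},
                  "output": {0: "batch", 2: "height", 3: "width"}},
)
```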
Hi,
I will need a little more debug info from you to help. What are the contents of this?
Closing due to inactivity |
Hi,
I am trying to run the test of your great project with the default images, but I need to run it on the CPU. Is that possible, and if so, how?
For now I removed net.cuda() and changed the checkpoint loading to checkpoint = torch.load(checkpoint_filepath, map_location='cpu'), but the resulting batch size is 0. What could the problem be?
Thank you so much for your support!
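For reference, a minimal sketch of the CPU-only loading change described in this issue (the checkpoint path is the one used above; the model construction lines are assumptions and are therefore left commented out):

```python
# Sketch of loading the pretrained checkpoint on CPU. Only torch.load with
# map_location="cpu" is taken from this thread; the model class name and
# state-dict key below are assumptions, so they are shown commented out.
import torch

checkpoint_filepath = (
    "./pretrained_models/adobe_dpe/"
    "curl_validpsnr_23.073045286204017_validloss_0.0701291635632515"
    "_testpsnr_23.584083321292365_testloss_0.061363041400909424_epoch_510_model.pt"
)

checkpoint = torch.load(checkpoint_filepath, map_location="cpu")  # force tensors onto CPU

# net = CURLNet()                                       # model construction as in main.py (assumed name)
# net.load_state_dict(checkpoint["model_state_dict"])   # state-dict key is an assumption
# net.eval()                                            # note: no net.cuda() call when running on CPU
```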