
data_downloader.py is giving error. #3

Closed
himanshubeniwal opened this issue Sep 24, 2022 · 4 comments

Comments

@himanshubeniwal

While downloading the data with python data_downloader.py, I get the following error:

UnboundLocalError: local variable 'train_data' referenced before assignment

Also, when running python main.py directly, the following error is encountered:

RuntimeError: Expected one of cpu, cuda, mkldnn, opengl, opencl, ideep, hip, msnpu device type at start of device string: mps
@verazuo
Owner

verazuo commented Sep 24, 2022

Hi himanshubeniwal,

I fixed the bug in data_downloader.py; please git pull to get the update and re-run it.
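For anyone hitting the same UnboundLocalError elsewhere: it usually means the variable was only assigned inside a branch that did not run. A minimal hypothetical illustration, not the actual code in data_downloader.py (download_split is a made-up stand-in):

def download_split(name):
    # hypothetical stand-in for the real download logic
    return list(range(3))

def get_data(split):
    if split == "train":
        train_data = download_split("train")   # only assigned on this branch
    return train_data                          # UnboundLocalError for any other split

The fix is to assign train_data on every code path, or to fail with a clear message for unsupported splits.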

As for the RuntimeError in main.py, it is caused by the device setting.
Since I am using an M1 MacBook Pro, I set the device to mps.
Feel free to change it to cuda or cpu, depending on your hardware; see the sketch below.
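A minimal sketch of a portable device choice (pick_device is a hypothetical helper, not a function in this repo; the mps check assumes torch >= 1.12, and the getattr guard falls back to CUDA/CPU on older builds):

import torch

def pick_device(requested=None):
    # Honour an explicit request such as "cuda:1", "mps" or "cpu".
    if requested:
        return torch.device(requested)
    # Otherwise prefer CUDA, then Apple's MPS backend, then the CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    mps_backend = getattr(torch.backends, "mps", None)
    if mps_backend is not None and mps_backend.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()    # or pick_device("cuda") / pick_device("cpu") to force one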

If you encounter other errors, do not hesitate to let me know. :D

Best wishes,
Vera

@himanshubeniwal
Author

Thanks @verazuo!
I have pulled the new code and the data has been downloaded.
But when I run python main.py --device cuda:1 (using cuda:1 to select the CUDA device with index 1), it returns the following error:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

@verazuo
Owner

verazuo commented Sep 26, 2022

Hi,

This is because the model is on the GPU, but the data is on the CPU.
To solve it, you can send the data to the GPU.

batch_x = batch_x.to(device, non_blocking=True)   # move the input batch onto the same device as the model
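For context, a minimal self-contained sketch of the pattern (the names below are hypothetical stand-ins, not the actual variables in main.py):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(16, 2).to(device)                      # weights on the target device
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

batch_x = torch.randn(8, 16)       # starts on the CPU, like a batch from a DataLoader
batch_y = torch.randint(0, 2, (8,))

# Inputs and labels must live on the same device as the weights,
# otherwise the FloatTensor vs. cuda.FloatTensor mismatch above appears.
batch_x = batch_x.to(device, non_blocking=True)
batch_y = batch_y.to(device, non_blocking=True)

loss = criterion(model(batch_x), batch_y)
optimizer.zero_grad()
loss.backward()
optimizer.step()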

I pushed a fixed version and tested it on my GPU server.
Could you please try it again on your machine?

Best,
Vera

@himanshubeniwal
Author

himanshubeniwal commented Sep 28, 2022

This worked! Thanks! :)
Edit: I have updated the requirements to the latest versions for my setup, as there were dependency issues. The new versions work fine.
