Error when I try to do the inference #15
Comments
Hi @JoseMoFi,
I use WSL 2; could that be the problem? And thank you for the help!
I'm not familiar with WSL 2; all experiments are conducted on Ubuntu. Can WSL 2 detect the GPU device?
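As a quick sanity check on that question, PyTorch itself can report whether the environment (WSL 2 or otherwise) exposes a CUDA device. This is a generic PyTorch snippet, not something from this repo:

```python
import torch

# Report whether PyTorch can see a CUDA device in this environment.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device name:", torch.cuda.get_device_name(0))
```

If this prints `CUDA available: False` under WSL 2 but `True` on the Ubuntu server, the problem is the WSL 2 GPU setup rather than the model code.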
Yes, WSL 2 can detect the GPU device. However, I think the problem is WSL 2 itself, because I had a similar error in another repo while training, and when I tested again on Windows 10 it worked. I'll do more tests, but it is very probable that the problem is WSL 2 or some config.
@JoseMoFi I suggest you go straight to installing
OK, I am now sure that the problem was WSL 2. However, I don't know whether it's because my CUDA config is bad or because WSL can't work with the graphics card. I also tried other code that doesn't work in WSL but does work on a server with Ubuntu. So I can say that my problem is caused by WSL.
Hello, I'm replicating this model, but when I execute the command to run the inference, an unknown error appears, and I don't know why I get this error.
My setup is:
The complete error is:
And I have changed the config file:
-batch_size: 2
+batch_size: 1
-test_batch_size: 8
-num_worker: 10
-device: 0,1,2
+test_batch_size: 1
+num_worker: 1
+device: 0
Also, my torch version is:
1.8.1+cu111
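A mismatch between the PyTorch CUDA build and the installed driver is a common cause of inference errors like this, so printing both versions helps narrow it down. This is generic PyTorch, not repo-specific:

```python
import torch

# Print the installed PyTorch build and the CUDA toolkit it was compiled against.
print("torch:", torch.__version__)        # expected here: 1.8.1+cu111
print("CUDA build:", torch.version.cuda)  # expected here: 11.1
```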
Thank you for the help!
UPDATE
Also, I found this error:
with the following config:
-batch_size: 2
+batch_size: 1
random_seed: 0
-test_batch_size: 8
-num_worker: 10
-device: 0,1,2
+test_batch_size: 2
+num_worker: 2
+device: 0