terminate called after throwing an instance of 'c10::Error' #3
Comments
Hello @geekzyn! You are not using a GPU, but you specify one. Either way, I would recommend getting at least a single, modest GPU on your VM and rerunning the code.
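For reference, a minimal sketch (my own code, not from this project) of how a script can fall back to CPU when CUDA is unavailable, instead of hitting a c10::Error when moving tensors to "cuda":

```python
# Minimal sketch (not from this repo): choose the device defensively so a
# machine without a working CUDA build falls back to CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Hypothetical model and tensor, just to illustrate moving data to the chosen device.
x = torch.randn(4, 3).to(device)
model = torch.nn.Linear(3, 2).to(device)
print(model(x).shape)
```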
I also encountered the same problem, even though my machine has a GPU. The main error message from the Trainer is: terminate called after throwing an instance of 'c10::Error'
I have 2 GPUs, but I have the same problem.
Hi @dtransposed. I followed the tutorial and encountered the same problem. I found the reason: `pip install -r requirements.txt` installed the PyTorch build for ROCm, but my GPU is an Intel HD 630, which is not an AMD card and is not supported by that build.

I first tried running with CUDA_VISIBLE_DEVICES="" (and also with =-1) to use the CPU only, but it seems the trainer part cannot run on CPU(?). I then tried uninstalling PyTorch in the virtualenv and installing torch==1.3.1+cpu, and I also tried modifying requirements.txt so that run-training.sh installs torch==1.3.1+cpu directly, but neither worked.

My question is: is there any way to run the training demo successfully on my device (Intel HD 630), using only the CPU or some other way?

Besides, I have another problem: the trainer thread creates a V-REP window that does not display anything. I suppose that is not normal, right? Thank you for your time, and I'm looking forward to your reply!
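In case it helps others hitting the same wheel mixup, here is a minimal diagnostic sketch (my own code, assuming a standard PyTorch install, not part of the repo) to check which backend the installed torch wheel was built for:

```python
# Minimal diagnostic sketch: report whether the installed PyTorch wheel was
# built for CUDA, ROCm/HIP, or CPU only. A ROCm wheel on an Intel iGPU (or any
# non-AMD machine) will not give you a usable GPU device.
import torch

print("torch version:      ", torch.__version__)
print("built with CUDA:    ", torch.version.cuda)                    # None for CPU/ROCm builds
print("built with ROCm/HIP:", getattr(torch.version, "hip", None))   # None for CPU/CUDA builds
print("cuda.is_available():", torch.cuda.is_available())
```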
We got this error as well, when using the torch DataLoader in combination with CUDA. I checked for OOM, and there was sufficient memory to perform the training. Setting num_workers=1 did not help, and training on the CPU did not help either. Eventually, I found out that the error was caused by corrupted data files I was trying to import with the DataLoader: the corrupted files triggered this error because they were incompatible with the torch.to_Tensor() conversion. This was not mentioned in the initial error message; I found it by setting num_workers to 0, after which the error message changed and included 'CORRUPTED file'. After removing the corrupted data files, everything ran smoothly with my initial num_workers=4.
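To make that debugging trick concrete, here is a minimal sketch (a hypothetical dataset of my own, not from this project): with num_workers=0 the exception from a bad sample is raised directly in the main process, so the traceback names the offending file instead of surfacing as an opaque worker-side error.

```python
# Minimal sketch (hypothetical dataset and file names, not from this repo).
import torch
from torch.utils.data import Dataset, DataLoader

class FileDataset(Dataset):
    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        try:
            return torch.load(path)  # fails loudly on a corrupted file
        except Exception as exc:
            raise RuntimeError(f"CORRUPTED file: {path}") from exc

paths = ["sample_000.pt", "sample_001.pt"]  # hypothetical file names
loader = DataLoader(FileDataset(paths), batch_size=1, num_workers=0)  # 0 = debug in main process
for batch in loader:
    print(batch.shape)
```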
I have a GPU on my VM and the extraction completed correctly, but training stalls at:
0%| | 0/15721 [00:00<?, ?it/s]
Can anyone please help me tackle this issue?
Hello, I have one GPU and the same problem. Have you solved it? Thanks.
Hey there, I can successfully run the project, but the training doesn't start: when sampling is done, the second CoppeliaSim instance closes, which makes me assume it's a parallelism issue. Great work, by the way!
Environment: Azure VM, Ubuntu 18.04.5 LTS (no GPU)