Training one model on multiple GPUs simultaneously not working (possible deadlock?) #6
Comments
It seems weird to me. How many threads (workers) are you using? What is your batch size? Do you run one model per GPU? Are you loading data from a dedicated SSD? Did you try to debug with
I'm using the default parameters (CPU: 10-core Xeon). I tried running one model with
EDIT: Apparently there's a Python bug that causes CTRL-C not to be registered.
EDIT2: Running only one model with
Try running the models on separate GPUs. There is no point in running them on 2 GPUs if they share them.
@ilija139 There is a point. There's a disk bottleneck, not a computational bottleneck.
Exactly. That means there is no point in running one model on two GPUs. There are only two reasons to run a model on multiple GPUs.
In your case, 1. doesn't apply, and 2. doesn't make sense if you are going to run two models on the same two GPUs; it's always better to run each model on its own separate GPU. Running a model on multiple GPUs comes at a cost.
By the way, there is no data-loading bottleneck. You should store the image features as compressed numpy arrays; HDF5 is not made for this use case. See the MCB code for how to store the features as compressed numpy. For some reason the Caffe ResNet model outputs sparser image features that can be compressed really well, so the VQA2 training set only takes 19 GB and the val set 9 GB. You can cache 28 GB in 64 GB of RAM easily, even if you use the Visual Genome dataset, since half of the VG images are from COCO, i.e. the same as VQA2.
And finally, the stack trace you give above says KeyboardInterrupt, i.e. someone pressed Ctrl+C.
@ilija139 I am curious about what you've just said about HDF5. What is the use case of HDF5 from your point of view? Also, a pull request adding support for the numpy files from the MCB code would be greatly appreciated :) Thanks
@Cadene HDF5 = Hierarchical Data Format; where is the hierarchy in this case? Also, I'm not sure how well the OS caches data read from HDF5 in RAM. If you use plain files to store the data, you can either move them to tmpfs or let the OS cache them in RAM.
I don't have time to properly modify your code to use the numpy features, but the modifications are trivial and you can see how I did it here: https://github.com/ilija139/vqa.pytorch/tree/numpy_features
And for how to obtain the numpy features: https://github.com/akirafukui/vqa-mcb/blob/master/preprocess/extract_resnet.py
Note that, as I already said, for some reason the Caffe ResNet model outputs features that can be compressed a lot better than Torch ResNet features. The difference is about 2-3 times smaller files.
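As a rough illustration of the per-file layout the linked branch works with, here is a sketch (the class name, directory layout, and `"x"` array key are assumptions, not this repo's API). Any object exposing `__len__`/`__getitem__` like this can back a `torch.utils.data.DataLoader`:

```python
import os
import tempfile
import numpy as np

class NpzFeatureStore:
    """Sketch: one compressed .npz feature file per image.

    Names here are hypothetical; the shape of the idea is what matters:
    each read touches one small independent file, so the OS can cache it
    in RAM, and moving feat_dir to tmpfs makes every read a memory copy.
    """

    def __init__(self, feat_dir, image_names):
        self.feat_dir = feat_dir
        self.image_names = image_names

    def __len__(self):
        return len(self.image_names)

    def __getitem__(self, index):
        path = os.path.join(self.feat_dir, self.image_names[index] + ".npz")
        return np.load(path)["x"]

# Tiny usage example with a throwaway directory and dummy features.
feat_dir = tempfile.mkdtemp()
np.savez_compressed(os.path.join(feat_dir, "img0.npz"),
                    x=np.zeros((2, 3), np.float32))
store = NpzFeatureStore(feat_dir, ["img0"])
```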
@ilija139 It was not clear to me that caching the features in RAM/tmpfs was better than using HDF5, which is known to be designed for efficient I/O at high volume. It seems that I was wrong. Unfortunately, I don't have time to add this feature for now either. Thanks for your answer.
@ilija139 derp, this is what happens when I don't sleep 🤦♂️ Thanks for the idea about caching the data; however, I don't have enough RAM for that to work.
@Cadene I tried running training for 2 models, this time with 1 model per GPU, and it works! No issues so far. So I think there's some kind of semaphore/locking mechanism causing both dataloaders to fight when GPUs are shared. Honestly, I have no idea whether this is the dataloader's limitation or PyTorch's.
I never experienced such a thing with the multi-GPU setup. However, it was more efficient for me to run one experiment per GPU. I'll leave the issue open just in case someone else encounters the same problem, but I will edit the title.
@ahmedmagdiosman If the only thing you changed was to run the models on separate GPUs, then this is definitely not the dataloader's limitation. Also, changing the number of workers didn't solve the problem, so it's definitely PyTorch's, or more specifically CUDA's. It's a known problem when you run multiple processes on one GPU device, and it just gets worse when you run two models on two GPUs at the same time...
And about the caching idea: it will still help to use the compressed numpy features instead of HDF5, no matter the amount of RAM you have. 10 times less data to read means 10 times faster I/O.
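One way to enforce the one-model-per-GPU setup discussed here is to mask the device list per process before CUDA initializes, so each training run sees exactly one device as cuda:0 and can never contend for the other. A sketch with a hypothetical `train.py` script and config names:

```python
import os
import sys

# Sketch: build one launch command per GPU. Setting CUDA_VISIBLE_DEVICES in
# the child's environment (before CUDA initializes in that process) makes
# each process see exactly one device. The script name and --path_opt
# configs below are hypothetical, not this repo's actual CLI.
commands = []
for gpu_id, config in [(0, "mutan_att.yaml"), (1, "mutan_modified.yaml")]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    commands.append(([sys.executable, "train.py", "--path_opt", config], env))

# Each pair could then be launched with:
#   subprocess.Popen(cmd, env=env)
# for (cmd, env) in commands.
```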
@ilija139 it seems there's already an issue on the PyTorch page.
As for the data, I actually already have the compressed numpy features, but I haven't gotten around to integrating them with this project yet. I noticed that I didn't have this disk bottleneck with the numpy features from MCB.
Thank you both for your comments!
Hello again,
I was training MUTAN_att on VQA2+VG and I tried to run another process to train a second model (modified MUTAN), and both processes now seem to be stuck. I checked iotop and it seems disk reads have also stopped. I verified that the modified MUTAN works when trained alone. I suspect that the dataloader process freaks out when another training process is launched. Is this possible? I assumed that the new training process would generate its own dataloader processes.
BONUS: I can't seem to kill all these processes in a graceful manner; CTRL-C doesn't work. Only kill -9 PID works, but that seems to create zombie processes! Any help is appreciated!
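For the kill problem, one common workaround (a general sketch, not specific to this repo) is to launch training in its own process group, so a single signal reaches the DataLoader worker processes too instead of orphaning them the way `kill -9` on the parent alone does:

```python
import os
import signal
import subprocess
import sys

# Sketch (POSIX only): start a stand-in "training" process as the leader of
# a new process group via os.setsid, so any DataLoader workers it forks
# would share that group id.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    preexec_fn=os.setsid,  # child becomes leader of a new process group
)

# One SIGTERM to the whole group tears down parent and workers together,
# letting them run their cleanup handlers instead of being SIGKILLed.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()
```

The same idea works interactively: find the group id with `ps -o pgid` and run `kill -- -PGID`.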