
Training one model on multiple GPUs simultaneously not working (possible deadlock?) #6

Closed
ahmedmagdiosman opened this issue Aug 1, 2017 · 12 comments


@ahmedmagdiosman

ahmedmagdiosman commented Aug 1, 2017

Hello again,

I was training MUTAN_att on VQA2+VG, and when I launched another process to train a second model (a modified MUTAN), both processes got stuck. I checked iotop and it seems disk reads stopped as well. I verified that the modified MUTAN works when trained alone.

I suspect that the dataloader process freaks out when another training process is launched. Is this possible? I assumed the new training process would spawn its own dataloader processes.

BONUS: I can't seem to kill all these processes gracefully; CTRL-C doesn't work. Only kill -9 PID works, but that seems to leave zombie processes behind!

Any help is appreciated!

@Cadene
Owner

Cadene commented Aug 1, 2017

It seems weird to me.

How many threads (workers) are you using? What is your batch size? Do you run one model per GPU? Are you loading the data from a dedicated SSD?

Did you try debugging with --workers 0 to make sure you get the trace from the data loading functions?

@ahmedmagdiosman
Author

ahmedmagdiosman commented Aug 2, 2017

I'm using the default parameters (--workers 2, batch_size 128). I am running the models on 2 GPUs with CUDA_VISIBLE_DEVICES=0,2, i.e. both models share the same 2 GPUs.
The data is split between a non-dedicated SSD (VQA) and a fast HDD (Visual Genome, via a soft link); total disk read is around 300 MB/s for one model.

CPU: 10-core Xeon
GPUs: 2x TITAN X

I tried running one model with --workers 0 while keeping the other at 2. Same problem. HOWEVER, with both running --workers 0, it seemed to work for 3 iterations and then froze again. Still no stack trace 😢

EDIT: Apparently there's a Python bug that causes CTRL-C not to be registered:
https://stackoverflow.com/questions/1408356/keyboard-interrupts-with-pythons-multiprocessing-pool
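
For reference, the usual workaround from that thread looks roughly like this (a generic Python sketch, not code from this repo): have the worker processes ignore SIGINT so CTRL-C only reaches the main process, and keep the main process out of uninterruptible waits.

# Generic sketch of the multiprocessing + CTRL-C workaround (illustrative only).
import signal
import multiprocessing


def init_worker():
    # Workers inherit SIGINT handling from the parent; ignore it here so
    # CTRL-C is delivered to (and handled by) the main process only.
    signal.signal(signal.SIGINT, signal.SIG_IGN)


def work(x):
    return x * x


if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2, initializer=init_worker)
    try:
        # .get() with a timeout keeps the main thread responsive to CTRL-C.
        print(pool.map_async(work, range(8)).get(timeout=60))
    except KeyboardInterrupt:
        pool.terminate()
    else:
        pool.close()
    pool.join()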

EDIT 2: Running only one model with --workers 0 terminates with the following stack trace:

  File "train.py", line 370, in <module>
    main()
  File "train.py", line 216, in main
    exp_logger, epoch, args.print_freq)
  File "/home/aosman/vqa/vqa.pytorch/vqa/lib/engine.py", line 12, in train
    for i, sample in enumerate(loader):
  File "/home/aosman/vqa/vqa.pytorch/vqa/lib/dataloader.py", line 166, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/aosman/vqa/vqa.pytorch/vqa/lib/dataloader.py", line 166, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/vqa.py", line 223, in __getitem__
    item = self.dataset_vgenome[index - len(self.dataset_vqa)]
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/vgenome.py", line 46, in __getitem__
    item_img = self.dataset_img.get_by_name(item_qa['image_name'])
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/features.py", line 66, in get_by_name
    return self[index]
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/features.py", line 37, in __getitem__
    item['visual'] = self.get_features(index)
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/features.py", line 42, in get_features
    return torch.Tensor(self.dataset_features[index])
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-s_7obrrg-build/h5py/_objects.c:2840)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-s_7obrrg-build/h5py/_objects.c:2798)
  File "/home/aosman/miniconda2/envs/vqa/lib/python3.6/site-packages/h5py/_hl/dataset.py", line 494, in __getitem__
    self.id.read(mspace, fspace, arr, mtype, dxpl=self._dxpl)
KeyboardInterrupt

@ili3p

ili3p commented Aug 2, 2017

Try running the models on separate GPUs. There is no point in running them on 2 GPUs if they share them.

@ahmedmagdiosman
Author

@ilija139 There is a point: there's a disk bottleneck, not a computational bottleneck.

@ili3p

ili3p commented Aug 2, 2017

Exactly. That means there is no point in running one model on two GPUs.

There are only two reasons to run a model on multiple GPUs:

  1. The model is too big to fit in GPU memory, i.e. you don't want to use too small a batch size, so you split the batch and run each part on a separate GPU. This is what torch/pytorch DataParallel does (see the sketch below).

  2. There is a GPU computational bottleneck, so you want to split the computation across multiple GPUs.

In your case, 1 doesn't apply, and 2 doesn't make sense if you are going to run two models on the same two GPUs; it's always better to run each on its own GPU. Running a model on multiple GPUs comes at a cost.
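
For reference, reason 1 is what torch.nn.DataParallel covers; here is a minimal generic sketch (not code from this repo, the layer sizes are made up):

# Generic sketch of splitting a batch across GPUs with nn.DataParallel.
import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Sequential(nn.Linear(2048, 1200), nn.Tanh(), nn.Linear(1200, 2000))
if torch.cuda.device_count() > 1:
    # Each forward pass splits the batch along dim 0, runs each chunk on its
    # own GPU, and gathers the outputs back on the first device.
    model = nn.DataParallel(model, device_ids=[0, 1])
model.cuda()

x = Variable(torch.randn(128, 2048)).cuda()  # one batch of pooled image features
out = model(x)                               # shape: (128, 2000)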

Btw, there is no data loading bottleneck if you store the image features as compressed numpy arrays; HDF5 is not made for this use case. See the MCB code for how to store the features as compressed numpy. For some reason the Caffe ResNet model outputs sparser image features that compress really well, so the VQA2 training set only takes 19 GB and the val set 9 GB. You can easily cache 28 GB in 64 GB of RAM, even if you use the Visual Genome dataset, since half of the VG images come from COCO, i.e. the same images as VQA2.
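
To illustrate that storage scheme (a rough sketch with a made-up file name, not the exact MCB layout): one compressed .npz file per image, written with np.savez_compressed and read back with np.load.

# Sketch: one compressed .npz file per image instead of one big HDF5 dataset.
import numpy as np

feat = np.random.rand(2048, 14, 14).astype(np.float32)  # stand-in for a ResNet feature map

# Write: zlib-compressed; sparse / low-entropy features shrink well.
np.savez_compressed('COCO_train2014_000000000009.npz', x=feat)

# Read: small per-image files are easy for the OS page cache (or tmpfs) to keep in RAM.
restored = np.load('COCO_train2014_000000000009.npz')['x']
assert restored.shape == (2048, 14, 14)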

And finally, the stack trace you posted above ends with KeyboardInterrupt, i.e. someone pressed CTRL-C.

@Cadene
Owner

Cadene commented Aug 2, 2017

@ilija139 I am curious about what you just said about HDF5. What is the use case for HDF5, in your view? Also, a pull request adding the ability to use the numpy files from the MCB code would be greatly appreciated :)

Thanks

@ili3p

ili3p commented Aug 3, 2017

@Cadene HDF5 = Hierarchical Data Format; where is the hierarchy in this case? Also, I'm not sure how well the OS caches data read from HDF5 in RAM. If you store the data as plain files, you can either move them to tmpfs or let the OS cache them in RAM.
Anyway, there is no point in discussing it further, since HDF5 needs almost 10 times more space, so it's clearly not a good choice here.

I don't have time to properly modify your code to use the numpy features, but the modifications are trivial and you can see how I did it here: https://github.com/ilija139/vqa.pytorch/tree/numpy_features

And see here for how to obtain the numpy features: https://github.com/akirafukui/vqa-mcb/blob/master/preprocess/extract_resnet.py

Note that, as I already said, for some reason the Caffe ResNet model outputs features that compress a lot better than the Torch ResNet features; the files come out about 2-3 times smaller.
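
To give an idea of how small the change is, here is a hypothetical sketch of the loading side (class and path names are illustrative, not the actual numpy_features branch):

# Hypothetical sketch: get_features() reads a per-image .npz file instead of
# indexing an h5py dataset.
import os
import numpy as np
import torch


class NumpyFeaturesDataset(object):

    def __init__(self, dir_features):
        self.dir_features = dir_features

    def get_features(self, image_name):
        path = os.path.join(self.dir_features, image_name + '.npz')
        return torch.from_numpy(np.load(path)['x'])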

@Cadene
Owner

Cadene commented Aug 3, 2017

@ilija139 It was not clear to me that caching the features in RAM/tmpfs would beat HDF5, which is supposed to be designed for efficient I/O on large volumes of data. It seems that I was wrong. Unfortunately, I don't have time to add this feature for now either.

Thanks for your answer.

@ahmedmagdiosman
Author

@ilija139 derp, this is what happens when I don't sleep 🤦‍♂️
You're right, I have no idea why this made sense to me.

Thanks for the idea about caching the data; however, I don't have enough RAM for that to work.

@Cadene I tried running training for 2 models, this time with 1 model per GPU, and it works! No issues so far. So I think there's some kind of semaphore/locking mechanism causing the two dataloaders to fight when the GPUs are shared. Honestly, I have no idea whether this is a limitation of the dataloader or of PyTorch.

@Cadene
Owner

Cadene commented Aug 3, 2017

I never ran into such a thing with the multi-GPU setup. In any case, it was more efficient for me to run one experiment per GPU.

I'll leave the issue open just in case someone else encounters the same problem, but I will edit the title.

Cadene changed the title from "Training multiple models simultaneously halts both (possible deadlock?)" to "Training one model on multiple GPUs simultaneously not working (possible deadlock?)" on Aug 3, 2017
@ili3p

ili3p commented Aug 4, 2017

@ahmedmagdiosman If the only thing you changed was running the models on separate GPUs, then this is definitely not a dataloader limitation. Also, changing the number of workers didn't solve the problem, so it's most likely PyTorch's fault, or more specifically CUDA's. It's a known problem when you run multiple processes on one GPU device; it just gets worse when you run two models on two shared GPUs at the same time...

And about the caching idea: using the compressed numpy features instead of HDF5 will still help, no matter how much RAM you have. 10 times less data to read means roughly 10 times faster I/O.

@ahmedmagdiosman
Author

@ilija139 It seems there's already an issue about this on the PyTorch tracker:
pytorch/pytorch#2245

As for the data, I actually already have the compressed numpy features, but I haven't gotten around to integrating them into this project yet. I noticed that I didn't hit this disk bottleneck with the numpy features from MCB.

Thank you both for your comments!

Cadene closed this as completed on Sep 3, 2017