
multi-GPUs - only using vram, not processing #3

Closed
optfx opened this issue Aug 9, 2021 · 1 comment
optfx commented Aug 9, 2021

Hi Tengfei Wang, such amazing research, and many thanks for sharing the code. Very interesting results...

I was able to reproduce some results and really liked the workflow you created with a CNN rather than optical flow; it seems to handle perspective shifts and backgrounds better (still playing with it).
The dilated mask makes total sense...

My question is about multi-GPU training to speed things up. Here is what I did:

In train.py I uncommented the mirrored_strategy = tf.distribute.MirroredStrategy() line and commented out os.environ["CUDA_VISIBLE_DEVICES"] = FLAGS.GPU_ID, roughly as sketched below.
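
A rough sketch of those two edits (assuming the commented lines exist in train.py as described):

```python
import os
import tensorflow as tf

# commented out, so TensorFlow can see all GPUs rather than just FLAGS.GPU_ID:
# os.environ["CUDA_VISIBLE_DEVICES"] = FLAGS.GPU_ID

# uncommented, hoping to enable multi-GPU data parallelism:
mirrored_strategy = tf.distribute.MirroredStrategy()
```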

With that, it seems training is using both GPUs, but it also shows that GPU_0 is using CUDA and processing, while GPU_1 is only allocating VRAM and does not appear to be doing any compute.
Is that correct?

I also saw @tf.function further down, but I'm not sure whether I should uncomment those lines. I also found #dist_full_ds = mirrored_strategy; I tried uncommenting it, but the second GPU behaves the same way: it only allocates VRAM, with no processing.

Is that the correct behavior?

Thank you, Tengfei Wang, and once again, amazing research.

@Tengfei-Wang (Owner) commented

Hi, thanks for your interest in our work. We just added the distributed training code 'train_dist.py' to this repo. It may take a few minutes to initialize the distributed training when you run train_dist.py.
Currently, the distributed training code only works on TF 2.0, due to API changes in TensorFlow.
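
For reference, seeing VRAM allocated on GPU_1 without compute is expected if only the strategy object is created: TensorFlow grabs memory on every visible GPU by default, but work is actually split across GPUs only when the variables are built inside the strategy scope and the train step runs via strategy.run on a distributed dataset. A minimal TF 2.x sketch of that pattern (illustrative placeholders, not the actual train_dist.py):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH = 16

with strategy.scope():
    # Variables must be created inside the scope so each GPU gets a mirrored copy.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

# Toy data; the dataset must be wrapped so each GPU receives its own slice
# of every batch -- without this, compute stays on a single device.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([256, 8]), tf.random.normal([256, 1]))).batch(GLOBAL_BATCH)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def train_step(inputs):
    x, y = inputs
    with tf.GradientTape() as tape:
        per_example_loss = loss_fn(y, model(x, training=True))
        # Scale by the global batch size, not the per-replica batch size.
        loss = tf.nn.compute_average_loss(per_example_loss,
                                          global_batch_size=GLOBAL_BATCH)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function
def distributed_step(inputs):
    # strategy.run launches train_step on every replica (GPU) in parallel.
    # (On TF 2.0/2.1 this method is called strategy.experimental_run_v2.)
    per_replica_loss = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)

for batch in dist_dataset:
    distributed_step(batch)
```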
