Multi-gpu issues (not utilizing >1 gpu?) #9
Hello @nmallinar, thanks for the question. While we haven't tried multi-GPU training, following the docs the second approach is the correct one: pass the strategy through the arguments in RunConfig, since the scope is used in Keras models, not in Estimator-based ones. I do wonder whether this is supported in the TPUEstimator/TPURunConfig we are using, but if not, it should be easy to change. I found this guide for multi-GPU training on TF1 that might be useful; make sure to check the Estimator section. Apparently there's a JSON environment variable that has to be set up properly. The error you mention seems strange, since the third argument for self._call_input_fn is declared here. Can you share the output of `pip show` for your TensorFlow packages?
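For reference, the RunConfig route looks roughly like this. A sketch only, assuming TF 1.14+ where `tf.estimator.RunConfig` accepts `train_distribute`; the `model_fn` here is a placeholder for the repo's existing model function, not its real signature:

```python
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # Placeholder: the real model_fn comes from the existing codebase.
    raise NotImplementedError

# Pass the distribution strategy through RunConfig instead of a scope,
# as the Estimator docs describe for non-Keras models.
strategy = tf.distribute.MirroredStrategy()
run_config = tf.estimator.RunConfig(
    train_distribute=strategy,
    eval_distribute=strategy,
)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
```

Whether `TPUEstimator`/`tpu.RunConfig` accept the same arguments is exactly the open question above.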
@eisenjulian Yes, I was thinking about adding switches from TPUEstimator -> Estimator in the absence of use_tpu; I have seen similar designs in TF multi-GPU BERT training code in other repos. However, the error seems to indicate that the problem is not in passing the distribution strategy object but rather something in the input_fn construction/calling, so such switches may not be needed (or, even with the switches, the input_fn may still throw this error with an Estimator; I'll update when I get around to checking this). Results of pip show:
I will look further into the guide you posted as well, thanks for the reference.
Switching from TPUEstimator to Estimator solves the input_fn issue; it seems they have different self._call_input_fn signatures. So I switched all estimators and dependent objects to their non-TPU counterparts with the relevant params. This still left one issue: the BERT optimizer in its current form is not multi-GPU friendly, so I adapted this implementation for multi-GPU in models/bert/optimization.py: https://github.com/HaoyuHu/bert-multi-gpu/blob/master/custom_optimization.py and now I am able to train properly. In case anybody else is following this solution path: train_batch_size should now be specified as the per-GPU batch size, and then, when computing num_train_steps internally, you should multiply by the number of GPUs to use your effective batch size. I was unable to get the gradient accumulation wrapper, as it stands, to work in the multi-GPU setting, but I can update with a solution if I end up making it work. Anyway, I will close this issue for now. Thanks!
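To make the batch-size bookkeeping above concrete (illustrative numbers; these variable names are not the repo's actual flags):

```python
# Per-GPU batch size times the number of replicas gives the effective
# global batch size, which in turn determines num_train_steps.
num_gpus = 8
per_gpu_batch_size = 8      # what train_batch_size now means
num_examples = 100_000
num_epochs = 3

effective_batch_size = per_gpu_batch_size * num_gpus
num_train_steps = num_examples * num_epochs // effective_batch_size
print(effective_batch_size, num_train_steps)  # 64 4687
```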
Also running into the same problem on multi-GPU; single GPU works fine but is much, much slower than TPU. I have changed all TPUEstimator objects to Estimator objects, tried to adapt the BERT optimiser per the link shared, and am using MirroredStrategy, with and without specifying the devices. But either the process doesn't run on the GPUs, or, if it shows as running, the volatile utilization shows 0% on both, so I believe it isn't working. I would appreciate it if you could share more about your workaround.
Hi @nmallinar, can you share your work on where we need to change from TPUEstimator to Estimator?
Hello @dhuy237, my implementation is based on a slightly outdated version of the Tapas codebase, and unfortunately I got too busy at the time to re-submit the code. I plan to resolve those diffs and host a fork or submit a PR accordingly. If there is anything specific you are having trouble with on your end, I may be able to help you debug, as I ran into many errors along the way and might be able to help you avoid some of the same mistakes.
Hi all,
I got it to work with some changes; it took me a week or so to get it working, and I can share my changes if that helps. I can consolidate the changes and share them by tonight or tomorrow, and you can try that.
Thanks
Sarah
@nmallinar I still get this error when trying to train the model with a single GPU. @sarahpanda I hope your solution can help me.
@dhuy237 so these are the non-TPU versions of the objects in run_task_main.py that I use:
and in tapas_classifier_model.py:
I could not get it to work with the tf `*.tpu` classes. I think I ran into this error whenever I still used the TPU version of one of them.
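Since my exact diff isn't posted yet, here is an illustrative summary of the class swaps involved (module paths as in TF 1.15; they vary across 1.x releases, so treat these as assumptions rather than the repo's actual imports):

```python
# Each TPU-flavored class has a plain-Estimator counterpart; TPU-only
# constructor arguments (use_tpu, tpu_config, and per-split batch sizes
# on the estimator itself) are dropped in the process.
TPU_TO_PLAIN = {
    "tf.estimator.tpu.TPUEstimator": "tf.estimator.Estimator",
    "tf.estimator.tpu.TPUEstimatorSpec": "tf.estimator.EstimatorSpec",
    "tf.estimator.tpu.RunConfig": "tf.estimator.RunConfig",
}
for tpu_cls, plain_cls in TPU_TO_PLAIN.items():
    print(tpu_cls, "->", plain_cls)
```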
@nmallinar I didn't notice that this repo is different from the one I am running.
@sarahpand It would be great if you could share your solution here!
Sure, let me do that.
Sarah
After changing my code like @nmallinar:
I don't get that error anymore, but I now get this error:
Do you know why I get this?
Hello @dhuy237, I am a bit confused by your stack trace since I don't recognize the file paths, especially the run_coqa.py one. Can you confirm you are running the correct binary?
@eisenjulian I am trying to run this repo: https://github.com/stevezheng23/xlnet_extension_tf, so it is different from this one. But I got the same error as @nmallinar had before.
I recommend you ask in their repo. It does seem that changing TPUEstimator -> Estimator and TPUEstimatorSpec -> EstimatorSpec fixed the signature issue, so consider double-checking that you didn't miss any instances of TPUEstimator.
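One low-tech way to double-check for leftovers is a plain grep over the source tree (shown with a fallback so the command exits cleanly even when nothing matches):

```shell
# List every remaining reference to the TPU-specific classes; an empty
# result means the migration to plain Estimator is complete.
grep -rn --include="*.py" -E "TPUEstimator|TPUEstimatorSpec" . || true
```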
Hello,
I am trying to run this codebase on a single machine with eight GPUs. I have installed everything through requirements.txt and prepped the data. When I run, I can only use train_batch_size=8 and notice that only one of the eight GPUs is utilized (the other seven show ~300MB of data on device, while the first GPU shows ~15GB). Additionally, while I can see this GPU usage from the run script, I get this message in the train log:
I0514 19:25:27.752172 139628342118208 tpu_estimator.py:2965] Running train on CPU
, though I have been ignoring this for now. So I am trying to get the other seven GPUs in the loop so that I can train with train_batch_size=64. I initially tried wrapping the optimization code in:
and I noticed that the model is properly replicated across the eight GPUs; however, I cannot expand my train_batch_size to any multiple larger than 8. I tried wrapping the dataset object in strategy.experimental_distribute_dataset(ds) at the end of input_fn, before returning, to see if it was a matter of not sending batches to each device. However, I ran into deeper errors that I am unfamiliar with when pursuing this route (if this is the preferable way to enable multi-GPU, I can update this issue with the stack traces I got after running with those changes). Before debugging further in this direction, I stepped back to the outer run_task_main.py after reading that you can instead pass MirroredStrategy or CentralStorageStrategy objects directly into the RunConfig that goes into an Estimator. So I undid the changes I had made manually at the lower levels (i.e., reset the repo back to master) and added:
However, I now run into the error:
which I suspect may have to do with the functools.partial wrap around input_fn, but I am having trouble understanding this or determining next steps (I am generally unfamiliar with TensorFlow as a library).
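For intuition only (this is a plain-Python sketch, not TF's actual internals): a TypeError like this typically means the caller supplies more positional arguments than the wrapped function accepts, which is what happens when a TPU-style caller passing extra arguments meets an Estimator-style input_fn bound with functools.partial:

```python
import functools

def input_fn(params):
    # Estimator-style: accepts only a params dict.
    return {"batch_size": params["batch_size"]}

# The partial already binds params; a caller that then supplies an
# extra positional argument fails with a signature-mismatch TypeError.
wrapped = functools.partial(input_fn, {"batch_size": 8})

try:
    wrapped("extra_argument")   # simulates the extra argument
    failed = False
except TypeError as err:
    failed = True
    print("TypeError:", err)
```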
If anybody can help me with this it would be greatly appreciated. Thanks so much for the work and time!