
Multi-gpu issues (not utilizing >1 gpu?) #9

Closed
nmallinar opened this issue May 14, 2020 · 16 comments
nmallinar commented May 14, 2020

Hello,

I am trying to run this codebase on a single machine with eight GPUs. I have installed everything through requirements.txt and prepped the data. When I run, I can only use train_batch_size=8 and notice that only one of the eight GPUs is utilized (the other seven show ~300MB of memory in use, while the first GPU shows ~15GB). Additionally, while I can see this GPU usage from the run script, the train log prints: I0514 19:25:27.752172 139628342118208 tpu_estimator.py:2965] Running train on CPU, though I have been ignoring this for now. So I am trying to get the other seven GPUs into the loop so that I can train with train_batch_size=64.

I initially tried wrapping the optimization code in:

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # rest of the code from bert/optimization.py

and I notice that the model is properly replicated across the eight GPUs; however, I still cannot raise train_batch_size above 8. I tried wrapping the dataset object in strategy.experimental_distribute_dataset(ds) at the end of input_fn, before returning it, to see whether it was a matter of batches not being sent to each device. However, I ran into deeper errors that I am unfamiliar with when pursuing this route (if this is the preferable way to enable multi-GPU training, I can update this issue with the stack traces I got after the aforementioned changes).
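For intuition, experimental_distribute_dataset is meant to split each global batch across replicas, which is why the global batch size has to stay a multiple of the replica count. A toy stand-in for what that splitting amounts to (plain Python, no TensorFlow; split_global_batch is a hypothetical name, not an API):

```python
def split_global_batch(batch, num_replicas):
    """Split one global batch into equal per-replica shards."""
    if len(batch) % num_replicas != 0:
        raise ValueError("global batch size must be divisible by replica count")
    per_replica = len(batch) // num_replicas
    return [batch[i * per_replica:(i + 1) * per_replica]
            for i in range(num_replicas)]

# A global batch of 64 examples spread over 8 replicas -> 8 shards of 8
shards = split_global_batch(list(range(64)), num_replicas=8)
print(len(shards), len(shards[0]))  # 8 8
```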

Before debugging further in this direction, I stepped back to the outer run_task_main.py after reading that you can instead pass MirroredStrategy or CentralStorageStrategy objects directly into the RunConfig that goes into an Estimator. So I undid the manual changes I had made at the lower levels (i.e. reset the repo back to master) and added:

run_config = tf.estimator.tpu.RunConfig(
    ...
    train_distribute=strategy,
    ...
)

However, I now run into the error:

Traceback (most recent call last):
  File "tapas/run_task_main.py", line 777, in <module>
    app.run(main)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "tapas/run_task_main.py", line 762, in main
    loop_predict=FLAGS.loop_predict,
  File "tapas/run_task_main.py", line 440, in _train_and_predict
    max_steps=tapas_config.num_train_steps,
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 2876, in train
    rendezvous.raise_errors()
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/error_handling.py", line 131, in raise_errors
    six.reraise(typ, value, traceback)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/six.py", line 703, in reraise
    raise value
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 2871, in train
    saving_listeners=saving_listeners)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 367, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1156, in _train_model
    return self._train_model_distributed(input_fn, hooks, saving_listeners)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1219, in _train_model_distributed
    self._config._train_distribute, input_fn, hooks, saving_listeners)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1255, in _actual_train_model_distributed
    input_fn, ModeKeys.TRAIN, strategy)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1009, in _get_iterator_from_input_fn
    lambda input_context: self._call_input_fn(input_fn, mode,
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 774, in make_input_fn_iterator
    input_fn, replication_mode)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 406, in make_input_fn_iterator
    input_fn, replication_mode=replication_mode)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow/python/distribute/parameter_server_strategy.py", line 318, in _make_input_fn_iterator
    self._container_strategy())
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow/python/distribute/input_lib.py", line 550, in __init__
    result = input_fn(ctx)
  File "/mydata/repos/tapas2/venv/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1010, in <lambda>
    input_context))
TypeError: _call_input_fn() takes 3 positional arguments but 4 were given

which I suspect may have to do with the functools.partial wrap around input_fn, but I am having trouble understanding this or determining next steps (I am generally unfamiliar with Tensorflow as a library).
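For what it's worth, the error pattern itself is easy to reproduce in plain Python: a subclass overrides a method with a narrower signature, and a caller that passes the extra argument (here, the input_context the distribution code supplies) blows up. A minimal stand-in (the class names below are illustrative, not the real estimator classes):

```python
class Estimator:
    # base version accepts the extra input_context argument
    def _call_input_fn(self, input_fn, mode, input_context=None):
        return input_fn()

class TPUEstimator(Estimator):
    # override drops the input_context parameter
    def _call_input_fn(self, input_fn, mode):
        return input_fn()

est = TPUEstimator()
try:
    # distributed code path passes the extra positional argument
    est._call_input_fn(lambda: "batch", "train", object())
except TypeError as e:
    print(e)  # e.g. "... takes 3 positional arguments but 4 were given"
```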

If anybody can help me with this it would be greatly appreciated. Thanks so much for the work and time!

@eisenjulian
Collaborator

Hello @nmallinar, thanks for the question. While we haven't tried multi-GPU training, per the docs the second approach is the correct one: passing the strategy through the arguments in RunConfig, since strategy.scope() is used with Keras models, not Estimator-based ones. I do wonder whether this is supported in the TPUEstimator/TPU RunConfig as we are using them, but if that's not the case, it should be easy to change.

I found this guide for multi-GPU training on TF1 that might be useful; make sure to check the Estimator section. Apparently there is a JSON environment variable that has to be set up properly.
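The variable in question is presumably TF_CONFIG, which TensorFlow's distributed Estimator training reads to learn the cluster layout; it holds a JSON blob. A sketch of setting it from Python (the host/port values below are placeholders, and for single-machine MirroredStrategy this is generally not required):

```python
import json
import os

# Hypothetical single-worker cluster spec; replace hosts/ports with real ones.
tf_config = {
    "cluster": {"worker": ["localhost:12345"]},
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)
print(os.environ["TF_CONFIG"])
```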

The error you mention seems strange, since the third argument for self._call_input_fn is declared here. Can you run pip show to get the versions of tensorflow and tensorflow-estimator in your runtime? They should be 1.14.

@nmallinar
Author

nmallinar commented May 15, 2020

@eisenjulian Yes, I was thinking about adding switches from TPUEstimator -> Estimator in the absence of use_tpu; I have seen similar designs in TF multi-GPU BERT training code in other repos. However, the error seems to indicate that the problem is not the distributed strategy object being passed in but rather something in the input_fn construction/calling, so such switches may not be needed (or, even with the switches, the input_fn may still throw this error with an Estimator; I'll update when I get around to checking this).

Results of pip show:

Name: tensorflow-gpu
Version: 1.14.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: packages@tensorflow.org
License: Apache 2.0
Location: /mydata/repos/tapas2/venv/lib/python3.7/site-packages
Requires: protobuf, tensorboard, keras-applications, numpy, grpcio, wheel, tensorflow-estimator, wrapt, google-pasta, termcolor, astor, keras-preprocessing, six, absl-py, gast
Required-by: tapas

Name: tensorflow-estimator
Version: 1.14.0
Summary: TensorFlow Estimator.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: UNKNOWN
License: Apache 2.0
Location: /mydata/repos/tapas2/venv/lib/python3.7/site-packages
Requires:
Required-by: tensorflow-gpu

I will look further into this guide you posted as well, thanks for the reference.

@nmallinar
Author

Switching from TPUEstimator -> Estimator solves the input_fn issue; it seems the two have different self._call_input_fn signatures. So I switched all estimators and dependent objects over to the non-TPU classes and their relevant params.

This still left one issue: the BERT optimizer in its current form is not multi-GPU friendly, so I adapted this implementation for multi-GPU in models/bert/optimization.py: https://github.com/HaoyuHu/bert-multi-gpu/blob/master/custom_optimization.py, and now I am able to train properly.

In case anybody else is following this solution path: train_batch_size should now be specified as the per-GPU batch size, and then internally, when computing num_train_steps, you should multiply by the number of GPUs to use your effective batch size. I am unable to get the gradient accumulation wrapper as it stands to work in the multi-GPU setting, but I can update with a solution if I end up trying to make it work. Anyway, I will close this issue for now.
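The arithmetic above can be sketched as follows (num_train_steps here is a hypothetical helper, not a function from the repo): with a per-GPU batch size, the effective batch is per-GPU batch × number of GPUs, and the step count should be derived from that effective batch.

```python
def num_train_steps(num_examples, per_gpu_batch_size, num_gpus, num_epochs):
    """Steps needed to cover the data when each step consumes one
    effective (global) batch spread over all GPUs."""
    effective_batch = per_gpu_batch_size * num_gpus  # e.g. 8 * 8 = 64
    return (num_examples * num_epochs) // effective_batch

print(num_train_steps(64000, per_gpu_batch_size=8, num_gpus=8, num_epochs=1))  # 1000
```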

Thanks!

@sarahpanda

I'm also running into the same problem on multi-GPU; single-GPU works fine but is much, much slower than TPU. I have changed all TPUEstimator objects to Estimator objects, tried adapting the BERT optimizer per the link shared, and am using MirroredStrategy, with and without specifying the devices. But either the process does not run on the GPUs, or, if it shows as running, the volatile utilization shows 0% on both, so I believe it doesn't work. I would appreciate it if you could share more about your workaround.

@dhuy237

dhuy237 commented Aug 6, 2020

Hi @nmallinar, can you share your work on where we need to change from TPUEstimator to Estimator? I have the same issue as you. Thanks.

@nmallinar
Author

Hello @dhuy237, my implementation is based on a slightly outdated version of the TAPAS codebase, and unfortunately I got too busy at the time to submit the code. I plan to resolve those diffs and host a fork or submit a PR accordingly. If there is anything specific you are having trouble with on your end, though, I may be able to help you debug, as I ran into many errors along the way and might be able to help you avoid some of the same mistakes.

@sarahpanda

sarahpanda commented Aug 13, 2020 via email

@dhuy237

dhuy237 commented Aug 14, 2020

@nmallinar I still have this error when trying to train the model with a single GPU:
TypeError: _call_input_fn() takes 3 positional arguments but 4 were given.
And I don't know where to start fixing this bug.

@sarahpanda I hope your solution can help me.

@nmallinar
Author

nmallinar commented Aug 14, 2020

@dhuy237 so these are the non-TPU version of objects in run_task_main.py that I use:

run_config = tf.estimator.RunConfig(...)
estimator = tf.estimator.Estimator(
    ...,
    model_fn=model_fn,
    config=run_config)

and in tapas_classifier_model.py:

output_spec = tf.estimator.EstimatorSpec(...)

I could not get it to work with the tf.estimator.tpu.* classes; I think I ran into this error when I still had the TPU version of one of these in place.

@dhuy237

dhuy237 commented Aug 14, 2020

@nmallinar I didn't notice that this repo is for TAPAS. I am trying to run this repo: https://github.com/zihangdai/xlnet. I will check your solution against my project. Thanks for your help.

@ghost ghost reopened this Aug 14, 2020
@ghost

ghost commented Aug 14, 2020

@sarahpanda It would be great if you could share your solution here!

@sarahpanda

sarahpanda commented Aug 14, 2020 via email

@dhuy237

dhuy237 commented Aug 16, 2020

After changing my code like @nmallinar:

output_spec = tf.estimator.EstimatorSpec(
    mode=mode,
    loss=loss,
    train_op=train_op,
    scaffold=scaffold_fn)

run_config = tf.estimator.RunConfig(FLAGS)
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    config=run_config,
    params={'batch_size': 8})

I don't get this error anymore:
TypeError: _call_input_fn() takes 3 positional arguments but 4 were given

But I got this error:

Traceback (most recent call last):
  File "run_coqa.py", line 1775, in <module>
    tf.app.run()
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "run_coqa.py", line 1714, in main
    estimator.train(input_fn=train_input_fn, max_steps=2000) # max_steps=FLAGS.train_steps)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 367, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_mo
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1192, in _train_mo
    saving_listeners)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1420, in _train_wiec
    scaffold=estimator_spec.scaffold)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 546, in __init_
    self._save_path = os.path.join(checkpoint_dir, checkpoint_basename)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/posixpath.py", line 80, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not FlagValues

Do you guys know why I got this?
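One plausible culprit, judging from the traceback alone: tf.estimator.RunConfig(FLAGS) passes the absl FLAGS object as the first positional argument, which for RunConfig is model_dir; that object eventually reaches os.path.join and triggers exactly this TypeError. Passing the directory explicitly (e.g. tf.estimator.RunConfig(model_dir=...)) should avoid it. A stand-in reproducing the failure (the FlagValues class below is a dummy, not absl's):

```python
import os

class FlagValues:
    """Dummy stand-in for absl.flags.FLAGS (not a path-like object)."""

# What checkpoint_dir becomes if FLAGS is passed positionally as model_dir
checkpoint_dir = FlagValues()
try:
    os.path.join(checkpoint_dir, "model.ckpt")
except TypeError as e:
    print(e)  # expected str, bytes or os.PathLike object, not FlagValues
```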

@eisenjulian
Collaborator

Hello @dhuy237, I am a bit confused by your stack trace, since I don't recognize the paths of the files, especially the run_coqa.py one. Can you confirm you are running the correct binary?

@dhuy237

dhuy237 commented Aug 24, 2020

@eisenjulian I am trying to run this repo: https://github.com/stevezheng23/xlnet_extension_tf. So it is different from this repo, but I got the same error as @nmallinar had before.
After I changed my code as @nmallinar suggested, I got the TypeError. I posted the error here hoping someone could help me solve it.
If it is not appropriate for this thread, I can delete my comment. Thanks.

@eisenjulian
Collaborator

I recommend you ask in their repo. It does seem that changing TPUEstimator -> Estimator and TPUEstimatorSpec -> EstimatorSpec fixed the signature issue, so consider double-checking that you didn't miss any instance of TPUEstimator.

@ghost ghost closed this as completed Sep 15, 2020