train_batch_size + dataset + actual batch size #62

Closed
agemagician opened this issue Feb 11, 2020 · 8 comments
Comments

agemagician commented Feb 11, 2020

Hello,

I have 4 questions for clarification:

  1. Why should we pass training_data to deepspeed.initialize to generate a new trainloader, rather than using a normal torch trainloader?
  2. Can we use a custom pytorch trainloader if we have a custom dataset that returns, for example, inputs, outputs, and a mask?
  3. What happens if the actual batch size passed to the model is different from the train_batch_size in the json file?
  4. Can we define only gradient_accumulation_steps and train_micro_batch_size_per_gpu and let deepspeed calculate train_batch_size automatically?
@tjruwase
Contributor

  1. Passing training_data to deepspeed.initialize is optional, not required. Some models can benefit from deepspeed's i/o optimizations, but using a torch trainloader is fine.

  2. In our experience, deepspeed works well with custom trainloaders and datasets, so I suspect the answer is yes, but please share any issues you run into.

  3. The train_batch_size in the json file is used to perform gradient accumulation and compute progress statistics, so a mismatch could result in incorrect training and confusing statistics.

  4. Yes, this is supported; a minimal config sketch follows below.
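
For (4), here is a minimal sketch, assuming a DeepSpeed version whose deepspeed.initialize accepts the config as a Python dict (older releases read a JSON file passed via --deepspeed_config instead); model is a placeholder torch.nn.Module defined elsewhere:

import deepspeed

# Hypothetical dict form of the json config: train_batch_size is omitted and
# DeepSpeed derives it as
#   train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size.
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 4,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# training_data is optional, so a plain torch DataLoader can be used outside
# of initialize() instead of passing a dataset here.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)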

@msdejong

@tjruwase could you clarify - how does the parallelization work with a custom dataloader? Do you need to make sure the dataloader uses the local rank as input to load a separate portion of the dataset manually?

@tjruwase
Contributor

That is correct: it is the responsibility of the custom loader to load the appropriate portion of the dataset into GPU memory, based on local rank (or whatever else the client finds appropriate). DeepSpeed does not impose any restrictions on the custom dataloader and does not perform any sanity checks.
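
For illustration, a common way to give each process its own shard is torch's DistributedSampler; this is a sketch, not DeepSpeed-specific, with MyDataset and micro_batch as placeholders. Note that sharding keys off the global rank, while the .to(...) device placement discussed later in this thread uses the local rank.

import torch.distributed as dist
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# Each process iterates over a disjoint 1/world_size slice of the dataset.
dataset = MyDataset()                            # placeholder dataset
sampler = DistributedSampler(
    dataset,
    num_replicas=dist.get_world_size(),          # total GPUs across all nodes
    rank=dist.get_rank(),                        # this process's global rank
)
loader = DataLoader(dataset, batch_size=micro_batch, sampler=sampler)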

Hope that clears things up.


agemagician commented Feb 11, 2020

@tjruwase Thanks for the clarification.
Just one more question.

Assuming I have a custom data generator like that:

for batch_idx, batch in enumerate(DatasetGenerator):
    data = batch['input'].to(model_engine.local_rank)
    target = batch['target'].to(model_engine.local_rank)
    src_padding = batch['padding_mask'].to(model_engine.local_rank)

Does this mean:

  1. The batch size should be equal to train_micro_batch_size_per_gpu?
  2. It should provide a different/random batch for each gpu/node?

@tjruwase
Contributor

  1. Yes, batch_size should be equal to train_micro_batch_size_per_gpu, which is the batch size for a single step on one gpu (see the loop sketch below).

  2. Assuming DatasetGenerator is returning the correct batch for each gpu, this would be correct, since .to() simply moves the data bits into gpu memory.
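
Putting the two points together, a minimal loop sketch (model_engine from deepspeed.initialize; the forward signature is hypothetical):

# Each iteration consumes one micro-batch of size train_micro_batch_size_per_gpu
# on this GPU. DeepSpeed accumulates gradients internally, so the optimizer only
# updates every gradient_accumulation_steps micro-batches and the effective
# train_batch_size = micro_batch * gradient_accumulation_steps * world_size.
for batch_idx, batch in enumerate(DatasetGenerator):
    data = batch['input'].to(model_engine.local_rank)
    target = batch['target'].to(model_engine.local_rank)
    src_padding = batch['padding_mask'].to(model_engine.local_rank)

    loss = model_engine(data, src_padding, target)   # hypothetical forward signature
    model_engine.backward(loss)                      # scales loss for accumulation
    model_engine.step()                              # applies the update only at accumulation boundaries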

@agemagician
Author

@tjruwase Thanks a lot, that answers all my questions for now :)

tnq177 commented Sep 21, 2022

@tjruwase assuming I'm using a custom data loader like @agemagician above, but in a multi-node, multi-gpu setting, how would I go about sending tensors to the right GPU? Do I still do tensor.to(engine.local_rank), or global_rank? Thanks.


tjruwase commented Sep 21, 2022

@tnq177, I believe you want local_rank, since crossing node boundaries would require communication collectives like broadcast.
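
To illustrate (a sketch; model_engine and batch are as in the earlier comments): GPUs are numbered from 0 on each node, so the local rank is what selects a CUDA device on that node, while the global rank is only unique across nodes and is what you would use for dataset sharding.

import torch
import torch.distributed as dist

local_rank = model_engine.local_rank         # e.g. 0..7 on an 8-GPU node
global_rank = dist.get_rank()                # unique across all nodes; use for sharding, not .to()

device = torch.device(f"cuda:{local_rank}")  # same effect as .to(model_engine.local_rank)
data = batch['input'].to(device)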
