train_batch_size + dataset + actual batch size #62
@tjruwase could you clarify - how does the parallelization work with a custom dataloader? Do you need to make sure the dataloader uses the local rank as input to load a separate portion of the dataset manually?
This is correct: it is the responsibility of the custom loader to load the appropriate dataset portion, based on local rank (or whatever else the client finds appropriate), into GPU memory. DeepSpeed does not impose any restrictions on the custom dataloader and does not perform any sanity checks. Hope that clears things up.
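For illustration, a minimal sketch of such a rank-sharded loader, assuming `torch.distributed` has already been initialized (e.g. via `deepspeed.init_distributed()`). `build_sharded_loader` and the strided slicing scheme are just one possible choice here, not something DeepSpeed prescribes:

```python
# Hypothetical sketch: a custom loader that gives each rank a disjoint
# portion of the dataset, as described above. Assumes torch.distributed
# is already initialized (e.g. via deepspeed.init_distributed()).
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset


def build_sharded_loader(features, labels, micro_batch_size):
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    # Strided slicing: rank r keeps samples r, r + world_size, r + 2*world_size, ...
    shard = TensorDataset(features[rank::world_size],
                          labels[rank::world_size])
    return DataLoader(shard, batch_size=micro_batch_size)
```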
@tjruwase Thanks for the clarification. Assuming I have a custom data generator like this:
Does this mean:
@tjruwase Thanks a lot, that answers all my questions for now :)
@tjruwase assuming I'm using a custom data loader as @agemagician does above, but in a multi-node, multi-GPU setting, how would I go about sending tensors to the right GPU? Do I still do
@tnq177, I believe you want
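As a sketch of that pattern, assuming `model_engine` was returned by `deepspeed.initialize(...)` (which binds each process to its local GPU), batches can be moved to the engine's device rather than to a hard-coded one:

```python
# Sketch of a training step in a multi-node, multi-GPU run; model_engine
# is assumed to come from deepspeed.initialize(...), so .device already
# points at this process's local GPU.
for inputs, labels in data_loader:
    inputs = inputs.to(model_engine.device)
    labels = labels.to(model_engine.device)
    loss = model_engine(inputs, labels)
    model_engine.backward(loss)
    model_engine.step()
```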
Hello,
I have 4 questions for clarification:
Should I specify train_micro_batch_size_per_gpu and gradient_accumulation_steps only, and leave DeepSpeed to calculate train_batch_size automatically?
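For context, DeepSpeed links the three batch-size settings as train_batch_size = train_micro_batch_size_per_gpu × gradient_accumulation_steps × number of GPUs, and can derive any one of them from the other two. A sketch of a config that leaves train_batch_size implicit (the values are illustrative):

```python
# Illustrative DeepSpeed config dict: train_batch_size is omitted and
# derived as micro_batch * grad_accum_steps * num_gpus
# (8 * 4 * 2 = 64 on two GPUs).
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 4,
}
```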