Support alternative mapping TP->PP->DP #8881
Closed
Conversation
jenkins

Commit history (sign-off trailers condensed; all commits carry Signed-off-by: jxin <jxin@nvidia.com>):

* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Remove split rank (Chen Cui <chcui@nvidia.com>)
* Stable Diffusion CUDA Graphs support (Marek Wawrzos <mwawrzos@nvidia.com>, Mingyuan Ma <mingyuanm@nvidia.com>): remove synchronizations, including the sync invoked by tensor allocation; make the model sync-free again; support PyTorch Lightning 2 for the full-iteration CUDA graph callback; add documentation about CUDAGraphCallback and its incompatibility with pipeline parallelism; support synthetic datasets; fix the bug of wrong GN groups; remove a circular dependency; rename enable_amp_o2_fp16 -> unet_precision; adjustments for PyTorch 2.3; push normalization layers to FP16 in full-fp16 mode; enable CUDA Graphs in examples; add model warmup (skipped when no CUDA Graph is captured) and CUDA Graphs tests; add a guard for Megatron fused Adam; fix FSDP bugs in clip_grads
* Fix get_params_for_weight_decay_optimization (Alexandros Koumparoulis <akoumparouli@nvidia.com>): filter returned values by presence of parameters; use module_._parameters.items instead of .named_parameters to avoid duplicate params
* Move precision restoration inside megatron_trainer_builder; don't enforce O1 in eval; safer prefix replacer (Alexandros Koumparoulis <akoumparouli@nvidia.com>)
* Add deploy Triton and query scripts (Onur Yilmaz <oyilmaz@nvidia.com>)
* Enable DGRAD RS overlap; support cases where the TE version is new but NeMo/MCore is not (Jaemin Choi <jaeminc@nvidia.com>)
* Megatron-Core updates (dimapihtar <dpihtar@gmail.com>): add mcore dataset updates; update mcore installation and PYTHONPATH setup; update Apex, TE & PyTorch; add qk_layernorm support for the Falcon and BERT self-attention submodules; add a NeMo implementation for get_gpt_layer_ammo_spec; switch LoRA from the legacy model to the mcore-based model; CI test adjustments
* Use Label-Looping algorithm for RNN-T decoding by default; fix loop labels + stateless decoding (Vladimir Bataev <vbataev@nvidia.com>)
* Fix packed seq doc math rendering issue (Chen Cui <chcui@nvidia.com>)
* Move logic for FP32 embedding grads to models (Tim Moon <tmoon@nvidia.com>)
* Add None check, also for restore func (Nithin Rao Koluguri <nithinraok>)
akoumpa
requested changes
Apr 12, 2024
Can you please add a test for the fake_initialize_model_parallel function? It doesn't have to be 1,000 lines; just go one by one over the parameters (after world size and rank) and test them. Thank you.
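The parameter-by-parameter test suggested above could be sketched as follows. This is a hedged sketch, not the actual NeMo test: `expected_dp_rank` is a hypothetical stand-in for the mapping that `fake_initialize_model_parallel` would compute, and a real test would compare that function's output against reference values like these.

```python
# Sketch of a parameter sweep: instead of one huge case, iterate over the
# parallelism parameters one by one. `expected_dp_rank` is a hypothetical
# helper modeling the data-parallel index of a global rank under either
# rank-ordering; it does NOT call the real fake_initialize_model_parallel.
import itertools

def expected_dp_rank(rank, tp, pp, dp, use_tp_pp_dp_mapping):
    """Data-parallel index of a global rank under either ordering."""
    if use_tp_pp_dp_mapping:       # tp -> pp -> dp: dp varies slowest
        return rank // (tp * pp)
    return (rank // tp) % dp       # default tp -> dp -> pp: dp varies just above tp

def test_dp_rank_mapping():
    for tp, pp, dp in itertools.product([1, 2, 4], repeat=3):
        for mapping in (False, True):
            world = tp * pp * dp
            seen = [expected_dp_rank(r, tp, pp, dp, mapping) for r in range(world)]
            # every dp index must appear exactly tp * pp times
            assert all(seen.count(d) == tp * pp for d in range(dp))

test_dp_rank_mapping()
```

A real version would replace `expected_dp_rank` with the values returned by the function under test, adding one parametrized axis per remaining argument.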
Will reopen with another branch soon.
What does this PR do?
This PR adds support for a new parallel initialization order, using Megatron's new feature: TP -> PP -> DP.
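As an illustration (not the actual Megatron implementation), the sketch below enumerates how global ranks group into data-parallel groups under the default TP -> DP -> PP order versus the new TP -> PP -> DP order. The `groups` helper and its `order` convention (dimensions listed from fastest- to slowest-varying in the rank id) are assumptions made for this example:

```python
def groups(world_size, tp, pp, order, dim):
    """Enumerate rank groups for one parallel dimension.

    `order` lists dimensions from fastest- to slowest-varying in the flat rank id.
    """
    dp = world_size // (tp * pp)
    sizes = {"tp": tp, "pp": pp, "dp": dp}
    # stride of each dimension within the flat rank id
    stride, s = {}, 1
    for d in order:
        stride[d] = s
        s *= sizes[d]
    others = [d for d in order if d != dim]
    result = []
    # fix the other two indices, vary `dim` to collect one group at a time
    for j in range(sizes[others[1]]):
        for i in range(sizes[others[0]]):
            base = i * stride[others[0]] + j * stride[others[1]]
            result.append([base + k * stride[dim] for k in range(sizes[dim])])
    return result

# 8 GPUs, TP=2, PP=2 (so DP=2): the data-parallel groups differ between orders
print(groups(8, 2, 2, ["tp", "dp", "pp"], "dp"))  # default: [[0, 2], [1, 3], [4, 6], [5, 7]]
print(groups(8, 2, 2, ["tp", "pp", "dp"], "dp"))  # new:     [[0, 4], [1, 5], [2, 6], [3, 7]]
```

With the new order, each data-parallel group spans ranks that are `tp * pp` apart, since DP becomes the slowest-varying dimension.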
Collection: [Note which collection this PR will affect]
Changelog
Usage
Just make sure to enable the use-tp-pp-dp-mapping knob.
Jenkins CI
To run Jenkins, a NeMo User with write access must comment jenkins on the PR.
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
Contributor guidelines contains specific people who can review PRs to various areas.
Additional Information