
Support alternative mapping TP->PP->DP #8881

Closed · wants to merge 19 commits

Conversation

@ftxj (Contributor) commented on Apr 11, 2024

What does this PR do?

This PR adds support for a new parallel initialization order, TP -> PP -> DP, using Megatron Core's new feature.

Collection:

  • NLP collection

Changelog

  • Refactored rank calculation to use Megatron Core's new interface (see the illustrative sketch below).
  • Added a new knob, use_tp_pp_dp_mapping, to enable this feature.
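To make the reordering concrete, here is a small illustrative sketch in plain Python. It is not the NeMo or Megatron Core implementation; the helper name, the axis tuple, and the example sizes are all hypothetical. It only shows how a global rank decomposes into (tp, pp, dp) coordinates when the axis order changes from the default tp -> dp -> pp to the alternative tp -> pp -> dp.

```python
# Illustrative sketch only -- not NeMo/Megatron Core code. The helper and the
# example sizes are hypothetical; the first axis listed in `order` varies fastest.
def rank_to_coords(world_size, tp, pp, dp, order):
    """Map each global rank to its (tp, pp, dp) coordinates for a given axis order."""
    sizes = {"tp": tp, "pp": pp, "dp": dp}
    assert tp * pp * dp == world_size
    mapping = {}
    for rank in range(world_size):
        coords, remainder = {}, rank
        for axis in order:                      # earlier axes vary fastest
            coords[axis] = remainder % sizes[axis]
            remainder //= sizes[axis]
        mapping[rank] = coords
    return mapping

# 8 ranks, TP=2, PP=2, DP=2
default_map = rank_to_coords(8, tp=2, pp=2, dp=2, order=("tp", "dp", "pp"))
tp_pp_dp_map = rank_to_coords(8, tp=2, pp=2, dp=2, order=("tp", "pp", "dp"))

for rank in range(8):
    print(rank, default_map[rank], tp_pp_dp_map[rank])
```

In this toy example, data-parallel peers sit tp * pp ranks apart under the tp -> pp -> dp order instead of tp ranks apart under the default order; the sketch only illustrates that spacing difference, not the actual group-building logic in Megatron Core.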

Usage

  • Enable the use_tp_pp_dp_mapping knob in the training config. A hedged usage sketch is shown below.
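A minimal usage sketch follows. The config layout (placing the knob under model) and the surrounding parallelism values are assumptions for illustration, not something this PR confirms:

```python
# Hedged sketch: the exact config location of use_tp_pp_dp_mapping (here under
# `model`) is an assumption; check the config schema that ships with this PR.
from omegaconf import OmegaConf

cfg = OmegaConf.create(
    {
        "model": {
            "tensor_model_parallel_size": 2,
            "pipeline_model_parallel_size": 2,
            "use_tp_pp_dp_mapping": True,  # enable the TP -> PP -> DP initialization order
        }
    }
)
print(OmegaConf.to_yaml(cfg))
```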

Jenkins CI

To run Jenkins, a NeMo User with write access must comment jenkins on the PR.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

@ericharper (Collaborator) commented:

jenkins

ftxj and others added 19 commits April 12, 2024 10:12
Signed-off-by: jxin <jxin@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
for more information, see https://pre-commit.ci

Signed-off-by: jxin <jxin@nvidia.com>
* remove split rank

Signed-off-by: Chen Cui <chcui@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Chen Cui <chcui@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: jxin <jxin@nvidia.com>
* [SD] remove synchronizations

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Typo in logging

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* [SD] Remove the sync invoked by tensor allocation.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Make the model sync-free again.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Support PyTorch Lightning 2 for full iteration CUDA graph callback.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Add documentation about CUDAGraphCallback.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Support synthetic dataset.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Fix typo.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Fix the bug of wrong GN groups.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* remove circular dependency

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Change naming for offline clip

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Add exception when no gradient allreduce is called.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* rename enable_amp_o2_fp16 -> unet_precision

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Adjustments to PyTorch 2.3

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* fix CUDA Graphs support in SD

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Document incompatibility between pipeline parallelism and full iteration CUDA Graph callback for SD.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* update CUDA Graphs callback to PTL 2.1

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* [SD] Full-fp16: push normalization layers in FP16.

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* [SD] enable CUDA Graphs in examples

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* [SD] add model warmup

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* fix sanity-check for CUDA Graphs

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* [SD] CUDA Graphs test

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Update cuda graph jenkins test

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* fix typo

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* fix path in test

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* handle unexpected precision value for PipelineMixedPrecisionPlugin

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* remove unused import

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* replace unsupported syntax

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* typo

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* Add a guard for megatron fused adam

Signed-off-by: Mingyuan Ma <mingyuanm@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix bugs for FSDP in clip_grads

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

* [SD] skip model warmup when CUDA Graph not captured

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>

---------

Signed-off-by: Marek Wawrzos <mwawrzos@nvidia.com>
Signed-off-by: Mingyuan Ma <mingyuanm@nvidia.com>
Signed-off-by: Marek Wawrzos <marek.28.93@gmail.com>
Co-authored-by: Szymon Mikler <smikler@nvidia.com>
Co-authored-by: Wil Kong <alpha0422@gmail.com>
Co-authored-by: Mengdi Wang <didow@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Ming <111467530+Victor49152@users.noreply.github.com>
Co-authored-by: Mingyuan Ma <mingyuanm@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
* fix get_params_for_weight_decay_optimization

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* filter returned values by presence of parameters

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* use module_._parameters.items instead of .named_parameters to avoid duplicate params

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

---------

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
* Move precision restoration inside megatron_trainer_builder

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Don't enforce O1 in eval

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* safer prefix replacer

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* comment

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* drop conf resolve

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* typo

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* fix

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

---------

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: jxin <jxin@nvidia.com>
* Add deploy triton and query scripts

Signed-off-by: Onur Yilmaz <oyilmaz@nvidia.com>

* Update scripts based on reviews

Signed-off-by: Onur Yilmaz <oyilmaz@nvidia.com>

---------

Signed-off-by: Onur Yilmaz <oyilmaz@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
Signed-off-by: jiemingz <jiemingz@nvidia.com>
Co-authored-by: jiemingz <jiemingz@nvidia.com>
Co-authored-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
Signed-off-by: jxin <jxin@nvidia.com>
* Enable DGRAD RS overlap

Signed-off-by: Jaemin Choi <jaeminc@nvidia.com>

* Support cases where TE version is new but NeMo/MCore is not

Signed-off-by: Jaemin Choi <jaeminc@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Clean up syntax

Signed-off-by: Jaemin Choi <jaeminc@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Jaemin Choi <jaeminc@nvidia.com>
Co-authored-by: Jaemin Choi <jaeminc@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: jxin <jxin@nvidia.com>
Signed-off-by: Chen Cui <chcui@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
* add mcore dataset updates

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix mcore import

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* revert config

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* update mcore installation

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* update mcore installation

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* revert config

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update apex, TE & PyT

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* setup pythonpath for mcore

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* add mcore to python path

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* add mcore to pythonpath

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* update pythonpath for mcore

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* change pythonpath for mcore

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* update mcore pythonpath

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* update mcore pythonpath

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* revert mcore ds changes

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* revert config

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* add qk_layernorm support for Falcon self attn submodule

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* code style changes

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* add nemo implementation for get_gpt_layer_ammo_spec

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix typo

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* skip Llama2 - INT8 SQ test

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* skip Llama2 - INT8 SQ test

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* comment out NeMo PTQ test

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* bert mcore updates

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add qk_layernorm support for bert's self attention submodule

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* add qk_layernorm support for bert's self attn submodule

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change mcore commit

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* switch back to mcore original

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* bugfix

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* update TE

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* change legacy model to mcore based model for lora

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* remove unnecessary files

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* update mcore commit

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* uncomment PTQ tests

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* remove sbert

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* switch back to mcore main

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* remove unused variable

Signed-off-by: dimapihtar <dpihtar@gmail.com>

* comment out CUDA Graph test

Signed-off-by: dimapihtar <dpihtar@gmail.com>

---------

Signed-off-by: dimapihtar <dpihtar@gmail.com>
Signed-off-by: Dmytro Pykhtar <37850217+dimapihtar@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
Co-authored-by: Pablo Garay <palenq@gmail.com>
Signed-off-by: jxin <jxin@nvidia.com>
* Use Label-Looping algorithm for RNN-T decoding by default
* Fix loop labels + stateless decoding

---------

Signed-off-by: Vladimir Bataev <vbataev@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
* Fix packed seq doc math rendering issue

Signed-off-by: Chen Cui <chcui@nvidia.com>

* Fix packed seq doc math rendering issue

Signed-off-by: Chen Cui <chcui@nvidia.com>

---------

Signed-off-by: Chen Cui <chcui@nvidia.com>
Signed-off-by: jxin <jxin@nvidia.com>
* Move logic for FP32 embedding grads to models

Signed-off-by: Tim Moon <tmoon@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: jxin <jxin@nvidia.com>
* add none check

Signed-off-by: Nithin Rao Koluguri <nithinraok>

* add for restore func

Signed-off-by: Nithin Rao Koluguri <nithinraok>

---------

Signed-off-by: Nithin Rao Koluguri <nithinraok>
Co-authored-by: Nithin Rao Koluguri <nithinraok>
Signed-off-by: jxin <jxin@nvidia.com>
@akoumpa (Collaborator) left a comment:


Can you please add a test for the fake_initialize_model_parallel function? It doesn't have to be 1000 lines; just go through the parameters (after world_size and rank) one by one and test them. Thank you.
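For reference, a minimal sketch of what such a test could look like is below. The import path, keyword argument names (including use_tp_pp_dp_mapping), and the shape of the return value are assumptions and would need to be checked against the actual fake_initialize_model_parallel signature in NeMo.

```python
# Hedged test sketch, not part of this PR. The import path, argument names, and
# return value below are assumptions; verify them against the real function.
import pytest

from nemo.collections.nlp.modules.common.megatron.megatron_init import (
    fake_initialize_model_parallel,  # assumed location
)


@pytest.mark.parametrize("tp,pp", [(1, 1), (2, 1), (1, 2), (2, 2)])
@pytest.mark.parametrize("use_tp_pp_dp_mapping", [False, True])
def test_fake_initialize_model_parallel(tp, pp, use_tp_pp_dp_mapping):
    world_size = 8
    for rank in range(world_size):
        result = fake_initialize_model_parallel(
            world_size=world_size,
            rank=rank,
            tensor_model_parallel_size_=tp,        # assumed keyword names
            pipeline_model_parallel_size_=pp,
            use_tp_pp_dp_mapping=use_tp_pp_dp_mapping,  # knob added by this PR (assumed keyword)
        )
        # Replace with assertions on the returned TP/PP/DP ranks and group ids
        # once the actual return structure is confirmed.
        assert result is not None
```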

@ftxj (Contributor, Author) commented on Apr 12, 2024

Will reopen with another branch soon.

@ftxj closed this on Apr 12, 2024