Enable NeMo importer and loading dist CKPT for training #11927

Merged
Victor49152 merged 92 commits into main from mingyuanm/flux_controlnet_sharded_dict on Jan 23, 2025

Conversation

@Victor49152
Collaborator

What does this PR do?

  • Add a NeMo importer that converts Hugging Face (HF) checkpoints into NeMo distributed (dist) checkpoints
  • Enable loading checkpoints with tensor parallelism (TP > 1) by implementing a sharded_state_dict method (a sketch of both pieces follows below)
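
For context, a minimal sketch of what these two pieces typically look like, assuming Megatron-Core's dist_checkpointing API and an initialized tensor-parallel process group. All class, key, and function names below are illustrative, not the PR's actual code:

```python
import torch
from megatron.core import dist_checkpointing, parallel_state
from megatron.core.dist_checkpointing.mapping import ShardedTensor


class TinyFluxBlock(torch.nn.Module):
    """Hypothetical stand-in for a tensor-parallel Flux submodule."""

    def __init__(self, hidden: int = 1024):
        super().__init__()
        tp = parallel_state.get_tensor_model_parallel_world_size()
        # Column-parallel weight: dim 0 of the full weight is split across TP ranks.
        self.proj = torch.nn.Linear(hidden, hidden // tp, bias=False)

    def sharded_state_dict(self, prefix: str = ""):
        rank = parallel_state.get_tensor_model_parallel_rank()
        size = parallel_state.get_tensor_model_parallel_world_size()
        key = f"{prefix}proj.weight"
        # Declare this rank's fragment of the global tensor:
        # axis 0 is split into `size` fragments and this rank owns fragment `rank`.
        sharded = ShardedTensor.from_rank_offsets(
            key, self.proj.weight.data, (0, rank, size)
        )
        return {key: sharded}


def shard_for_this_rank(full: torch.Tensor) -> torch.Tensor:
    """Slice a full (unsharded) HF tensor down to this TP rank's fragment."""
    rank = parallel_state.get_tensor_model_parallel_rank()
    size = parallel_state.get_tensor_model_parallel_world_size()
    return full.chunk(size, dim=0)[rank].contiguous()


def import_hf_to_dist_ckpt(hf_state: dict, module: TinyFluxBlock, ckpt_dir: str):
    """Importer sketch: copy remapped HF weights in, then save a dist ckpt."""
    # A real importer remaps every HF tensor name to its NeMo counterpart;
    # "proj.weight" is a hypothetical key used only for illustration.
    module.proj.weight.data.copy_(shard_for_this_rank(hf_state["proj.weight"]))
    dist_checkpointing.save(module.sharded_state_dict(), ckpt_dir)
```

Because each weight is declared as a ShardedTensor rather than a plain tensor, the saved checkpoint can later be loaded under a different TP size via dist_checkpointing.load(module.sharded_state_dict(), ckpt_dir), with each rank resolving only its own fragments.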

Collection:
diffusion

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries? (A typical guard pattern is sketched below.)
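
For reviewers unfamiliar with the convention, a typical import guard looks roughly like this (a generic sketch of the pattern, not the exact guard used in this PR):

```python
# Guard an optional dependency so the module imports cleanly without it
# and fails with an actionable error only when the feature is used.
try:
    from apex.normalization import FusedLayerNorm  # optional dependency

    HAVE_APEX = True
except (ImportError, ModuleNotFoundError):
    HAVE_APEX = False


def make_layer_norm(hidden_size: int):
    if not HAVE_APEX:
        raise ImportError(
            "Apex is not installed; install it to use the fused layer norm."
        )
    return FusedLayerNorm(hidden_size)
```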

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

Victor49152 and others added 30 commits September 4, 2024 15:25, and 12 more on January 9, 2025 03:47.
(The commit list is truncated in this view; the full messages appear in the squashed commit below. All commits are signed off by mingyuanm <mingyuanm@nvidia.com> or Victor49152 <Victor49152@users.noreply.github.com>. Merge commits resolved conflicts in nemo/collections/diffusion/flux/pipeline.py, nemo/collections/diffusion/__init__.py, nemo/collections/diffusion/vae/__init__.py, nemo/collections/diffusion/vae/autoencoder.py, nemo/collections/diffusion/data/diffusion_mock_datamodule.py, nemo/collections/diffusion/models/flux/model.py, nemo/collections/llm/peft/api.py, nemo/lightning/_strategy_lib.py, nemo/lightning/megatron_parallel.py, scripts/dit/dit_train.py, scripts/flux/flux_controlnet_infer.py, scripts/flux/flux_controlnet_training.py, scripts/flux/flux_infer.py, and scripts/flux/flux_training.py.)
@Victor49152 added the "Multi Modal", "Run CICD", and "PoR" (major feature to be highlighted in release notes) labels on Jan 22, 2025
@github-actions
Contributor

beep boop 🤖: 🙏 The following files have warnings. If you are familiar with these, please help us improve the code base.


Your code was analyzed with PyLint. The following annotations have been identified:

************* Module nemo.collections.diffusion.models.flux.model
nemo/collections/diffusion/models/flux/model.py:753:0: C0301: Line too long (122/119) (line-too-long)
nemo/collections/diffusion/models/flux/model.py:67:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/model.py:101:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/model.py:107:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/model.py:114:0: C0115: Missing class docstring (missing-class-docstring)
************* Module nemo.collections.diffusion.utils.flux_pipeline_utils
nemo/collections/diffusion/utils/flux_pipeline_utils.py:15:0: W0611: Unused dataclass imported from dataclasses (unused-import)
nemo/collections/diffusion/utils/flux_pipeline_utils.py:17:0: W0611: Unused import torch (unused-import)
************* Module nemo.lightning.megatron_parallel
nemo/lightning/megatron_parallel.py:245:0: C0301: Line too long (127/119) (line-too-long)
nemo/lightning/megatron_parallel.py:246:0: C0301: Line too long (140/119) (line-too-long)
nemo/lightning/megatron_parallel.py:247:0: C0301: Line too long (130/119) (line-too-long)
nemo/lightning/megatron_parallel.py:554:0: C0301: Line too long (129/119) (line-too-long)
nemo/lightning/megatron_parallel.py:561:0: C0301: Line too long (135/119) (line-too-long)
nemo/lightning/megatron_parallel.py:849:0: C0301: Line too long (137/119) (line-too-long)
nemo/lightning/megatron_parallel.py:1079:0: C0301: Line too long (136/119) (line-too-long)
nemo/lightning/megatron_parallel.py:1652:0: C0301: Line too long (128/119) (line-too-long)
nemo/lightning/megatron_parallel.py:1691:0: C0301: Line too long (146/119) (line-too-long)
nemo/lightning/megatron_parallel.py:71:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:72:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:74:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:109:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:113:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:313:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:337:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:363:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:389:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:525:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:569:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:573:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:639:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:674:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:680:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:686:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:693:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:700:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:734:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:742:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:758:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:785:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:797:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:819:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1345:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1520:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1526:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1532:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1536:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1541:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1546:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1574:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1620:8: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1642:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1715:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1761:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1775:0: C0116: Missing function or method docstring (missing-function-docstring)

-----------------------------------
Your code has been rated at 9.50/10

Mitigation guide:

  • Add sensible and useful docstrings to functions and methods
  • For trivial methods like getter/setters, consider adding # pylint: disable=C0116 inside the function itself
  • To disable multiple functions/methods at once, put a # pylint: disable=C0116 before the first and a # pylint: enable=C0116 after the last.
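
For example, the two suppression styles look like this (an illustrative snippet, not code from this PR):

```python
class Cache:
    """Toy container used only to illustrate the suppression styles."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        # pylint: disable=C0116
        return self._store.get(key)

    # pylint: disable=C0116
    def put(self, key, value):
        self._store[key] = value

    def clear(self):
        self._store.clear()
    # pylint: enable=C0116
```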

By applying these rules, we reduce the occurrence of this message in the future.

Thank you for improving NeMo's documentation!

1 similar comment
@github-actions
Contributor

[🤖]: Hi @Victor49152 👋,

We wanted to let you know that a CICD pipeline for this PR just finished successfully.

So it might be time to merge this PR or get some approvals.

I'm just a bot, so I'll leave it to you to decide what to do next.

//cc @pablo-garay @ko3n1g

@Victor49152 enabled auto-merge (squash) January 23, 2025 02:24
@Victor49152 requested a review from yaoyu-33 January 23, 2025 02:25
@Victor49152 merged commit 6aeef75 into main Jan 23, 2025
211 of 213 checks passed
@Victor49152 deleted the mingyuanm/flux_controlnet_sharded_dict branch January 23, 2025 02:28
parthmannan pushed a commit that referenced this pull request Jan 28, 2025
* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux model added.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Copying FlowMatchEulerScheduler over

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* WIP: Start to test the pipeline forward pass

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference pipeline runs with offloading function

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Start to test image generation

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Decoding with VAE part has been verified. Still need to check the denoising loop.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* The inference pipeline is verified.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add arg parsers and refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Tested with multiple batch sizes and prompts.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Renaming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Move scheduler to sampler folder

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Merging folders.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Tested after path changing.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Move MMDIT block to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add joint attention and single attention to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Joint attention updated

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Remove redundant importing

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Refactor to inherit megatron module

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Adding mockdata

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* DDP training works

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux controlnet training components (not tested yet)

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux training with DDP tested on 1 GPU

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux and controlnet can now train in precached mode.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Custom FSDP path added to megatron parallel.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bug fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Typo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bypass the no-grad issue when no single layers exist

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Let the flux model's dtype autocast before FSDP wrapping

* fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."

* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* single block attention.linear_proj.bias must not require grads after refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* use cpu initialization to avoid OOM

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Set up flux training script with tp

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* SDXL fid image generation script updated.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Mcore self attention API changed

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add a dummy task encoder for raw image inputs

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Support loading crudedataset via energon dataloader

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Default save last to True

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference pipeline

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference script

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image resize mode update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Remove unnecessary bias to avoid sharding issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Handle MCore custom fsdp checkpoint load (#11621)

* general handle custom_fsdp checkpoint load

* Apply isort and black reformatting

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: artbataev <artbataev@users.noreply.github.com>

---------

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>

* Checkpoint naming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger WIP

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger works fine

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* save hint and output to image logger.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update flux controlnet training step

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add model connector and try to load from dist ckpt but failed.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Renaming and refactoring submodel configs for nemo run compatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Nemo run script works for basic testing recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added tp2 training factory

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added convergence recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux training scripts

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Controlnet inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Moving scripts to correct folder and modify headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Doc strings update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* pylint correction

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add import guard since custom fsdp is not merged to mcore yet

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add copyright headers and correct code check

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Dist loading with TP2 resolved. Convergence not tested because of Mcore incompatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Sharded state dict method tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Improve hf ckpt converting and saving logic

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update recipes

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add notebook

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: Parth Mannan <pmannan@nvidia.com>
abhinavg4 pushed a commit that referenced this pull request Jan 30, 2025
(Same squashed commit message as above, additionally Signed-off-by: Abhinav Garg <abhgarg@nvidia.com>.)
youngeunkwon0405 pushed a commit to youngeunkwon0405/NeMo that referenced this pull request Feb 10, 2025
(Same squashed commit message as above, additionally Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>.)