
Handle MCore custom fsdp checkpoint load#11621

Merged
Victor49152 merged 3 commits into NVIDIA-NeMo:mingyuanm/flux_controlnet from shjwudp:flux_controlnet
Dec 17, 2024

Conversation

@shjwudp
Contributor

@shjwudp shjwudp commented Dec 17, 2024

What does this PR do?

Add a one line overview of what this PR aims to accomplish.

Collection: [Note which collection this PR will affect]

Changelog

  • Add specific line by line info of high level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 
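The template's usage section was left blank, so here is a minimal, purely illustrative sketch of the idea in this PR's title; none of these names are the actual NeMo/MCore API. The premise: a custom-FSDP checkpoint can lay out state-dict keys differently from the bare module (e.g. under a wrapper prefix), so the load path detects that case and normalizes the keys before loading.

```python
# Hypothetical sketch, NOT the real NeMo/MCore API. Illustrates a load path
# that handles both plain and custom-FSDP checkpoint layouts, where the
# FSDP-wrapped module is assumed to store keys under an extra "module." prefix.

def normalize_fsdp_keys(state_dict, prefix="module."):
    """Strip a hypothetical FSDP wrapper prefix so keys match the bare module."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

def load_checkpoint_state(model_keys, state_dict, custom_fsdp=False):
    """Return the state dict to load, adapting custom-FSDP checkpoints first."""
    if custom_fsdp:
        state_dict = normalize_fsdp_keys(state_dict)
    missing = set(model_keys) - set(state_dict)
    if missing:
        raise KeyError(f"missing keys: {sorted(missing)}")
    return state_dict

ckpt = {"module.linear.weight": [1.0], "module.linear.bias": [0.0]}
loaded = load_checkpoint_state(["linear.weight", "linear.bias"], ckpt, custom_fsdp=True)
print(sorted(loaded))  # ['linear.bias', 'linear.weight']
```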

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove the label and add it again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries?
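For the optional-install item above, an import guard typically looks like the following sketch (`apex` is used here only as an illustrative optional dependency; the flag name and fallback are assumptions, not this PR's code):

```python
# A typical import-guard pattern for an optional dependency: record whether
# the import succeeded, and branch on that flag at call time instead of
# letting the import error surface to users who never need the fast path.
try:
    import apex  # optional dependency; may be absent

    HAVE_APEX = True
except (ImportError, ModuleNotFoundError):
    HAVE_APEX = False

def fused_op(x):
    """Use the optional accelerated path when available, else fall back."""
    if not HAVE_APEX:
        return x  # plain fallback path
    return x  # the apex-accelerated path would be called here

print(fused_op(3))  # 3
```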

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

shjwudp and others added 2 commits December 17, 2024 11:26
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
@shjwudp shjwudp changed the title from "general handle custom_fsdp checkpoint load" to "Handle MCore custom fsdp checkpoint load" Dec 17, 2024
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
@github-actions
Contributor

beep boop 🤖: 🚨 The following files must be fixed before merge!


Your code was analyzed with PyLint. The following annotations have been identified:

************* Module nemo.collections.diffusion.flux_controlnet_infer
nemo/collections/diffusion/flux_controlnet_infer.py:28:0: C0301: Line too long (188/119) (line-too-long)
nemo/collections/diffusion/flux_controlnet_infer.py:26:0: C0116: Missing function or method docstring (missing-function-docstring)

-----------------------------------
Your code has been rated at 9.59/10

Thank you for improving NeMo's documentation!

@github-actions
Contributor

beep boop 🤖: 🙏 The following files have warnings. In case you are familiar with these, please try helping us to improve the code base.


Your code was analyzed with PyLint. The following annotations have been identified:

************* Module nemo.collections.diffusion.flux_controlnet_training
nemo/collections/diffusion/flux_controlnet_training.py:36:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/flux_controlnet_training.py:18:0: W0611: Unused torch.nn imported as nn (unused-import)
nemo/collections/diffusion/flux_controlnet_training.py:22:0: W0611: Unused AutoProcessor imported from transformers (unused-import)
nemo/collections/diffusion/flux_controlnet_training.py:30:0: W0611: Unused Utils imported from nemo.collections.diffusion.utils.mcore_parallel_utils (unused-import)
************* Module nemo.collections.diffusion.flux_training
nemo/collections/diffusion/flux_training.py:36:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/flux_training.py:18:0: W0611: Unused torch.nn imported as nn (unused-import)
nemo/collections/diffusion/flux_training.py:22:0: W0611: Unused AutoProcessor imported from transformers (unused-import)
nemo/collections/diffusion/flux_training.py:30:0: W0611: Unused Utils imported from nemo.collections.diffusion.utils.mcore_parallel_utils (unused-import)
************* Module nemo.collections.diffusion.models.flux.model
nemo/collections/diffusion/models/flux/model.py:50:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:71:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/model.py:95:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:101:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/model.py:110:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/model.py:159:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:224:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/model.py:246:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:261:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:264:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:280:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:297:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:300:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:303:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:307:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:312:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:362:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:395:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:486:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:490:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:497:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/model.py:42:0: W0611: Unused megatron_parallel imported from nemo.lightning as mp (unused-import)
************* Module nemo.collections.diffusion.models.flux.pipeline
nemo/collections/diffusion/models/flux/pipeline.py:65:0: C0301: Line too long (160/119) (line-too-long)
nemo/collections/diffusion/models/flux/pipeline.py:373:0: C0301: Line too long (165/119) (line-too-long)
nemo/collections/diffusion/models/flux/pipeline.py:382:0: C0301: Line too long (171/119) (line-too-long)
nemo/collections/diffusion/models/flux/pipeline.py:36:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:51:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:69:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:157:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:222:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:227:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:346:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:386:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:394:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux/pipeline.py:401:4: C0116: Missing function or method docstring (missing-function-docstring)
************* Module layers
nemo/collections/diffusion/models/flux_controlnet/layers.py:53:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/layers.py:17:0: W0611: Unused import torch (unused-import)
************* Module model
nemo/collections/diffusion/models/flux_controlnet/model.py:244:0: C0301: Line too long (120/119) (line-too-long)
nemo/collections/diffusion/models/flux_controlnet/model.py:24:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/model.py:30:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/model.py:42:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux_controlnet/model.py:69:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux_controlnet/model.py:128:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/model.py:137:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/model.py:217:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux_controlnet/model.py:230:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux_controlnet/model.py:2:0: W0611: Unused List imported from typing (unused-import)
nemo/collections/diffusion/models/flux_controlnet/model.py:4:0: W0611: Unused numpy imported as np (unused-import)
nemo/collections/diffusion/models/flux_controlnet/model.py:11:0: W0611: Unused AdaLNContinuous imported from nemo.collections.diffusion.models.dit.dit_layer_spec (unused-import)
************* Module pipeline
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:52:0: C0301: Line too long (159/119) (line-too-long)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:17:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:37:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:56:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:83:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:158:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:204:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:209:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:212:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:1:0: W0611: Unused Any imported from typing (unused-import)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:1:0: W0611: Unused Callable imported from typing (unused-import)
nemo/collections/diffusion/models/flux_controlnet/pipeline.py:1:0: W0611: Unused Tuple imported from typing (unused-import)
************* Module nemo.collections.diffusion.utils.flux_pipeline_utils
nemo/collections/diffusion/utils/flux_pipeline_utils.py:15:0: W0611: Unused dataclass imported from dataclasses (unused-import)
nemo/collections/diffusion/utils/flux_pipeline_utils.py:16:0: W0611: Unused Callable imported from typing (unused-import)
nemo/collections/diffusion/utils/flux_pipeline_utils.py:18:0: W0611: Unused import torch (unused-import)
nemo/collections/diffusion/utils/flux_pipeline_utils.py:19:0: W0611: Unused TransformerConfig imported from megatron.core.transformer.transformer_config (unused-import)
nemo/collections/diffusion/utils/flux_pipeline_utils.py:24:0: W0611: Unused io imported from nemo.lightning (unused-import)
************* Module nemo.lightning._strategy_lib
nemo/lightning/_strategy_lib.py:574:0: C0301: Line too long (130/119) (line-too-long)
nemo/lightning/_strategy_lib.py:35:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/_strategy_lib.py:36:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/_strategy_lib.py:139:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/_strategy_lib.py:166:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/_strategy_lib.py:202:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/_strategy_lib.py:515:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/_strategy_lib.py:599:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/_strategy_lib.py:612:4: C0115: Missing class docstring (missing-class-docstring)
************* Module nemo.lightning.megatron_parallel
nemo/lightning/megatron_parallel.py:240:0: C0301: Line too long (127/119) (line-too-long)
nemo/lightning/megatron_parallel.py:241:0: C0301: Line too long (140/119) (line-too-long)
nemo/lightning/megatron_parallel.py:242:0: C0301: Line too long (130/119) (line-too-long)
nemo/lightning/megatron_parallel.py:551:0: C0301: Line too long (129/119) (line-too-long)
nemo/lightning/megatron_parallel.py:558:0: C0301: Line too long (135/119) (line-too-long)
nemo/lightning/megatron_parallel.py:827:0: C0301: Line too long (137/119) (line-too-long)
nemo/lightning/megatron_parallel.py:1057:0: C0301: Line too long (136/119) (line-too-long)
nemo/lightning/megatron_parallel.py:1624:0: C0301: Line too long (128/119) (line-too-long)
nemo/lightning/megatron_parallel.py:1663:0: C0301: Line too long (146/119) (line-too-long)
nemo/lightning/megatron_parallel.py:66:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:67:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:69:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:104:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:108:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:308:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:332:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:358:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:384:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:520:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:566:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:570:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:635:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:671:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:678:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:712:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:720:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:736:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:763:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:775:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:797:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1317:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1492:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1498:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1504:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1508:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1513:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1518:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1546:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1592:8: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1614:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1687:0: C0115: Missing class docstring (missing-class-docstring)
nemo/lightning/megatron_parallel.py:1726:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:1740:0: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/megatron_parallel.py:51:0: W0611: Unused TrainerFn imported from pytorch_lightning.trainer.states (unused-import)

-----------------------------------
Your code has been rated at 9.36/10

Thank you for improving NeMo's documentation!
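For contributors addressing reports like the ones above, the two most frequent findings (W0611 unused-import and C0116 missing-function-docstring) are typically fixed as in this illustrative sketch, which is not taken from this PR's diff:

```python
# Before (triggers both warnings):
#   import torch          # never used        -> W0611: unused-import
#   def forward(x):       # no docstring      -> C0116: missing-function-docstring
#       return x * 2
#
# After: the unused import is deleted and a one-line docstring is added.

def forward(x):
    """Double the input; the docstring satisfies PyLint's C0116 check."""
    return x * 2

print(forward(21))  # 42
```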

@Victor49152 Victor49152 merged commit 78eed47 into NVIDIA-NeMo:mingyuanm/flux_controlnet Dec 17, 2024
Victor49152 added a commit that referenced this pull request Jan 21, 2025
* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux model added.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Copying FlowMatchEulerScheduler over

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* WIP: Start to test the pipeline forward pass

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference pipeline runs with offloading function

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Start to test image generation

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Decoding with VAE part has been verified. Still need to check the denoising loop.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* The inference pipeline is verified.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add arg parsers and refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Tested on multi batch sizes and prompts.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Renaming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Move scheduler to sampler folder

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Merging folders.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Tested after path changing.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Move MMDIT block to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add joint attention and single attention to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Joint attention updated

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Remove redundant importing

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Refactor to inherit megatron module

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Adding mockdata

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* DDP training works

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux controlnet training components while not tested yet

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux training with DDP tested on 1 GPU

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux and controlnet now could train on precached mode.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Custom FSDP path added to megatron parallel.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bug fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Typo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bypass the no grad issue when no single layers exists

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Let the flux model's dtype autocast before FSDP wrapping

* fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."

* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* single block attention.linear_proj.bias must not require grads after refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* use cpu initialization to avoid OOM

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Set up flux training script with tp

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* SDXL fid image generation script updated.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Mcore self attention API changed

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add a dummy task encoder for raw image inputs

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Support loading crudedataset via energon dataloader

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Default save last to True

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference pipeline

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference script

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image resize mode update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Remove unnecessary bias to avoid sharding issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Handle MCore custom fsdp checkpoint load (#11621)

* general handle custom_fsdp checkpoint load

* Apply isort and black reformatting

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: artbataev <artbataev@users.noreply.github.com>

---------

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>

* Checkpoint naming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger WIP

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger works fine

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* save hint and output to image logger.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update flux controlnet training step

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add model connector and try to load from dist ckpt but failed.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Renaming and refactoring submodel configs for nemo run compatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Nemo run script works for basic testing recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added tp2 training factory

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added convergence recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux training scripts

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Controlnet inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Moving scripts to correct folder and modify headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Doc strings update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* pylint correction

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add import guard since custom fsdp is not merged to mcore yet

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add copy right headers and correct code check

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Code Scan

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Minor fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Victor49152 added a commit that referenced this pull request Jan 23, 2025
* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux model added.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Copying FlowMatchEulerScheduler over

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* WIP: Start to test the pipeline forward pass

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference pipeline runs with offloading function

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Start to test image generation

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Decoding with VAE part has been verified. Still need to check the denoising loop.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* The inference pipeline is verified.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add arg parsers and refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Tested on multi batch sizes and prompts.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Renaming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Move shceduler to sampler folder

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Merging folders.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Tested after path changing.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Move MMDIT block to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add joint attention and single attention to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Joint attention updated

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Remove redundant importing

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Refactor to inherit megatron module

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Adding mockdata

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* DDP training works

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux controlnet training components while not tested yet

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux training with DDP tested on 1 GPU

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux and controlnet now could train on precached mode.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Custom FSDP path added to megatron parallel.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bug fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Typo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bypass the no grad issue when no single layers exists

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Let the flux model's dtype autocast before FSDP wrapping

* fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."

* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* single block attention.linear_proj.bias must not require grads after refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* use cpu initialization to avoid OOM

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Set up flux training script with tp

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* SDXL fid image generation script updated.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Mcore self attention API changed

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add a dummy task encoder for raw image inputs

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Support loading crudedataset via energon dataloader

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Default save last to True

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference pipeline

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference script

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image resize mode update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Remove unnecessary bias to avoid sharding issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Handle MCore custom fsdp checkpoint load (#11621)

* Generally handle custom_fsdp checkpoint load

* Apply isort and black reformatting

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: artbataev <artbataev@users.noreply.github.com>

---------

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>

* Checkpoint naming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger WIP

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger works fine

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* save hint and output to image logger.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update flux controlnet training step

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add model connector and attempt to load from dist ckpt (not yet working).

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Renaming and refactoring submodel configs for NeMo Run compatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* NeMo Run script works for basic testing recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added tp2 training factory

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added convergence recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux training scripts

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Controlnet inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Moving scripts to correct folder and modify headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Doc strings update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* pylint correction

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add import guard since custom FSDP is not yet merged into MCore

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
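The import guard above follows NeMo's usual pattern for optional dependencies: attempt the import once, record a flag, and fail with a clear error only when the feature is actually requested. The module path below is an assumption (custom FSDP had not been merged into `megatron.core` at the time of this PR), so treat this as a sketch of the pattern, not the exact guard.

```python
# Import-guard pattern for an optional/unmerged dependency.
try:
    # Hypothetical path -- the real location depends on the MCore branch in use.
    from megatron.core.distributed.custom_fsdp import FullyShardedDataParallel

    HAVE_CUSTOM_FSDP = True
except (ImportError, ModuleNotFoundError):
    FullyShardedDataParallel = None
    HAVE_CUSTOM_FSDP = False


def wrap_model(module, use_custom_fsdp=False):
    """Wrap `module` with custom FSDP when requested and available."""
    if use_custom_fsdp:
        if not HAVE_CUSTOM_FSDP:
            raise ImportError(
                "Custom FSDP was requested, but this megatron.core build "
                "does not provide it."
            )
        return FullyShardedDataParallel(module)
    return module  # fall back to the unwrapped module
```

Guarding at import time (rather than inside every call site) keeps the rest of the code free of feature checks.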

* Add copyright headers and fix code checks

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Dist loading with TP2 resolved. Convergence not tested because of MCore incompatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Sharded state dict method tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Improve HF ckpt conversion and saving logic

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
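HF-checkpoint conversion of the kind referenced above is essentially a key-remapping pass over the state dict. The mapping entries below are hypothetical placeholders (the real converter in this PR covers many more parameters and also reshapes some tensors); the sketch only shows the remap-and-rebuild shape of such a converter.

```python
import re

# Hypothetical name mapping: HF-style keys (left) are rewritten to
# MCore-style keys (right). Real converters carry a much longer table.
PATTERNS = [
    (re.compile(r"^transformer_blocks\.(\d+)\.attn\.to_q\."),
     r"double_blocks.\1.self_attention.linear_q."),
    (re.compile(r"^single_transformer_blocks\.(\d+)\."),
     r"single_blocks.\1."),
]


def remap_key(key):
    """Rewrite one HF key to its MCore-style name, or pass it through."""
    for pat, repl in PATTERNS:
        if pat.match(key):
            return pat.sub(repl, key)
    return key  # keys with no rule are kept as-is


def convert_state_dict(hf_state_dict):
    """Return a new state dict with all keys remapped."""
    return {remap_key(k): v for k, v in hf_state_dict.items()}
```

A converter structured this way is easy to extend: adding support for a new parameter group is just another `(pattern, replacement)` row.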

* Update recipes

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add notebook

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
parthmannan pushed a commit that referenced this pull request Jan 28, 2025
abhinavg4 pushed a commit that referenced this pull request Jan 30, 2025
abhinavg4 pushed a commit that referenced this pull request Jan 30, 2025
youngeunkwon0405 pushed a commit to youngeunkwon0405/NeMo that referenced this pull request Feb 10, 2025
…#11794)

* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux model added.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Copying FlowMatchEulerScheduler over

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* WIP: Start to test the pipeline forward pass

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference pipeline runs with offloading function

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Start to test image generation

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Decoding with VAE part has been verified. Still need to check the denoising loop.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* The inference pipeline is verified.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add arg parsers and refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Tested on multi batch sizes and prompts.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Renaming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Move shceduler to sampler folder

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Merging folders.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Tested after path changing.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Move MMDIT block to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add joint attention and single attention to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Joint attention updated

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Remove redundant importing

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Refactor to inherit megatron module

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Adding mockdata

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* DDP training works

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux controlnet training components while not tested yet

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux training with DDP tested on 1 GPU

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux and controlnet can now train in precached mode.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Custom FSDP path added to megatron parallel.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bug fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Typo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bypass the no-grad issue when no single layers exist

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Let the flux model's dtype autocast before FSDP wrapping

* fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."
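For context, the error class named in the commit above can be reproduced with a minimal sketch; the tensors here are illustrative and unrelated to the actual Flux modules. Slicing returns an autograd view, and writing to that view in place raises a `RuntimeError`; a common fix is to `.clone()` the slice (or use an out-of-place op) so the write no longer targets a view.

```python
import torch

# Slicing a leaf tensor that requires grad returns a view; modifying that
# view in place is rejected by autograd.
x = torch.randn(3, requires_grad=True)
v = x[:2]  # a view created by slicing

try:
    v += 1  # in-place write to the view -> RuntimeError
    raised = False
except RuntimeError:
    raised = True

# One common fix: clone the slice so the update happens on a tensor that
# owns its own storage instead of on an autograd view.
v_fixed = x[:2].clone()
v_fixed += 1  # safe: v_fixed is not a view
```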

* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* single block attention.linear_proj.bias must not require grads after refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* use cpu initialization to avoid OOM

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
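The CPU-initialization idea above can be sketched as follows: construct the weights on CPU first, then move the model to the accelerator, so peak GPU memory at build time stays low. This is a generic illustration, not the NeMo code; the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Build parameters on CPU so GPU memory is not consumed during construction.
with torch.device("cpu"):
    model = nn.Sequential(nn.Linear(256, 256), nn.Linear(256, 256))

# Move to the accelerator only once the model exists.
target = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(target)
```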

* Set up flux training script with tp

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* SDXL fid image generation script updated.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Mcore self attention API changed

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add a dummy task encoder for raw image inputs

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Support loading crudedataset via energon dataloader

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Default save last to True

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference pipeline

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference script

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image resize mode update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Remove unnecessary bias to avoid sharding issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Handle MCore custom fsdp checkpoint load (NVIDIA-NeMo#11621)

* general handle custom_fsdp checkpoint load

* Apply isort and black reformatting

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: artbataev <artbataev@users.noreply.github.com>

---------

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>

* Checkpoint naming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger WIP

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger works fine

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* save hint and output to image logger.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update flux controlnet training step

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add model connector and try to load from dist ckpt but failed.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Renaming and refactoring submodel configs for nemo run compatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Nemo run script works for basic testing recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added tp2 training factory

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added convergence recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux training scripts

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Controlnet inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Moving scripts to correct folder and modify headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Doc strings update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* pylint correction

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add import guard since custom fsdp is not merged to mcore yet

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
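The import guard mentioned above typically looks like the sketch below. The module path and class name are hypothetical, since custom FSDP had not yet been merged into Megatron-Core when this was written; only the guard pattern itself is the point.

```python
# Guard an optional dependency: succeed or fail at import time, and give a
# clear error only when the guarded feature is actually requested.
try:
    from megatron.core.distributed.custom_fsdp import FullyShardedDataParallel  # noqa: F401

    HAVE_CUSTOM_FSDP = True
except (ImportError, ModuleNotFoundError):
    HAVE_CUSTOM_FSDP = False


def fully_shard(module):
    """Wrap `module` with custom FSDP, failing clearly when unavailable."""
    if not HAVE_CUSTOM_FSDP:
        raise ImportError("custom FSDP is not available in this Megatron-Core build")
    return FullyShardedDataParallel(module)
```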

* Add copyright headers and correct code check

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Code Scan

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Minor fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
youngeunkwon0405 pushed a commit to youngeunkwon0405/NeMo that referenced this pull request Feb 10, 2025
…11927)

* Dist loading with TP2 resolved. Convergence not tested because of Mcore incompatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Sharded state dict method tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Improve hf ckpt converting and saving logic

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update recipes

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add notebook

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
Victor49152 added a commit that referenced this pull request Feb 28, 2025
* Add CI recipe file

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Refactor names

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add guard

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Fix

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* fix known issues

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Add import guard

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix issues importing

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Update flux_535m.py

Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>

* Adding necessary docstrings

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Pylint fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Renaming and fix tutorial

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Update and test the tutorial

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
Signed-off-by: Mingyuan Ma <111467530+Victor49152@users.noreply.github.com>
Co-authored-by: mingyuanm <mingyuanm@nvidia.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Mingyuan Ma <111467530+Victor49152@users.noreply.github.com>
Co-authored-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Agoniii pushed a commit to Agoniii/NeMo that referenced this pull request Mar 6, 2025
* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux model added.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Copying FlowMatchEulerScheduler over

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* WIP: Start to test the pipeline forward pass

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Vae added and matched flux checkpoint

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference pipeline runs with offloading function

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Start to test image generation

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Decoding with VAE part has been verified. Still need to check the denoising loop.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* The inference pipeline is verified.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add arg parsers and refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Tested on multi batch sizes and prompts.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Renaming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Move shceduler to sampler folder

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Merging folders.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Tested after path changing.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Move MMDIT block to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add joint attention and single attention to NeMo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Joint attention updated

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Remove redundant importing

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Refactor to inherit megatron module

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Adding mockdata

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* DDP training works

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux controlnet training components while not tested yet

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux training with DDP tested on 1 GPU

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Flux and controlnet now could train on precached mode.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Custom FSDP path added to megatron parallel.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bug fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Typo

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Bypass the no grad issue when no single layers exists

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Let the flux model's dtype autocast before FSDP wrapping

* fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."

* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Get rid of concat op in flux single transformer

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* single block attention.linear_proj.bias must not require grads after refactoring

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* use cpu initialization to avoid OOM

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Set up flux training script with tp

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* SDXL fid image generation script updated.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Mcore self attention API changed

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add a dummy task encoder for raw image inputs

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Support loading crudedataset via energon dataloader

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Default save last to True

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference pipeline

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add controlnet inference script

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image resize mode update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Remove unnecessary bias to avoid sharding issue.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Handle MCore custom fsdp checkpoint load (NVIDIA-NeMo#11621)

* general handle custom_fsdp checkpoint load

* Apply isort and black reformatting

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: artbataev <artbataev@users.noreply.github.com>

---------

Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>

* Checkpoint naming

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger WIP

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Image logger works fine

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* save hint and output to image logger.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update flux controlnet training step

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add model connector and try to load from dist ckpt but failed.

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Renaming and refactoring submodel configs for nemo run compatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Nemo run script works for basic testing recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added tp2 training factory

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added convergence recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Added flux training scripts

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Controlnet inference script tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Moving scripts to correct folder and modify headers

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Doc strings update

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* pylint correction

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add import guard since custom fsdp is not merged to mcore yet

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
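The import guard mentioned in the commit above follows the usual pattern for an optional dependency: attempt the import, record whether it succeeded in a flag, and fail with a clear error only when the feature is actually used. A minimal sketch — the module path and symbol name are hypothetical, since custom FSDP had not yet merged into Megatron-Core:

```python
try:
    # Hypothetical import path; custom FSDP may not exist in the installed Megatron-Core.
    from megatron.core.distributed.custom_fsdp import FullyShardedDataParallel

    HAVE_CUSTOM_FSDP = True
except (ImportError, ModuleNotFoundError):
    FullyShardedDataParallel = None
    HAVE_CUSTOM_FSDP = False


def wrap_model(model):
    """Wrap the model with custom FSDP, failing loudly if the import was unavailable."""
    if not HAVE_CUSTOM_FSDP:
        raise ImportError("custom FSDP is not available in this Megatron-Core install")
    return FullyShardedDataParallel(model)
```

Guarding at the call site rather than at import time lets the rest of the package import cleanly even when the optional library is missing.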

* Add copyright headers and correct code check

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Dist loading with TP2 resolved. Convergence not tested because of MCore incompatibility

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Sharded state dict method tested

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Improve hf ckpt converting and saving logic

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update recipes

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add notebook

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Add CI recipe file

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Update recipe

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Refactor names

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Add guard

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Fix

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* fix known issues

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Add import guard

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* fix issues importing

Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>

* Apply isort and black reformatting

Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>

* Update flux_535m.py

Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>

* Adding necessary docstrings

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Pylint fix

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Renaming and fix tutorial

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>

* Update and test the tutorial

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
Signed-off-by: Mingyuan Ma <111467530+Victor49152@users.noreply.github.com>
Co-authored-by: mingyuanm <mingyuanm@nvidia.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Mingyuan Ma <111467530+Victor49152@users.noreply.github.com>
Co-authored-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Signed-off-by: Xue Huang <xueh@nvidia.com>