Enable NeMo importer and loading dist CKPT for training #11927
Victor49152 merged 92 commits into main
Conversation
* Vae added and matched flux checkpoint Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Flux model added. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Copying FlowMatchEulerScheduler over Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* WIP: Start to test the pipeline forward pass Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Vae added and matched flux checkpoint Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Inference pipeline runs with offloading function Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Start to test image generation Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Decoding with VAE part has been verified. Still need to check the denoising loop. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* The inference pipeline is verified. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add arg parsers and refactoring Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Tested on multi batch sizes and prompts. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add headers Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Renaming Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Move shceduler to sampler folder Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Merging folders. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Tested after path changing. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Move MMDIT block to NeMo Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Add joint attention and single attention to NeMo Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Joint attention updated Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Remove redundant importing Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Refactor to inherit megatron module Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Adding mockdata Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* DDP training works Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Added flux controlnet training components while not tested yet Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Flux training with DDP tested on 1 GPU Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Flux and controlnet now could train on precached mode. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Custom FSDP path added to megatron parallel. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Bug fix Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Typo Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Bypass the no grad issue when no single layers exists Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Let the flux model's dtype autocast before FSDP wrapping
* fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."
* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Get rid of concat op in flux single transformer Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Get rid of concat op in flux single transformer Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* single block attention.linear_proj.bias must not require grads after refactoring Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* use cpu initialization to avoid OOM Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Set up flux training script with tp Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* SDXL fid image generation script updated. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Mcore self attention API changed Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add a dummy task encoder for raw image inputs Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Support loading crudedataset via energon dataloader Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Default save last to True Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add controlnet inference pipeline Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add controlnet inference script Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Image resize mode update Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Remove unnecessary bias to avoid sharding issue. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Handle MCore custom fsdp checkpoint load (#11621)
  * general handle custom_fsdp checkpoint load
  * Apply isort and black reformatting Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
  * Apply isort and black reformatting Signed-off-by: artbataev <artbataev@users.noreply.github.com>
  Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
  Signed-off-by: artbataev <artbataev@users.noreply.github.com>
  Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
  Co-authored-by: artbataev <artbataev@users.noreply.github.com>
* Checkpoint naming Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Image logger WIP Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Image logger works fine Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* save hint and output to image logger. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Update flux controlnet training step Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add model connector and try to load from dist ckpt but failed. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Renaming and refactoring submodel configs for nemo run compatibility Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Nemo run script works for basic testing recipe Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Added tp2 training factory Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Added convergence recipe Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Added flux training scripts Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Inference script tested Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Controlnet inference script tested Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Moving scripts to correct folder and modify headers Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Doc strings update Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* pylint correction Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Add import guard since custom fsdp is not merged to mcore yet Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add copy right headers and correct code check Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Dist loading with TP2 resolved. Convergence not tested because of Mcore incompatibility Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Sharded state dict method tested Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Improve hf ckpt converting and saving logic Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Update recipes Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add notebook Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
---------
Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: Parth Mannan <pmannan@nvidia.com>
What does this PR do?
Add a NeMo importer that converts HF checkpoints (CKPT) into NeMo distributed checkpoints (dist CKPT).
Enable loading dist CKPT for training with TP > 1 by implementing the sharded_state_dict method (see the sketch below).
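For context, the sharded_state_dict approach follows the usual Megatron-Core pattern: each module returns ShardedTensor entries describing which slice of every weight the current tensor-parallel rank owns, so the dist-checkpoint machinery can map a TP=1 import onto a TP>1 training run. The sketch below is illustrative only; the FluxWrapper class and its delegation logic are assumptions for this example, not the code added in this PR.

```python
# Illustrative sketch only: how a Megatron-style wrapper typically exposes
# sharded_state_dict so a distributed checkpoint can be loaded under TP > 1.
# FluxWrapper and its submodule layout are hypothetical, not the PR's actual classes.
from megatron.core.transformer.module import MegatronModule


class FluxWrapper(MegatronModule):
    def sharded_state_dict(self, prefix="", sharded_offsets=(), metadata=None):
        sharded_sd = {}
        # Delegate to each Megatron submodule: every tensor-parallel rank
        # contributes ShardedTensor entries only for the weight slices it owns.
        for name, module in self.named_children():
            if hasattr(module, "sharded_state_dict"):
                sharded_sd.update(
                    module.sharded_state_dict(
                        prefix=f"{prefix}{name}.",
                        sharded_offsets=sharded_offsets,
                        metadata=metadata,
                    )
                )
        return sharded_sd
```

With a method like this in place, a checkpoint produced once by the importer can be restored onto a tensor-parallel model at training time, which is what the "Dist loading with TP2 resolved" and "Sharded state dict method tested" items in the commit history refer to.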
Collection:
diffusion
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.
Additional Information