Handle MCore custom fsdp checkpoint load #11621

Merged

Victor49152 merged 3 commits into NVIDIA-NeMo:mingyuanm/flux_controlnet on Dec 17, 2024
Conversation
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Contributor

beep boop 🤖: 🚨 The following files must be fixed before merge! Your code was analyzed with PyLint. The following annotations have been identified: Thank you for improving NeMo's documentation!

Contributor

beep boop 🤖: 🙏 The following files have warnings. In case you are familiar with these, please try helping us to improve the code base. Your code was analyzed with PyLint. The following annotations have been identified: Thank you for improving NeMo's documentation!
Victor49152 added a commit that referenced this pull request on Jan 21, 2025
* Vae added and matched flux checkpoint
* Flux model added.
* Copying FlowMatchEulerScheduler over
* WIP: Start to test the pipeline forward pass
* Vae added and matched flux checkpoint
* Inference pipeline runs with offloading function
* Start to test image generation
* Decoding with VAE part has been verified. Still need to check the denoising loop.
* The inference pipeline is verified.
* Add arg parsers and refactoring
* Tested on multi batch sizes and prompts.
* Add headers
* Apply isort and black reformatting
* Renaming
* Move shceduler to sampler folder
* Merging folders.
* Apply isort and black reformatting
* Tested after path changing.
* Apply isort and black reformatting
* Move MMDIT block to NeMo
* Apply isort and black reformatting
* Add joint attention and single attention to NeMo
* Apply isort and black reformatting
* Joint attention updated
* Apply isort and black reformatting
* Remove redundant importing
* Refactor to inherit megatron module
* Adding mockdata
* DDP training works
* Added flux controlnet training components while not tested yet
* Flux training with DDP tested on 1 GPU
* Flux and controlnet now could train on precached mode.
* Custom FSDP path added to megatron parallel.
* Bug fix
* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.
* Typo
* Bypass the no grad issue when no single layers exists
* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue.
* Let the flux model's dtype autocast before FSDP wrapping
* fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."
* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically
* Get rid of concat op in flux single transformer
* Get rid of concat op in flux single transformer
* single block attention.linear_proj.bias must not require grads after refactoring
* use cpu initialization to avoid OOM
* Set up flux training script with tp
* SDXL fid image generation script updated.
* Mcore self attention API changed
* Add a dummy task encoder for raw image inputs
* Support loading crudedataset via energon dataloader
* Default save last to True
* Add controlnet inference pipeline
* Add controlnet inference script
* Image resize mode update
* Remove unnecessary bias to avoid sharding issue.
* Handle MCore custom fsdp checkpoint load (#11621): general handle custom_fsdp checkpoint load
* Checkpoint naming
* Image logger WIP
* Image logger works fine
* save hint and output to image logger.
* Update flux controlnet training step
* Add model connector and try to load from dist ckpt but failed.
* Renaming and refactoring submodel configs for nemo run compatibility
* Nemo run script works for basic testing recipe
* Added tp2 training factory
* Added convergence recipe
* Added flux training scripts
* Inference script tested
* Controlnet inference script tested
* Moving scripts to correct folder and modify headers
* Apply isort and black reformatting
* Doc strings update
* Apply isort and black reformatting
* pylint correction
* Apply isort and black reformatting
* Add import guard since custom fsdp is not merged to mcore yet
* Add copy right headers and correct code check
* Apply isort and black reformatting
* Code Scan
* Minor fix

---------
Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
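One commit in the list above adds an import guard because custom FSDP had not yet been merged into Megatron Core. A minimal sketch of that guard pattern follows; the module path and class name are hypothetical placeholders, not the confirmed MCore location, and this is not NeMo's actual code:

```python
# Guard an optional import so code can run against Megatron Core builds
# that do not yet ship custom FSDP. The module path below is a
# hypothetical placeholder, not the confirmed MCore location.
try:
    from megatron.core.distributed.custom_fsdp import FullyShardedDataParallel

    HAVE_CUSTOM_FSDP = True
except (ImportError, ModuleNotFoundError):
    FullyShardedDataParallel = None
    HAVE_CUSTOM_FSDP = False


def wrap_model(model):
    """Wrap with custom FSDP when available; otherwise return the model unchanged."""
    if not HAVE_CUSTOM_FSDP:
        return model
    return FullyShardedDataParallel(model)
```

The flag lets call sites branch (or raise a clear error) instead of failing at import time on older Megatron Core installs.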
Victor49152 added a commit that referenced this pull request on Jan 23, 2025
* Dist loading with TP2 resolved. Convergence not tested because of Mcore incompatibility
* Sharded state dict method tested
* Improve hf ckpt converting and saving logic
* Update recipes
* Add notebook
* Apply isort and black reformatting

---------
Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
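Several commits in this history deal with getting checkpoints to load across different wrappings (custom FSDP, TP2, dist ckpt). Independent of the parallelism scheme, one recurring chore when loading a checkpoint saved from a wrapped model is reconciling state-dict key prefixes with what the unwrapped module expects. A minimal, hypothetical helper as a sketch; the function name and prefix are illustrative, not NeMo's actual code:

```python
def strip_wrapper_prefix(state_dict, prefix="module."):
    """Return a copy of state_dict with a wrapper prefix removed from each key.

    Hypothetical helper: wrappers such as DDP/FSDP typically register the
    inner module under a fixed attribute, so saved keys gain a prefix that
    a plain module's load_state_dict() does not expect.
    """
    return {
        (key[len(prefix):] if key.startswith(prefix) else key): value
        for key, value in state_dict.items()
    }


# Keys without the prefix (e.g. optimizer or step counters) pass through unchanged.
checkpoint = {"module.linear.weight": 1, "module.linear.bias": 2, "step": 3}
cleaned = strip_wrapper_prefix(checkpoint)
```

After stripping, `cleaned` has keys `linear.weight`, `linear.bias`, and `step`, suitable for a plain module's `load_state_dict()`.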
parthmannan pushed a commit that referenced this pull request on Jan 28, 2025
Signed-off-by: Parth Mannan <pmannan@nvidia.com>
abhinavg4 pushed a commit that referenced this pull request on Jan 30, 2025
* Vae added and matched flux checkpoint Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Flux model added. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Copying FlowMatchEulerScheduler over Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * WIP: Start to test the pipeline forward pass Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Vae added and matched flux checkpoint Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Inference pipeline runs with offloading function Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Start to test image generation Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Decoding with VAE part has been verified. Still need to check the denoising loop. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * The inference pipeline is verified. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Add arg parsers and refactoring Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Tested on multi batch sizes and prompts. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Add headers Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Renaming Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Move shceduler to sampler folder Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Merging folders. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Tested after path changing. 
Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Move MMDIT block to NeMo Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Add joint attention and single attention to NeMo Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Joint attention updated Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Remove redundant importing Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Refactor to inherit megatron module Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Adding mockdata Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * DDP training works Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Added flux controlnet training components while not tested yet Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Flux training with DDP tested on 1 GPU Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Flux and controlnet now could train on precached mode. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Custom FSDP path added to megatron parallel. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Bug fix Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Typo Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Bypass the no grad issue when no single layers exists Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue. 
Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Let the flux model's dtype autocast before FSDP wrapping
* Fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."
* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically
* Get rid of concat op in flux single transformer
* Single-block attention.linear_proj.bias must not require grads after refactoring
* Use CPU initialization to avoid OOM
* Set up flux training script with TP
* SDXL FID image generation script updated
* MCore self-attention API changed
* Add a dummy task encoder for raw image inputs
* Support loading CrudeDataset via energon dataloader
* Default save_last to True
* Add controlnet inference pipeline
* Add controlnet inference script
* Image resize mode update
* Remove unnecessary bias to avoid sharding issue
* Handle MCore custom fsdp checkpoint load (#11621): general handling of custom_fsdp checkpoint load
* Checkpoint naming
* Image logger WIP
* Image logger works fine
* Save hint and output to image logger
* Update flux controlnet training step
* Add model connector and try to load from dist ckpt (failed at this point)
* Renaming and refactoring submodel configs for NeMo Run compatibility
* NeMo Run script works for basic testing recipe
* Added TP2 training factory
* Added convergence recipe
* Added flux training scripts
* Inference script tested
* Controlnet inference script tested
* Moving scripts to correct folder and modify headers
* Docstrings update
* Pylint correction
* Add import guard since custom FSDP is not merged to MCore yet
* Add copyright headers and correct code check
* Apply isort and black reformatting
* Code scan
* Minor fix

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: Abhinav Garg <abhgarg@nvidia.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
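The "import guard since custom FSDP is not merged to MCore yet" item follows the standard optional-dependency pattern. A minimal sketch, assuming a hypothetical `custom_fsdp` module path and `fully_shard` API (neither is the exact NeMo/Megatron-Core import):

```python
# Guard an import that may not exist in the installed Megatron-Core build.
# The module path and API below are illustrative, not the exact NeMo imports.
try:
    from megatron.core.distributed import custom_fsdp  # hypothetical path
    HAVE_CUSTOM_FSDP = True
except ImportError:
    custom_fsdp = None
    HAVE_CUSTOM_FSDP = False


def wrap_with_custom_fsdp(model):
    """Use custom FSDP when available; fail loudly otherwise."""
    if not HAVE_CUSTOM_FSDP:
        raise ImportError(
            "custom FSDP is not available in this Megatron-Core install"
        )
    return custom_fsdp.fully_shard(model)  # hypothetical API
```

Code that depends on the feature checks `HAVE_CUSTOM_FSDP` (or catches the `ImportError`) instead of crashing at import time on older MCore installs.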
abhinavg4
pushed a commit
that referenced
this pull request
Jan 30, 2025
* Vae added and matched flux checkpoint
* Flux model added
* Copying FlowMatchEulerScheduler over
* WIP: start to test the pipeline forward pass
* Inference pipeline runs with offloading function
* Start to test image generation
* Decoding with VAE has been verified; still need to check the denoising loop
* The inference pipeline is verified
* Add arg parsers and refactoring
* Tested on multiple batch sizes and prompts
* Add headers
* Renaming
* Move scheduler to sampler folder
* Merging folders
* Tested after path changing
* Move MMDiT block to NeMo
* Add joint attention and single attention to NeMo
* Joint attention updated
* Remove redundant imports
* Refactor to inherit megatron module
* Adding mock data
* DDP training works
* Added flux controlnet training components (not tested yet)
* Flux training with DDP tested on 1 GPU
* Flux and controlnet can now train in precached mode
* Custom FSDP path added to megatron parallel
* Bug fix
* A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue
* Typo
* Bypass the no-grad issue when no single layers exist
* Let the flux model's dtype autocast before FSDP wrapping
* Fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..."
* Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically
* Get rid of concat op in flux single transformer
* Single-block attention.linear_proj.bias must not require grads after refactoring
* Use CPU initialization to avoid OOM
* Set up flux training script with TP
* SDXL FID image generation script updated
* MCore self-attention API changed
* Add a dummy task encoder for raw image inputs
* Support loading CrudeDataset via energon dataloader
* Default save_last to True
* Add controlnet inference pipeline
* Add controlnet inference script
* Image resize mode update
* Remove unnecessary bias to avoid sharding issue
* Handle MCore custom fsdp checkpoint load (#11621): general handling of custom_fsdp checkpoint load
* Checkpoint naming
* Image logger WIP
* Image logger works fine
* Save hint and output to image logger
* Update flux controlnet training step
* Add model connector and try to load from dist ckpt (failed at this point)
* Renaming and refactoring submodel configs for NeMo Run compatibility
* NeMo Run script works for basic testing recipe
* Added TP2 training factory
* Added convergence recipe
* Added flux training scripts
* Inference script tested
* Controlnet inference script tested
* Moving scripts to correct folder and modify headers
* Docstrings update
* Pylint correction
* Add import guard since custom FSDP is not merged to MCore yet
* Add copyright headers and correct code check
* Dist loading with TP2 resolved; convergence not tested because of MCore incompatibility
* Sharded state dict method tested
* Improve HF checkpoint converting and saving logic
* Update recipes
* Add notebook
* Apply isort and black reformatting

---------

Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: Abhinav Garg <abhgarg@nvidia.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
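One recurring fix in the log is the "Output 0 of SliceBackward0 is a view and is being modified inplace..." RuntimeError, which PyTorch autograd raises when a tensor slice (a view) is later mutated in place. A minimal sketch of the usual remedy, cloning the slice so it owns its storage; this is illustrative, not the actual flux transformer code:

```python
import torch


def take_head(x):
    # x[:, :2] alone returns a view of x; mutating that view in place is
    # what triggers the "... is a view and is being modified inplace"
    # RuntimeError under autograd. .clone() materializes the slice into
    # its own storage, so later in-place ops are safe.
    return x[:, :2].clone()


x = torch.randn(3, 4, requires_grad=True)
h = take_head(x)
h += 1.0            # safe: h owns its storage, x is untouched
h.sum().backward()  # gradients still flow back to the sliced columns
```

The cost is one extra copy of the slice; the alternative (an out-of-place op such as `h = h + 1.0`) avoids the error the same way.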
youngeunkwon0405
pushed a commit
to youngeunkwon0405/NeMo
that referenced
this pull request
Feb 10, 2025
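The "use CPU initialization to avoid OOM" item in the log refers to allocating the (large) model's parameters in host RAM first, so a sharding wrapper such as FSDP can later move only each rank's shard to the GPU. A generic sketch with a toy module (not the actual flux model or NeMo API):

```python
import torch
import torch.nn as nn


def build_on_cpu(hidden: int = 64) -> nn.Module:
    # Allocate parameters in host RAM instead of on the GPU; a sharding
    # wrapper can later move each rank's shard to its device, so no single
    # GPU ever has to hold the full unsharded model during init.
    with torch.device("cpu"):
        return nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.GELU(),
            nn.Linear(hidden, hidden),
        )


model = build_on_cpu()
```

`torch.device(...)` as a context manager (PyTorch 2.0+) redirects default tensor allocation, so the pattern works without threading a `device=` argument through every submodule.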
youngeunkwon0405
pushed a commit
to youngeunkwon0405/NeMo
that referenced
this pull request
Feb 10, 2025
Victor49152
added a commit
that referenced
this pull request
Feb 28, 2025
* Vae added and matched flux checkpoint Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Flux model added. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Copying FlowMatchEulerScheduler over Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * WIP: Start to test the pipeline forward pass Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Vae added and matched flux checkpoint Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Inference pipeline runs with offloading function Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Start to test image generation Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Decoding with VAE part has been verified. Still need to check the denoising loop. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * The inference pipeline is verified. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Add arg parsers and refactoring Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Tested on multi batch sizes and prompts. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Add headers Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Renaming Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Move shceduler to sampler folder Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Merging folders. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Tested after path changing. 
Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Move MMDIT block to NeMo Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Add joint attention and single attention to NeMo Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Joint attention updated Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com> * Remove redundant importing Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Refactor to inherit megatron module Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Adding mockdata Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * DDP training works Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Added flux controlnet training components while not tested yet Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Flux training with DDP tested on 1 GPU Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Flux and controlnet now could train on precached mode. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Custom FSDP path added to megatron parallel. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Bug fix Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Typo Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Bypass the no grad issue when no single layers exists Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * A hacky way to wrap frozen flux into FSDP to reproduce illegal memory issue. 
Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Let the flux model's dtype autocast before FSDP wrapping * fix RuntimeError: "Output 0 of SliceBackward0 is a view and is being modified inplace..." * Add a wrapper to flux controlnet so they are all wrapped into FSDP automatically Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Get rid of concat op in flux single transformer Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Get rid of concat op in flux single transformer Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * single block attention.linear_proj.bias must not require grads after refactoring Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * use cpu initialization to avoid OOM Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Set up flux training script with tp Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * SDXL fid image generation script updated. Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Mcore self attention API changed Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Add a dummy task encoder for raw image inputs Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Support loading crudedataset via energon dataloader Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Default save last to True Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Add controlnet inference pipeline Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Add controlnet inference script Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Image resize mode update Signed-off-by: mingyuanm <mingyuanm@nvidia.com> * Remove unnecessary bias to avoid sharding issue. 
Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Handle MCore custom fsdp checkpoint load (#11621)
* general handle custom_fsdp checkpoint load
* Apply isort and black reformatting Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
* Apply isort and black reformatting Signed-off-by: artbataev <artbataev@users.noreply.github.com>
---------
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
* Checkpoint naming Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Image logger WIP Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Image logger works fine Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* save hint and output to image logger. Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Update flux controlnet training step Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add model connector and try to load from dist ckpt but failed.
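The "Handle MCore custom fsdp checkpoint load" commit concerns a key-layout problem: a state dict saved from an FSDP-wrapped model carries wrapper prefixes that an unwrapped model does not expect, and vice versa. A minimal, framework-free sketch of the remapping idea (the helper name and the `module.` prefix are illustrative assumptions, not NeMo's actual API):

```python
def strip_wrapper_prefix(state_dict, prefix="module."):
    """Remove a wrapper prefix from state-dict keys so a checkpoint
    saved from a wrapped model loads into an unwrapped one.

    Keys without the prefix pass through unchanged, so the function is
    safe to apply to already-clean checkpoints.
    """
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }
```

Real distributed checkpointing also has to reconcile sharding metadata, but prefix normalization is the first step that makes wrapped and unwrapped key spaces line up.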
Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Renaming and refactoring submodel configs for nemo run compatibility Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Nemo run script works for basic testing recipe Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Added tp2 training factory Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Added convergence recipe Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Added flux training scripts Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Inference script tested Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Controlnet inference script tested Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Moving scripts to correct folder and modify headers Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Doc strings update Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* pylint correction Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Add import guard since custom fsdp is not merged to mcore yet Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add copy right headers and correct code check Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Dist loading with TP2 resolved.
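The "Add import guard since custom fsdp is not merged to mcore yet" commit uses a standard pattern for depending on an optional upstream feature: probe for it at import time and degrade gracefully when it is absent. A hedged sketch of the pattern — the module path and class name below are assumptions for illustration, not the real Megatron-Core layout:

```python
# Import guard: custom FSDP is not available in every Megatron-Core
# version, so probe for it instead of importing unconditionally.
# NOTE: the import path below is hypothetical.
try:
    from megatron.core.distributed.custom_fsdp import FullyShardedDataParallel
    HAVE_CUSTOM_FSDP = True
except ImportError:
    FullyShardedDataParallel = None
    HAVE_CUSTOM_FSDP = False


def wrap_model(model):
    """Wrap with custom FSDP when available; otherwise return the model
    unchanged so the rest of the code path keeps working."""
    if HAVE_CUSTOM_FSDP:
        return FullyShardedDataParallel(model)
    return model
```

Guarding at import time (rather than at call time) means a missing feature is detected once, and code that never exercises the FSDP path is unaffected.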
Convergence not tested because of Mcore incompatibility Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Sharded state dict method tested Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Improve hf ckpt converting and saving logic Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Update recipes Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add notebook Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Add CI recipe file Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Update recipe Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Refactor names Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Add guard Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
* Apply isort and black reformatting Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
* Fix Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
* Apply isort and black reformatting Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
* fix known issues Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
* Add import guard Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
* fix issues importing Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
* Apply isort and black reformatting Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
* Update flux_535m.py Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
* Adding necessary docstrings Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Pylint fix Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
* Renaming and fix tutorial Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
* Apply isort and black reformatting Signed-off-by: Victor49152
<Victor49152@users.noreply.github.com>
* Update and test the tutorial Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
---------
Signed-off-by: mingyuanm <mingyuanm@nvidia.com>
Signed-off-by: Victor49152 <Victor49152@users.noreply.github.com>
Signed-off-by: shjwudp <shjwudp@users.noreply.github.com>
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
Signed-off-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Signed-off-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
Signed-off-by: Mingyuan Ma <111467530+Victor49152@users.noreply.github.com>
Co-authored-by: mingyuanm <mingyuanm@nvidia.com>
Co-authored-by: Victor49152 <Victor49152@users.noreply.github.com>
Co-authored-by: jianbinc <shjwudp@gmail.com>
Co-authored-by: shjwudp <shjwudp@users.noreply.github.com>
Co-authored-by: artbataev <artbataev@users.noreply.github.com>
Co-authored-by: Mingyuan Ma <111467530+Victor49152@users.noreply.github.com>
Co-authored-by: yaoyu-33 <yaoyu-33@users.noreply.github.com>
Agoniii pushed a commit to Agoniii/NeMo that referenced this pull request on Mar 6, 2025
What does this PR do?
Generalizes checkpoint loading so that checkpoints load correctly when the model is wrapped with Megatron-Core's custom FSDP.
Collection: [Note which collection this PR will affect]
Changelog
Usage
# Add a code snippet demonstrating how to use this
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The contributor guidelines list specific people who can review PRs to various areas.
Additional Information