
fix multi lora training #156

Merged

tastelikefeet merged 2 commits into modelscope:main from tastelikefeet:fix/0414-1 on Apr 14, 2026

Conversation

@tastelikefeet
Collaborator

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

Write the detailed information that belongs to this PR.

Experiment results

Paste your experiment result here (if needed).

@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request introduces an optimizer_context to isolate the parameters of a specific LoRA adapter during optimization in Megatron, and it ensures all LoRA parameters are marked as trainable so they are covered by the MegatronDDP gradient buffers. It also adds a safety check for missing labels in the input feature template. Review feedback suggests making the use_distributed_optimizer setting configurable rather than hardcoded, to avoid silently disabling a performance feature, and refining the regex pattern in the optimizer context so it also handles parameter names that do not start with a leading dot.
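The adapter-isolation idea and the regex concern are easier to see in code. The sketch below is a hypothetical illustration, not the PR's actual implementation: the names `optimizer_context` and `adapter_name` are assumed for the example, and the `(^|\.)` prefix in the regex shows one way to accept parameter names that do not begin with a leading dot, as the review suggests.

```python
# Minimal sketch of an adapter-scoped optimizer context (illustrative only;
# the real PR code in modelscope/ms-swift may differ).
import re
from contextlib import contextmanager

from torch import nn


@contextmanager
def optimizer_context(model: nn.Module, adapter_name: str):
    """Temporarily freeze every parameter that does not belong to the given
    LoRA adapter, so a freshly built optimizer only sees that adapter."""
    # Match names like "...lora_A.<adapter>.weight"; `(^|\.)` also covers
    # parameter names that start directly with "lora_A"/"lora_B".
    pattern = re.compile(rf"(^|\.)lora_[AB]\.{re.escape(adapter_name)}(\.|$)")
    saved = {name: p.requires_grad for name, p in model.named_parameters()}
    try:
        for name, param in model.named_parameters():
            param.requires_grad = bool(pattern.search(name))
        yield model
    finally:
        # Restore the original trainable flags after the optimizer is built.
        for name, param in model.named_parameters():
            param.requires_grad = saved[name]
```

A caller would build the per-adapter optimizer inside the `with optimizer_context(model, "default"):` block, filtering on `p.requires_grad`; outside the block the original trainable flags are restored so DDP gradient buffers still cover all LoRA parameters.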

tastelikefeet merged commit 5885038 into modelscope:main on Apr 14, 2026

1 of 3 checks passed