Merged
2 changes: 1 addition & 1 deletion docs/source/Instruction/Command-line-parameters.md
@@ -466,7 +466,7 @@ Vera uses the three parameters `target_modules`, `target_regex`, and `modules_to_save`,
- add_version: Adds an extra directory `'<version>-<timestamp>'` under output_dir to prevent overwriting weights; defaults to True.
- check_model: Checks local model files for corruption or modification and issues a warning; defaults to True. **If in an offline environment, please set this to False.**
- 🔥create_checkpoint_symlink: Creates additional checkpoint symlinks to facilitate writing automated training scripts. The symlink paths for best_model and last_model are f'{output_dir}/best' and f'{output_dir}/last' respectively.
- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down the training of short texts), thereby improving GPU utilization and keeping memory usage stable. When using `--attn_impl flash_attn`, different sequences within a packed sample remain independent and invisible to each other. This parameter defaults to `False` and currently supports CPT/SFT/DPO/KTO/GKD. Note: **packing reduces the number of dataset samples; please adjust gradient accumulation steps and learning rate accordingly**.
- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length using the `padding_free` approach (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down the training of short texts), thereby improving GPU utilization and keeping memory usage stable. When using `--attn_impl flash_attn`, different sequences within a packed sample remain independent and invisible to each other. This parameter defaults to `False` and currently supports CPT/SFT/DPO/KTO/GKD. Note: **packing reduces the number of dataset samples; please adjust gradient accumulation steps and learning rate accordingly**.
- "ms-swift>=3.12" adds support for packing in embedding/reranker/seq_cls tasks.
- packing_length: The length used for packing. Defaults to None, in which case it is set to max_length.
- packing_num_proc: Number of processes for packing; defaults to 1. Note that different values of `packing_num_proc` produce different packed datasets. (This parameter does not take effect during streaming packing.) There is usually no need to modify this value, as packing is much faster than tokenization.
2 changes: 1 addition & 1 deletion docs/source/Megatron-SWIFT/Command-line-parameters.md
@@ -300,7 +300,7 @@ Megatron training parameters inherit from Megatron parameters and basic parameters (**shared with ms-swift
- Tip: the "learning rate" printed in the logs is the learning rate of the LLM.
- aligner_lr: When training multimodal models, this parameter specifies the learning rate of the aligner; defaults to None, i.e. equal to learning_rate.
- gradient_checkpointing_kwargs: Arguments passed to `torch.utils.checkpoint`. For example, set `--gradient_checkpointing_kwargs '{"use_reentrant": false}'`. Defaults to None. This parameter only takes effect for `vit_gradient_checkpointing`.
- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down the training of short texts), thereby improving GPU utilization and keeping memory usage stable. When using `--attention_backend flash`, different sequences within a packed sample remain independent and invisible to each other (except Qwen3-Next, which contains linear-attention). This parameter defaults to `False`. All training tasks in Megatron-SWIFT support this parameter. Note: **packing reduces the number of dataset samples; please adjust gradient accumulation steps and learning rate accordingly**.
- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length using the `padding_free` approach (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down the training of short texts), thereby improving GPU utilization and keeping memory usage stable. When using `--attention_backend flash`, different sequences within a packed sample remain independent and invisible to each other (except Qwen3-Next, which contains linear-attention). This parameter defaults to `False`. All training tasks in Megatron-SWIFT support this parameter. Note: **packing reduces the number of dataset samples; please adjust gradient accumulation steps and learning rate accordingly**.
- packing_length: The length used for packing. Defaults to None, in which case it is set to max_length.
- packing_num_proc: Number of processes for packing; defaults to 1. Note that different values of `packing_num_proc` produce different packed datasets. (This parameter does not take effect during streaming packing.) There is usually no need to modify this value, as packing is much faster than tokenization.
- streaming: Stream reading and processing of the dataset; defaults to False. (Shuffling of streaming datasets is not thorough and may cause severe loss fluctuations.)
2 changes: 1 addition & 1 deletion docs/source_en/Instruction/Command-line-parameters.md
@@ -476,7 +476,7 @@ Training arguments include the [base arguments](#base-arguments), [Seq2SeqTraine
- add_version: Add directory to output_dir with `'<version>-<timestamp>'` to prevent weight overwrite, default is True.
- check_model: Checks local model files for corruption or modification and issues a warning; default is True. **If in an offline environment, please set this to False.**
- 🔥create_checkpoint_symlink: Creates additional checkpoint symlinks to facilitate writing automated training scripts. The symlink paths for `best_model` and `last_model` are `f'{output_dir}/best'` and `f'{output_dir}/last'` respectively.
- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attn_impl flash_attn`, it ensures that different sequences within packed samples remain independent and invisible to each other. This parameter defaults to `False` and currently supports CPT/SFT/DPO/KTO/GKD. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length using the `padding_free` approach (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attn_impl flash_attn`, it ensures that different sequences within packed samples remain independent and invisible to each other. This parameter defaults to `False` and currently supports CPT/SFT/DPO/KTO/GKD. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
- "ms-swift>=3.12" has newly added support for packing in embedding/reranker/seq_cls tasks.
- packing_length: The length used for packing. Defaults to None, in which case it is set to max_length.
- packing_num_proc: Number of processes for packing, default is 1. Note that different values of `packing_num_proc` will result in different packed datasets. (This parameter does not take effect during streaming packing). Usually there is no need to modify this value, as packing speed is much faster than tokenization speed.
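As a rough illustration of the packing behavior documented above, here is a minimal, hypothetical sketch (not ms-swift's actual implementation): samples are appended to the current pack until `packing_length` would be exceeded, sequences are never split, and each pack's boundaries can be described by cumulative sequence lengths (`cu_seqlens`), which is what lets `flash_attn` keep the packed sequences mutually invisible.

```python
def greedy_pack(lengths, packing_length):
    """Greedily pack sample lengths into packs of at most packing_length.

    Sequences are never split; a sample longer than packing_length would
    occupy a pack by itself (truncation is out of scope in this sketch).
    """
    packs, current, total = [], [], 0
    for n in lengths:
        if current and total + n > packing_length:
            packs.append(current)
            current, total = [], 0
        current.append(n)
        total += n
    if current:
        packs.append(current)
    return packs


def cu_seqlens(pack):
    """Cumulative sequence lengths for one pack, e.g. [5, 3] -> [0, 5, 8]."""
    out = [0]
    for n in pack:
        out.append(out[-1] + n)
    return out


packs = greedy_pack([5, 3, 7, 2, 6], packing_length=8)
# packs == [[5, 3], [7], [2, 6]]: 5 samples become 3, which is why
# gradient accumulation and learning rate may need adjusting.
```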
2 changes: 1 addition & 1 deletion docs/source_en/Megatron-SWIFT/Command-line-parameters.md
@@ -319,7 +319,7 @@ Megatron training parameters are inherited from Megatron parameters and basic pa
- Note: The "learning rate" printed in the logs is the learning rate of the LLM.
- aligner_lr: Specifies the learning rate for the aligner module in multimodal models. Default is `None`, same as `learning_rate`.
- gradient_checkpointing_kwargs: Arguments passed to `torch.utils.checkpoint`. For example: set `--gradient_checkpointing_kwargs '{"use_reentrant": false}'`. Defaults to `None`. This parameter only takes effect when `vit_gradient_checkpointing` is enabled.
- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attention_backend flash`, it ensures that different sequences within packed samples remain independent and invisible to each other (except for Qwen3-Next, which contains linear-attention). This parameter defaults to `False`. All training tasks in Megatron-SWIFT support this parameter. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length using the `padding_free` approach (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attention_backend flash`, it ensures that different sequences within packed samples remain independent and invisible to each other (except for Qwen3-Next, which contains linear-attention). This parameter defaults to `False`. All training tasks in Megatron-SWIFT support this parameter. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
- packing_length: The length used for packing. Defaults to None, in which case it is set to max_length.
- packing_num_proc: Number of processes for packing, default is 1. Note that different values of `packing_num_proc` will result in different packed datasets. (This parameter does not take effect during streaming packing). Usually there is no need to modify this value, as packing speed is much faster than tokenization speed.
- streaming: Stream data loading and processing, default is False. (The shuffling of streaming datasets is not thorough, which may lead to severe loss fluctuations.)
8 changes: 1 addition & 7 deletions swift/megatron/init.py
@@ -670,10 +670,8 @@ def _write_item(self, *args, **kwargs):

def _patch_mrope():
from megatron.core.models.common.embeddings.rotary_pos_embedding import MultimodalRotaryEmbedding
from megatron.core import parallel_state
import megatron.core
from megatron.core.models.common.embeddings.rope_utils import (get_pos_emb_on_this_cp_rank,
_apply_rotary_pos_emb_bshd)
from megatron.core.models.common.embeddings.rope_utils import _apply_rotary_pos_emb_bshd
from megatron.core.models.common.embeddings import rope_utils
from megatron.training import get_args

@@ -729,10 +727,6 @@ def forward(self, position_ids, mrope_section: List[int], packed_seq: bool = Fal

# shape (seq_length, bs, 1, 2 * dim)
emb = emb[..., None, :].transpose(0, 1).contiguous()
if parallel_state.get_context_parallel_world_size() > 1 and not packed_seq:
# slice rotary_pos_emb along sequence dimension and select the parition of the current
# CP rank
emb = get_pos_emb_on_this_cp_rank(emb, 0, parallel_state.get_context_parallel_group())
return emb

MultimodalRotaryEmbedding.forward = forward
6 changes: 3 additions & 3 deletions swift/megatron/model/mm_gpt/qwen3_vl.py
@@ -122,12 +122,12 @@ def _get_inputs_embeds(inputs_embeds, inputs, visual, processor, config):
# compat cp
args = get_args()
if args.context_parallel_size > 1:
assert packed_seq_params is not None
device = visual_pos_masks.device
cp_mask = torch.full(visual_pos_masks.shape[:1], -1, dtype=torch.long, device=device)
cp_mask[visual_pos_masks[:, 0]] = torch.arange(visual_pos_masks.sum(), device=device)
cp_mask = split_cp_inputs(cp_mask, packed_seq_params.cu_seqlens_q, 0)
visual_pos_masks = split_cp_inputs(visual_pos_masks, packed_seq_params.cu_seqlens_q, 0)
cu_seqlens = getattr(packed_seq_params, 'cu_seqlens_q', None)
cp_mask = split_cp_inputs(cp_mask, cu_seqlens, 0)
visual_pos_masks = split_cp_inputs(visual_pos_masks, cu_seqlens, 0)
deepstack_visual_embeds = deepstack_visual_embeds[:, cp_mask[(cp_mask != -1)]]
# compat sp
tp_world_size = parallel_state.get_tensor_model_parallel_world_size()
2 changes: 1 addition & 1 deletion swift/megatron/model/mm_gpt_model.py
@@ -67,7 +67,7 @@ def forward(_self, input_):
kwargs.update(res)
res = inputs_embeds
if args.context_parallel_size > 1:
res = split_cp_inputs(res, packed_seq_params.cu_seqlens_q, 1)
res = split_cp_inputs(res, getattr(packed_seq_params, 'cu_seqlens_q', None), 1)
if reduce_scatter_embeddings:
res = res.transpose(0, 1).contiguous()
group_kwargs = {'group': _self.tp_group} if mcore_013 else {}
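The `cu_seqlens_q` consumed here comes from `PackedSeqParams` (built by `get_packed_seq_params` in `swift/megatron/trainers/utils.py`). A hedged sketch of the underlying idea, under the assumption that packed `position_ids` restart at 0 at the start of each sequence:

```python
def cu_seqlens_from_position_ids(position_ids):
    """Derive cumulative sequence lengths from packed position_ids.

    Assumes each packed sequence's position_ids restart at 0, e.g.
    [0, 1, 2, 3, 0, 1, 0, 1, 2] encodes sequences of length 4, 2, 3.
    """
    starts = [i for i, p in enumerate(position_ids) if p == 0]
    return starts + [len(position_ids)]


print(cu_seqlens_from_position_ids([0, 1, 2, 3, 0, 1, 0, 1, 2]))  # [0, 4, 6, 9]
```

This PR's `getattr(packed_seq_params, 'cu_seqlens_q', None)` pattern lets the same call sites handle the non-packed case, where no such boundaries exist and the whole batch is treated as one span.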
19 changes: 9 additions & 10 deletions swift/megatron/trainers/utils.py
@@ -13,7 +13,6 @@
from megatron.core.distributed import DistributedDataParallel as DDP
from megatron.core.optimizer import ChainedOptimizer
from megatron.core.packed_seq_params import PackedSeqParams
from megatron.core.utils import get_batch_on_this_cp_rank as mcore_get_batch_on_this_cp_rank
from megatron.training import get_args, get_wandb_writer
from packaging import version

@@ -86,17 +85,19 @@ def get_packed_seq_params(position_ids: torch.Tensor) -> PackedSeqParams:
qkv_format='thd')


def split_cp_inputs(inputs: torch.Tensor, cu_seqlens: torch.Tensor, dim: int):
# TODO: compat bshd
def split_cp_inputs(inputs: torch.Tensor, cu_seqlens: Optional[torch.Tensor], dim: int):
if dim < 0:
dim = (dim + inputs.ndim) % inputs.ndim
new_inputs = []
cp_size = mpu.get_context_parallel_world_size()
cp_rank = mpu.get_context_parallel_rank()
for i in range(cu_seqlens.shape[0] - 1):
slices = [slice(None)] * inputs.ndim
slices[dim] = slice(cu_seqlens[i], cu_seqlens[i + 1])
val = inputs[tuple(slices)]
for i in range(1 if cu_seqlens is None else (cu_seqlens.shape[0] - 1)):
if cu_seqlens is None:
val = inputs
else:
slices = [slice(None)] * inputs.ndim
slices[dim] = slice(cu_seqlens[i], cu_seqlens[i + 1])
val = inputs[tuple(slices)]
view_shape = (*inputs.shape[:dim], 2 * cp_size, val.shape[dim] // (2 * cp_size), *inputs.shape[dim + 1:])
val = val.view(view_shape)
index = torch.tensor([cp_rank, (2 * cp_size - cp_rank - 1)], device='cpu',
@@ -127,15 +128,13 @@ def get_batch_on_this_cp_rank(batch: Dict[str, Any]):
keys.append('input_ids')

packed_seq_params = batch.get('packed_seq_params')
if packed_seq_params is None:
return mcore_get_batch_on_this_cp_rank(batch)
for key, val in batch.items():
if key not in keys:
continue
if args.task_type == 'seq_cls' and key == 'labels':
continue
if val is not None:
batch[key] = split_cp_inputs(val, packed_seq_params.cu_seqlens_q, -1)
batch[key] = split_cp_inputs(val, getattr(packed_seq_params, 'cu_seqlens_q', None), -1)

return batch

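A pure-Python sketch of the per-sequence chunk selection that `split_cp_inputs` performs (assumptions: the sequence length is divisible by `2 * cp_size`, and lists stand in for tensors): each sequence is cut into `2 * cp_size` chunks, and CP rank `r` keeps chunks `r` and `2 * cp_size - 1 - r`, the standard load-balanced split for causal attention.

```python
def zigzag_cp_split(seq, cp_size, cp_rank):
    """Select this context-parallel rank's share of one sequence.

    The sequence is cut into 2 * cp_size equal chunks; rank r keeps
    chunk r (cheap early positions) and chunk 2*cp_size-1-r (expensive
    late positions), balancing causal-attention work across ranks.
    """
    n_chunks = 2 * cp_size
    assert len(seq) % n_chunks == 0
    chunk = len(seq) // n_chunks
    first = seq[cp_rank * chunk:(cp_rank + 1) * chunk]
    mirror = n_chunks - 1 - cp_rank
    second = seq[mirror * chunk:(mirror + 1) * chunk]
    return first + second


print(zigzag_cp_split(list(range(8)), cp_size=2, cp_rank=0))  # [0, 1, 6, 7]
print(zigzag_cp_split(list(range(8)), cp_size=2, cp_rank=1))  # [2, 3, 4, 5]
```

In the real `split_cp_inputs`, this selection is applied per packed sequence (per `cu_seqlens` interval) via a `view` and `index_select`; with `cu_seqlens=None` the whole input is treated as a single sequence, which is what enables context parallelism without packing.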