
ValueError: Expected input batch_size (103) to match target batch_size (95) when training on a custom dataset #3765

Closed
keesh0410 opened this issue May 16, 2024 · 0 comments
Labels
wontfix This will not be worked on

Comments

@keesh0410

Reminder

  • I have read the README and searched the existing issues.

Reproduction

The contents of llama3_lora_sft.yaml are as follows:
```yaml
# model
model_name_or_path: /home/caiqixu/Projects/xcy/Aextra/modelscope/LLM-Research/Meta-Llama-3-8B-Instruct

# method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

# ddp
ddp_timeout: 180000000

# dataset
dataset: cotcc
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

# output
output_dir: /home/caiqixu/Projects/xcy/Aextra/adapters/llama3_sharegpt4_sft
logging_steps: 10
save_steps: 1000
plot_loss: true
overwrite_output_dir: true

# train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 0.0001
num_train_epochs: 5.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

# eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 100
```
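For context, `finetuning_type: lora` with `lora_target: q_proj,v_proj` corresponds roughly to a PEFT LoRA setup like the sketch below; the rank, alpha, and dropout values are illustrative assumptions rather than values taken from this report (r=8 is consistent with the 3,407,872 trainable parameters shown in the training log further down):

```python
# Rough PEFT equivalent of finetuning_type: lora + lora_target: q_proj,v_proj.
# r / lora_alpha / lora_dropout are illustrative; LLaMA-Factory's actual defaults may differ.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "/home/caiqixu/Projects/xcy/Aextra/modelscope/LLM-Research/Meta-Llama-3-8B-Instruct"
)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # matches lora_target above
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # ~3.4M trainable parameters, as reported in the training log
```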

The contents of single_config.yaml are as follows:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1 # the number of nodes
num_processes: 8 # the number of GPUs in all nodes
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

main_process_port: 29505
```
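For reference, the effective batch size implied by these two configs can be checked with a small calculation (the variable names below are only for illustration):

```python
# Effective train batch size implied by llama3_lora_sft.yaml and single_config.yaml.
per_device_train_batch_size = 1  # llama3_lora_sft.yaml
gradient_accumulation_steps = 2  # llama3_lora_sft.yaml
num_processes = 8                # number of GPUs, single_config.yaml

total_train_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_processes
)
print(total_train_batch_size)  # 16, matching "Total train batch size" in the log below
```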

Expected behavior

I expect training to run normally on 8 GPUs.

System Info

  • transformers version: 4.37.2
  • Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
  • Python version: 3.10.2
  • Huggingface_hub version: 0.22.2
  • Safetensors version: 0.4.2
  • Accelerate version: 0.28.0
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.1.2+cu118 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: yes

Others

[INFO|modeling_utils.py:4350] 2024-05-16 01:36:38,516 >> All model checkpoint weights were used when initializing LlamaForCausalLM.

[INFO|modeling_utils.py:4358] 2024-05-16 01:36:38,516 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at /home/caiqixu/Projects/xcy/Aextra/modelscope/LLM-Research/Meta-Llama-3-8B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:779] 2024-05-16 01:36:38,518 >> loading configuration file /home/caiqixu/Projects/xcy/Aextra/modelscope/LLM-Research/Meta-Llama-3-8B-Instruct/generation_config.json
[INFO|configuration_utils.py:826] 2024-05-16 01:36:38,519 >> Generate config GenerationConfig {
"bos_token_id": 128000,
"eos_token_id": [
128001,
128009
]
}

/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/accelerate/accelerator.py:432: FutureWarning: Passing the following arguments to Accelerator is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches']). Please pass an accelerate.DataLoaderConfiguration instead:
dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False)
warnings.warn(
(the same FutureWarning is printed once by each of the 8 processes)
[INFO|trainer.py:571] 2024-05-16 01:36:39,277 >> Using auto half precision backend
[INFO|trainer.py:1721] 2024-05-16 01:36:41,436 >> ***** Running training *****
[INFO|trainer.py:1722] 2024-05-16 01:36:41,436 >> Num examples = 900
[INFO|trainer.py:1723] 2024-05-16 01:36:41,436 >> Num Epochs = 5
[INFO|trainer.py:1724] 2024-05-16 01:36:41,436 >> Instantaneous batch size per device = 1
[INFO|trainer.py:1727] 2024-05-16 01:36:41,436 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:1728] 2024-05-16 01:36:41,436 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1729] 2024-05-16 01:36:41,436 >> Total optimization steps = 280
[INFO|trainer.py:1730] 2024-05-16 01:36:41,438 >> Number of trainable parameters = 3,407,872

0%| | 0/280 [00:00<?, ?it/s]

All 8 ranks fail at the first training step with the same error; the console interleaves their tracebacks, so one representative traceback is shown here:

Traceback (most recent call last):
  File "/data/caiqixu/Projects/xcy/LLaMA-Factory-main/src/train.py", line 14, in <module>
    main()
  File "/data/caiqixu/Projects/xcy/LLaMA-Factory-main/src/train.py", line 5, in main
    run_exp()
  File "/data/caiqixu/Projects/xcy/LLaMA-Factory-main/src/llmtuner/train/tuner.py", line 33, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/data/caiqixu/Projects/xcy/LLaMA-Factory-main/src/llmtuner/train/sft/workflow.py", line 73, in run_sft
    train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/transformers/trainer.py", line 2772, in training_step
    loss = self.compute_loss(model, inputs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/transformers/trainer.py", line 2795, in compute_loss
    outputs = model(**inputs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1852, in forward
    loss = self.module(*inputs, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/peft/peft_model.py", line 1129, in forward
    return self.base_model(
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 161, in forward
    return self.model.forward(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1215, in forward
    loss = loss_fct(shift_logits, shift_labels)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 1179, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/home/caiqixu/Miniconda3/envs/xt/lib/python3.10/site-packages/torch/nn/functional.py", line 3053, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (103) to match target batch_size (95).

The other ranks raise the same ValueError with different sizes:

ValueError: Expected input batch_size (255) to match target batch_size (55).
ValueError: Expected input batch_size (495) to match target batch_size (55).
ValueError: Expected input batch_size (143) to match target batch_size (15).
ValueError: Expected input batch_size (79) to match target batch_size (47).
ValueError: Expected input batch_size (487) to match target batch_size (47).
ValueError: Expected input batch_size (511) to match target batch_size (23).
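The error is raised by the cross-entropy call at the end of LlamaForCausalLM.forward: the logits are flattened to shape (batch_size * shifted_sequence_length, vocab_size) and the labels to (batch_size * shifted_sequence_length,), and F.cross_entropy requires those two leading dimensions to match, so the two numbers in each message are the flattened lengths of the logits and labels on that rank. A minimal sketch with synthetic tensors (not the actual model outputs) that reproduces the same error type:

```python
# Minimal sketch of the shape contract behind the error, using synthetic tensors.
import torch
import torch.nn.functional as F

vocab_size = 128256  # Llama-3 vocabulary size

shift_logits = torch.randn(103, vocab_size)          # flattened logits: (batch * seq_len, vocab_size)
shift_labels = torch.randint(0, vocab_size, (95,))   # flattened labels: (batch * seq_len,)

# Raises: ValueError: Expected input batch_size (103) to match target batch_size (95).
loss = F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)
```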

@hiyouga added the wontfix label on May 29, 2024
@hiyouga closed this as not planned on May 29, 2024