
[LLM] Support prefix tuning and lora for qwen2 #8601

Merged: 15 commits into PaddlePaddle:develop on Jun 20, 2024

Conversation

DrownFish19
Collaborator

@DrownFish19 DrownFish19 commented Jun 13, 2024

PR types

Function optimization

PR changes

Models

Description

  1. Support prefix tuning and LoRA;
  2. fix modeling and tokenizer when tie_word_embedding=True (Qwen1.5-0.5B, Qwen2-0.5B, Qwen2-1.5B);
  3. fix pipeline and sequence parallel;
  4. add unit tests;
  5. add LLM unit tests;
  6. upload pretrain, SFT, and LoRA configs.
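For context on item 1, LoRA keeps the pretrained weight frozen and learns a low-rank update. The sketch below shows only the well-known forward math (y = x · (W + (α/r)·B·A)ᵀ) in plain NumPy; it is not PaddleNLP's actual API, and all names here are illustrative.

```python
import numpy as np

# Minimal LoRA forward-pass sketch (illustrative, not PaddleNLP's API).
# The frozen weight W gains a trainable low-rank update B @ A, scaled by alpha / r.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 6, 2, 4

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-initialized

def lora_forward(x):
    delta = (alpha / r) * (B @ A)        # low-rank weight update
    return x @ (W + delta).T

x = rng.standard_normal((3, d_in))
# With B zero-initialized, the adapter is a no-op before training starts.
assert np.allclose(lora_forward(x), x @ W.T)
```

Zero-initializing B is the standard LoRA trick: fine-tuning starts exactly at the pretrained model and only drifts as B is updated.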


paddle-bot bot commented Jun 13, 2024

Thanks for your contribution!


codecov bot commented Jun 14, 2024

Codecov Report

Attention: Patch coverage is 52.32558% with 41 lines in your changes missing coverage. Please review.

Project coverage is 54.73%. Comparing base (970b868) to head (cb19d31).

File                                          Patch %   Missing lines
paddlenlp/transformers/qwen2/modeling_pp.py   34.37%    21 ⚠️
paddlenlp/transformers/qwen2/modeling.py      68.42%    12 ⚠️
paddlenlp/transformers/qwen2/tokenizer.py     41.66%     7 ⚠️
paddlenlp/transformers/model_utils.py         50.00%     1 ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8601      +/-   ##
===========================================
+ Coverage    54.18%   54.73%   +0.55%     
===========================================
  Files          625      625              
  Lines        98942    98985      +43     
===========================================
+ Hits         53612    54180     +568     
+ Misses       45330    44805     -525     

☔ View full report in Codecov by Sentry.

@DrownFish19 DrownFish19 changed the title [LLM] Add unittest for Qwen2 [LLM] suport prefix tuning and lora for Qwen2 Jun 17, 2024
@DrownFish19 DrownFish19 changed the title [LLM] suport prefix tuning and lora for Qwen2 [LLM] Support prefix tuning and lora for qwen2 Jun 17, 2024
# However, when the state dict contains only one copy of the shared parameters,
# the loaded shared parameters may differ from the original shared parameters.

if isinstance(model, PipelineLayer):
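The comment above concerns weight tying (the tie_word_embedding=True case this PR fixes for the small Qwen2 variants): the input embedding and the output head share one matrix, and a checkpoint may store only one copy. The NumPy sketch below is illustrative only (not PaddleNLP's code) and shows why a loader that copies instead of re-tying breaks the sharing.

```python
import numpy as np

# Illustrative sketch of weight tying, not PaddleNLP's implementation.
vocab, hidden = 10, 4
rng = np.random.default_rng(0)
embedding = rng.standard_normal((vocab, hidden))

# Tied output head: logits reuse the embedding matrix, no separate lm_head weight.
def lm_head(h):
    return h @ embedding.T

h = rng.standard_normal((2, hidden))
logits = lm_head(h)
assert logits.shape == (2, vocab)

# If a loader materializes a separate copy instead of re-tying the parameter,
# later updates to the embedding no longer reach the head — the divergence
# the comment above warns about.
untied_head = embedding.copy()
embedding[0] += 1.0                      # e.g. a training update
assert not np.allclose(untied_head, embedding)
```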
Nice 👍🏻

Collaborator

@ZHUI ZHUI left a comment


LGTM

@ZHUI ZHUI merged commit da8b9ac into PaddlePaddle:develop Jun 20, 2024
8 of 11 checks passed
@DrownFish19 DrownFish19 deleted the dev_add_tests_qwen2 branch June 20, 2024 03:53