
[LLM] Support fuse attention q, k, v weights #8202

Merged

merged 29 commits into PaddlePaddle:develop from dev-fuse-qkv on Apr 25, 2024

Conversation

@DrownFish19 (Collaborator) commented Mar 28, 2024

PR types

New features

PR changes

APIs and Models

Description

  1. Support fusing attention q, k, v weights (a minimal sketch of the fusion follows the table below):
  • fuse_attention_qkv
  • fuse_attention_ffn (normal weight fusion)
  2. Performance test (llama-7b):

  Parallelism   Original load time   Fused load time
  DP            20                   23
  PP4           5                    7
  TP2           10                   23
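
For illustration, here is a minimal sketch of what the qkv fusion means (names and shapes are assumptions for a llama-7b-style model, not this PR's actual code): the separate q, k, v projection weights are concatenated along the output dimension into a single matrix, so one GEMM can replace three.

```python
# Hypothetical sketch of fuse/split for attention q, k, v weights
# (illustration only; names and shapes are assumptions, not the PR's code).
import numpy as np

hidden = 4096  # llama-7b hidden size

def fuse_qkv(q, k, v):
    # Each weight is [hidden, hidden]; concatenate along the output
    # axis to get a single [hidden, 3 * hidden] matrix.
    return np.concatenate([q, k, v], axis=-1)

def split_qkv(qkv):
    # Inverse of fuse_qkv: recover the three separate projections.
    return np.split(qkv, 3, axis=-1)

q = np.random.rand(hidden, hidden).astype("float32")
k = np.random.rand(hidden, hidden).astype("float32")
v = np.random.rand(hidden, hidden).astype("float32")

qkv = fuse_qkv(q, k, v)            # [4096, 12288]
q2, k2, v2 = split_qkv(qkv)
assert np.array_equal(q, q2) and np.array_equal(k, k2) and np.array_equal(v, v2)

# fuse_attention_ffn is analogous: the two FFN input projections are
# concatenated into one matrix along the output dimension.
```

Under tensor parallelism the concatenation additionally has to respect each rank's shard of the heads, which is presumably why the TP path needed separate handling (see the "solve tp branch" commit) and why TP2 load time regresses most in the table above.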

ziangqin-baidu and others added 4 commits March 18, 2024 08:06
1.1. modify 1., code order

2. switch to name_mapping

3. solve tp branch

3.2 follow hui, handle qkv separately

3.3 handle pdparams

3.4 from torch

3.5 abandon low_cpu_mem_usage

3.6 solve shard branch

paddle-bot bot commented Mar 28, 2024

Thanks for your contribution!


codecov bot commented Mar 28, 2024

Codecov Report

Attention: Patch coverage is 94.34783%, with 13 lines in your changes missing coverage. Please review.

Project coverage is 55.45%. Comparing base (beb433a) to head (f6f3b0e).
Report is 2 commits behind head on develop.

Files                                        Patch %   Lines
paddlenlp/transformers/conversion_utils.py   94.82%    6 Missing ⚠️
paddlenlp/transformers/opt/modeling.py       81.81%    4 Missing ⚠️
paddlenlp/transformers/model_utils.py        91.89%    3 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8202      +/-   ##
===========================================
+ Coverage    55.33%   55.45%   +0.12%     
===========================================
  Files          614      614              
  Lines        95341    95570     +229     
===========================================
+ Hits         52753    52999     +246     
+ Misses       42588    42571      -17     


@DrownFish19 force-pushed the dev-fuse-qkv branch 5 times, most recently from 0fd4d1f to 753c980, April 1, 2024 11:25
@DrownFish19 marked this pull request as ready for review April 2, 2024 12:04
paddlenlp/transformers/conversion_utils.py (2 outdated review threads, resolved)
paddlenlp/transformers/gpt/modeling.py (outdated review thread, resolved)
@ZHUI (Contributor) commented Apr 17, 2024

Ran a quick test of llama-7b load speed.
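
A rough way to reproduce that measurement (a sketch only: the built-in model name and the fuse_attention_qkv knob are assumptions based on this PR's description, not a confirmed API):

```python
import time

from paddlenlp.transformers import AutoModelForCausalLM

start = time.time()
# fuse_attention_qkv is assumed here to be the config flag whose
# weight-loading path this PR adds; it is forwarded to the model config.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/llama-7b",  # hypothetical built-in model name
    fuse_attention_qkv=True,
)
print(f"load time: {time.time() - start:.1f}s")
```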

@ZHUI (Contributor) commented Apr 17, 2024

LoRA adaptation.
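
For context on why fused weights need LoRA adaptation (a generic sketch of the shape issue, not PaddleNLP's actual solution): a low-rank update applied to the fused matrix spans all three projections at once, so target-module matching and the LoRA B matrix shape both change relative to per-projection adapters.

```python
# Generic illustration of LoRA on a fused qkv weight; all names and
# shapes here are assumptions, not PaddleNLP's implementation.
import numpy as np

hidden, rank = 4096, 8
w_qkv = np.zeros((hidden, 3 * hidden), dtype="float32")  # fused qkv weight

# A single LoRA pair on the fused layer: the delta a @ b necessarily
# covers the whole [hidden, 3 * hidden] output, i.e. q, k and v together.
a = np.random.randn(hidden, rank).astype("float32") * 0.01
b = np.random.randn(rank, 3 * hidden).astype("float32") * 0.01
w_adapted = w_qkv + a @ b
```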

@ZHUI (Contributor) left a comment

LGTM

@ZHUI merged commit f29a7b9 into PaddlePaddle:develop Apr 25, 2024
8 of 10 checks passed
@DrownFish19 deleted the dev-fuse-qkv branch April 29, 2024 08:45