
[Distributed] enable tensor_parallel_output for finetuning #8370

Merged

Conversation

SylarTiaNII
Contributor

PR types

Bug fixes (performance optimization)

PR changes

Others

Description

Enable tensor_parallel_output by default for better performance.


paddle-bot bot commented May 7, 2024

Thanks for your contribution!


codecov bot commented May 7, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 55.36%. Comparing base (9146c1e) to head (88b1da4).
Report is 2 commits behind head on develop.

❗ Current head 88b1da4 differs from pull request most recent head 176891c. Consider uploading reports for the commit 176891c to get more accurate results

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8370      +/-   ##
===========================================
- Coverage    55.43%   55.36%   -0.07%     
===========================================
  Files          616      614       -2     
  Lines        96229    96016     -213     
===========================================
- Hits         53346    53164     -182     
+ Misses       42883    42852      -31     


@@ -152,7 +152,7 @@ def main():
# NOTE(gongenlei): new add autotuner_benchmark
model_config = AutoConfig.from_pretrained(
model_args.model_name_or_path,
tensor_parallel_output=False,
tensor_parallel_output=True,
Collaborator

@wawltor wawltor May 7, 2024


Is tensor_parallel_output=True mainly for speed?

Setting tensor_parallel_output=True causes the model's ACC metric to be computed incorrectly, because no all-gather is performed on the results.
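The breakage described above can be reproduced in a single process: with tensor_parallel_output=True each rank holds only a vocab shard of the logits, so an argmax taken on one shard can disagree with the true argmax until the shards are gathered. A minimal sketch (NumPy stands in for Paddle tensors; the shard split simulates tensor parallelism):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, vocab, tp_degree = 4, 8, 2

# Full logits, as produced with tensor_parallel_output=False.
full_logits = rng.standard_normal((batch, vocab))

# With tensor_parallel_output=True, each rank only holds a vocab shard.
shards = np.split(full_logits, tp_degree, axis=-1)

# Correct predictions need the full vocab dimension.
full_preds = full_logits.argmax(axis=-1)

# A single rank's argmax over its shard is taken over only vocab/tp_degree
# classes, so ACC computed from it is wrong in general.
rank0_preds = shards[0].argmax(axis=-1)

# All-gathering (here: concatenating) the shards restores correctness.
gathered = np.concatenate(shards, axis=-1)
assert (gathered.argmax(axis=-1) == full_preds).all()
```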

Contributor Author


If it is not set to True, both performance and GPU memory usage suffer, and in LLM scenarios the performance impact is significant. Could the ACC metric computation be adapted to the mp (model-parallel) scenario instead?
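The memory argument is easy to quantify: the final logits tensor shrinks by a factor of the tensor-parallel degree on each rank. A back-of-the-envelope sketch with illustrative LLaMA-like numbers (batch size, sequence length, and vocab size here are assumptions, not values from this PR):

```python
# Rough per-rank memory for the final logits tensor (fp16 = 2 bytes/element).
batch, seq_len, vocab = 8, 4096, 32000
bytes_per_elem = 2
tp_degree = 4

full = batch * seq_len * vocab * bytes_per_elem  # tensor_parallel_output=False
sharded = full // tp_degree                      # tensor_parallel_output=True

print(f"full logits per rank:    {full / 2**30:.2f} GiB")
print(f"sharded logits per rank: {sharded / 2**30:.2f} GiB")
```

On top of the memory saving, keeping the output sharded also avoids an all-gather in the training hot path, which is where the performance gain comes from.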

Collaborator


A switch could be added here. I suggest keeping the default as False; generation has not been adapted yet and will break.

@SylarTiaNII SylarTiaNII force-pushed the enable_tensor_parallel_output branch 5 times, most recently from 8fd9ff9 to d162d0c Compare May 10, 2024 11:32
@@ -152,7 +152,7 @@ def main():
# NOTE(gongenlei): new add autotuner_benchmark
model_config = AutoConfig.from_pretrained(
model_args.model_name_or_path,
tensor_parallel_output=False,
tensor_parallel_output=True,
Collaborator


A switch could be added here. I suggest keeping the default as False; generation has not been adapted yet and will break.

@@ -2780,6 +2780,12 @@ def evaluation_loop(

# Metrics!
if self.compute_metrics is not None and all_preds is not None and all_labels is not None:
if self.args.tensor_parallel_degree > 1 and all_preds.shape != all_labels.shape:
Collaborator


Suggested change
if self.args.tensor_parallel_degree > 1 and all_preds.shape != all_labels.shape:
if self.args.tensor_parallel_degree > 1 and isinstance(all_preds, paddle.Tensor) and all_preds.shape != all_labels.shape:

And add a comment here: all_gather logits for tp.
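The intent of the suggested guard can be checked in isolation: only trigger the gather path when running under tensor parallelism, when the predictions are an actual tensor (not, e.g., a tuple of tensors), and when the vocab shard made the prediction and label shapes diverge. A standalone sketch, with `np.ndarray` standing in for `paddle.Tensor`:

```python
import numpy as np

def needs_tp_gather(preds, labels, tensor_parallel_degree):
    """Mirror of the suggested condition: all_gather logits for tp only
    when preds is a real array and tp sharding made the shapes diverge."""
    return (
        tensor_parallel_degree > 1
        and isinstance(preds, np.ndarray)  # stand-in for paddle.Tensor
        and preds.shape != labels.shape
    )

preds = np.zeros((4, 16))  # sharded logits: vocab/tp columns per rank
labels = np.zeros((4,))    # label ids
assert needs_tp_gather(preds, labels, tensor_parallel_degree=2)
assert not needs_tp_gather(preds, labels, tensor_parallel_degree=1)
assert not needs_tp_gather((preds,), labels, tensor_parallel_degree=2)
```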

@@ -2780,6 +2780,12 @@ def evaluation_loop(

# Metrics!
if self.compute_metrics is not None and all_preds is not None and all_labels is not None:
if self.args.tensor_parallel_degree > 1 and all_preds.shape != all_labels.shape:
Contributor


The all-gather on the logits should be added in CausalLMTrainer (https://github.com/PaddlePaddle/PaddleNLP/blob/develop/llm/utils.py#L208), not here.
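The placement this reviewer suggests, gathering once inside CausalLMTrainer's prediction step rather than shape-special-casing the generic evaluation loop, might look roughly like the sketch below. The class and method names are simplifications, and the collective (in the real code, `paddle.distributed.all_gather`) is simulated by a plain callable so the sketch stays single-process:

```python
import numpy as np

class Trainer:
    def prediction_step(self, batch):
        # The real code runs the model here; this stub just returns the
        # current rank's vocab shard of the logits.
        return batch["shard"]

class CausalLMTrainer(Trainer):
    # Per the review: gather tp-sharded logits here, once, instead of
    # special-casing shapes inside the generic evaluation_loop.
    def __init__(self, tp_degree, all_gather):
        self.tp_degree = tp_degree
        self.all_gather = all_gather  # stand-in for paddle.distributed.all_gather

    def prediction_step(self, batch):
        logits = super().prediction_step(batch)
        if self.tp_degree > 1:
            # all_gather logits for tp: rebuild the full vocab axis
            logits = np.concatenate(self.all_gather(logits), axis=-1)
        return logits

full = np.arange(12.0).reshape(2, 6)
shards = np.split(full, 2, axis=-1)
trainer = CausalLMTrainer(tp_degree=2, all_gather=lambda _: shards)
restored = trainer.prediction_step({"shard": shards[0]})
```

With this design the metric code downstream always sees full-vocab logits, regardless of the tensor-parallel degree.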

@SylarTiaNII SylarTiaNII force-pushed the enable_tensor_parallel_output branch 2 times, most recently from 9a8b420 to eaf6453 Compare May 10, 2024 12:36
@SylarTiaNII SylarTiaNII force-pushed the enable_tensor_parallel_output branch from eaf6453 to 176891c Compare May 10, 2024 12:38
Collaborator

@wawltor wawltor left a comment


LGTM

@wawltor wawltor merged commit c6e5459 into PaddlePaddle:develop May 10, 2024
6 of 9 checks passed
wawltor pushed a commit that referenced this pull request May 24, 2024
* [XPU] llama add xpu support (#8282)

* [XPU] llama add xpu support

* fix

* use try import

* fix

* refine

* refine

* refine

* refine

* update (#8399)

* [LLM] Support fuse attention q, k, v weights  (#8202)

1. add use-interface & fuse action

1.1. modify 1., code order

2. switch to name_mapping

3. solve tp branch

3.2 follow hui, handle qkv separately

3.3 handle pdparams

3.4 from torch

3.5 abandon low_cpu_mem_usage

3.6 solve shard branch

* 3.6.1 solve shard branch after rebase develop

* code clean

* remove debug comment

* Redefine fuse and split functions

* Redefine fuse and split functions

* comment and fix

* update method

* update QKV fuse and split

* support fuse weights in multi-files

* add precision compare

* simplify function call

* support use_fast_ffn

* clean modeling and configuration

* add test for gpt and opt

* fix tp_actions get

* add fast_ffn test

* add Qwen2Moe

* Revert "add Qwen2Moe"

This reverts commit 113b883.

* add test for split

* update doc

* update filter_dict_keys

---------

Co-authored-by: Zii <ziangqin.baidu@gmail.com>

* [LLM] Fix fuse or split with same key (#8378)

* fix fuse or split with same key

* fix

* fix eps

* update format

* [LLM] add decay steps option for finetuning (#8251)

* [LLM] add memory stats to logger of trainer (#8269)

* [Distributed] fix lora (#8325)

* [LLM] fix lora target modules on llama (#8372)

* [Distributed] metric calculation supports tp logits (#8370)

* Update model_utils.py

* Update model_utils.py

* Update model_utils.py

---------

Co-authored-by: Jianbang Yang <yangjianbang112@gmail.com>
Co-authored-by: DrownFish19 <DrownFish19@gmail.com>
Co-authored-by: Zii <ziangqin.baidu@gmail.com>
Co-authored-by: Tian <121000916+SylarTiaNII@users.noreply.github.com>