[XPU] llama add xpu support #8282
Conversation
Thanks for your contribution!
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##           develop    #8282      +/-   ##
===========================================
+ Coverage    55.25%   55.35%   +0.10%
===========================================
  Files          613      614       +1
  Lines        95626    95924     +298
===========================================
+ Hits         52837    53103     +266
- Misses       42789    42821      +32

☔ View full report in Codecov by Sentry.
LGTM
x = paddle.to_tensor(0.0, dtype=dtype)
y = paddle.to_tensor(paddle.finfo(dtype).min, dtype=dtype)
expanded_attn_mask = expanded_attn_mask.astype(dtype)
expanded_attn_mask = paddle.where(expanded_attn_mask, x, y).astype(dtype)
When the x and y passed in are integer scalar types, paddle.where treats them as int64 tensors of shape [1] and performs a broadcast_add; see search.py for details.
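For illustration, a minimal standalone sketch of the pitfall and the fix (the mask tensor and dtype below are assumptions for the example, not taken from the PR):

import paddle

mask = paddle.to_tensor([True, False, True])
dtype = paddle.bfloat16  # xpu programs may run float16 or bfloat16

# Risky: Python integer scalars get promoted to int64 tensors of
# shape [1] inside paddle.where, triggering an integer broadcast_add.
# out = paddle.where(mask, 0, -10000)

# Safe: build x and y explicitly in the target floating dtype first.
x = paddle.to_tensor(0.0, dtype=dtype)
y = paddle.to_tensor(paddle.finfo(dtype).min, dtype=dtype)
out = paddle.where(mask, x, y).astype(dtype)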
LGTM
LGTM
    LinearConfig.enable_accumulate_steps_opt()
    LinearConfig.set_accumulate_steps(training_args.gradient_accumulation_steps)
except ImportError:
    pass
What does this do?
It is an XPU optimization for the accumulate_steps > 1 scenario, used together with the Linear layers from paddle_xpu below.
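A minimal sketch of how this optional dependency is typically guarded (the import path and the stand-in variable are assumptions; only the two LinearConfig calls actually appear in the diff):

gradient_accumulation_steps = 4  # stand-in for training_args.gradient_accumulation_steps

try:
    # Assumed import path for illustration; not shown in the diff.
    from paddle_xpu.layers.nn.linear import LinearConfig

    # Tell the XPU Linear layers how many accumulation steps to expect
    # so they can optimize work across micro-batches.
    LinearConfig.enable_accumulate_steps_opt()
    LinearConfig.set_accumulate_steps(gradient_accumulation_steps)
except ImportError:
    pass  # paddle_xpu not installed: fall back to stock Paddle Linear layers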
x = paddle.to_tensor(0.0, dtype=dtype)
y = paddle.to_tensor(paddle.finfo(dtype).min, dtype=dtype)
expanded_attn_mask = expanded_attn_mask.astype(dtype)
expanded_attn_mask = paddle.where(expanded_attn_mask, x, y).astype(dtype)
This looks pretty much the same as the npu logic above; can it be reused?
In theory it can be reused, but the npu code hard-codes the dtype to float16, while programs running on xpu may use either float16 or bfloat16. Do we need to modify the npu module?
@SylarTiaNII could you take a look?
Following @wuhuachaocoding's suggestion, this was split into two separate if/elif branches.
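Roughly, a sketch of the agreed-upon resolution (the branch structure is inferred from this discussion; get_env_device is assumed to be the device probe used elsewhere in PaddleNLP, and the npu branch keeps its hard-coded float16):

if get_env_device() == "npu":
    # npu path keeps the hard-coded float16 dtype.
    x = paddle.to_tensor(0.0, dtype="float16")
    y = paddle.to_tensor(paddle.finfo(paddle.float16).min, dtype="float16")
    expanded_attn_mask = expanded_attn_mask.astype("float16")
    expanded_attn_mask = paddle.where(expanded_attn_mask, x, y).astype(dtype)
elif get_env_device() == "xpu":
    # xpu path honors the incoming dtype (float16 or bfloat16).
    x = paddle.to_tensor(0.0, dtype=dtype)
    y = paddle.to_tensor(paddle.finfo(dtype).min, dtype=dtype)
    expanded_attn_mask = expanded_attn_mask.astype(dtype)
    expanded_attn_mask = paddle.where(expanded_attn_mask, x, y).astype(dtype)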
logits = self.xpu_parallel_matmul(
    hidden_states, self.weight, tensor_parallel_output=tensor_parallel_output, training=self.training
)
Is the training argument really necessary? If the arguments could be kept the same, wouldn't it be enough to just replace the parallel_matmul implementation under xpu?
There are two reasons:
- One XPU optimization requires parallel_matmul to be an object so it can store some state.
- XPU needs the training information to apply its optimizations.
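A hedged sketch of why a callable object helps here (the class name, caching logic, and method body are all illustrative, not the actual paddle_xpu implementation):

import paddle

class XPUParallelMatmul:
    """Callable wrapper: unlike a free function, it can keep state
    between calls and branch on the training flag."""

    def __init__(self):
        self._cached_weight = None  # e.g. a pre-transformed weight layout

    def __call__(self, x, weight, tensor_parallel_output=True, training=True):
        # tensor_parallel_output handling is omitted in this sketch.
        if training:
            # Training path: plain matmul so autograd sees the weight.
            return paddle.matmul(x, weight)
        # Inference path: reuse state prepared on the first call.
        if self._cached_weight is None:
            self._cached_weight = weight  # placeholder for a real transform
        return paddle.matmul(x, self._cached_weight)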
LGTM
PR types
New features
PR changes
Models
Description
paddle_xpu (aka. fast_paddle)