
Support tensor attribute runtime #54692

Merged
merged 55 commits into from
Jun 19, 2023

Conversation

phlrain
Collaborator

@phlrain phlrain commented Jun 15, 2023

PR types

Others

PR changes

Others

Description

Support mutable attributes at runtime.

The new static-graph execution flow aims to cache the contexts (the infer meta context and the kernel context). Because a Scalar cannot read the real value of a DenseTensor at context-construction time, there must be a mechanism to lazily initialize the Scalar. To fit the current flow and stay compatible with the legacy program-desc execution path, the Scalar is lazily initialized during kernel argument dispatch.

Other

Pcard-67164

phlrain added 30 commits June 7, 2023 10:28
@paddle-bot
Copy link

paddle-bot bot commented Jun 15, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

@@ -148,12 +148,15 @@ void BuildInferMetaContext(
VLOG(6) << "ctx->EmplaceBack mutable attr: " << t << "\t"
<< in_var_name;
if (mutable_attr_type_map[t] == "paddle::dialect::IntArrayAttribute") {
ctx->EmplaceBackAttr(phi::IntArray(
*(scope->Var(in_var_name)->GetMutable<phi::DenseTensor>())));
phi::Attribute r1 = phi::TensorRefScalar(
Contributor

Does IntArray also go through the Scalar branch?

Collaborator Author

This only puts the tensor's pointer into the attribute; at final argument dispatch, the tensor* is used to construct the corresponding Scalar or IntArray.

Contributor

In that case the name TensorRefScalar feels like a poor match for IntArray: Array denotes an array, while Scalar denotes a scalar.

static Attribute cmp_t = phi::TensorRefScalar(nullptr); \
attr_type attr1; \
if (cmp_t.index() == t.index()) { \
attr1 = attr_type((*paddle::get<phi::TensorRefScalar>(t).Get())); \
Contributor

Will this call be executed at compile time?

Collaborator Author

My understanding is no, not at compile time; the branch is only taken at runtime.

}

private:
const DenseTensor* tensor_base_;
Contributor

Suggested change
const DenseTensor* tensor_base_;
const DenseTensor* tensor_base_{nullptr};

Collaborator Author

done

class TensorRefScalar {
public:
// Constructor support implicit
TensorRefScalar() : tensor_base_(nullptr) {}
Contributor

@Aurelius84 Aurelius84 Jun 16, 2023

Suggested change
TensorRefScalar() : tensor_base_(nullptr) {}
TensorRefScalar() = default;

Also, this TensorRefScalar actually handles both the Scalar and IntArray types, so the name does not seem very good.

@@ -135,6 +135,10 @@ const AttrType& InferMetaContext::AttrAt(size_t idx) const {
}
}

const Attribute& InferMetaContext::AttrAt(size_t idx) const {
Contributor

Do we also not need to define a separate function here that returns Attribute? Could we just replace the if-else with try-catch in the PD_SPECIALIZE_KernelCallHelper_FOR_TENSOR_SCALAR_INTARRAY macro below?

Collaborator Author

Right now there are only two branches, so try-catch would work, but if new branches are added, try-catch does not scale well either.

The current approach is admittedly a trick. The long-term plan is to move this kind of mutable attribute into the inputs, which would eliminate the if-else problem entirely. But for compatibility with the legacy behavior (after the new IR lands, such an attribute will only appear as a tensor, whereas today it may appear as either a tensor or an int), this is the only workable implementation for now.


Comment on lines 253 to 254
phi::Attribute r1 = phi::TensorRefScalar(
&(scope->Var(in_var_name)->Get<phi::DenseTensor>()));
Contributor

When caching the KernelContext, is the tensor obtained from scope->Var(in_var_name) here an empty Tensor?

Collaborator Author

Yes, it is an empty tensor, but the tensor's pointer will not change.

true);
EXPECT_EQ(block->size(), 9u);

auto kernel_program = paddle::dialect::PdOpLowerToKernelPass(&program);
Contributor

Does this unit test not cover the Scalar case?

@phlrain phlrain merged commit 93f7a02 into PaddlePaddle:develop Jun 19, 2023
25 checks passed