Enhance sequence_expand operator #9100
Conversation
AddComment(R"DOC(
Sequence Expand Operator.

This operator expands input(X) according to LOD of input(Y).
This operator expands `X` according to specified level lod of `Y`. Current
implementation constaints that lod level of `X` should be at most 1. Attribute
constaints -> requires
I find the semantics of this op very difficult to understand. Is there an op in another framework that has the same effect?
I don't think so, since LoDTensor is a concept peculiar to PaddlePaddle and this operator depends on the LoD of its input variable.
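Since there is no direct analogue in other frameworks, a minimal Python sketch of the semantics may help: each sequence of `X` is repeated as many times as the length of the corresponding sequence at the referenced lod level of `Y` (an offset vector like `[0, 2, 3, 6]` encodes sequence lengths `[2, 1, 3]`). The helper name `expand_by_lod` is hypothetical, not part of the Paddle API, and the numbers are chosen to reproduce the `Out.data`/`Out.lod` pair quoted later in this thread.

```python
def expand_by_lod(x_seqs, y_lod_level):
    # Sequence lengths at Y's ref_level act as repeat counts for X's sequences.
    repeats = [y_lod_level[i + 1] - y_lod_level[i]
               for i in range(len(y_lod_level) - 1)]
    assert len(repeats) == len(x_seqs), "X and Y must have matching sequence counts"
    out_data, out_lod = [], [0]
    for seq, r in zip(x_seqs, repeats):
        out_data.extend(seq * r)                    # repeat the whole sequence r times
        out_lod.append(out_lod[-1] + r * len(seq))  # one output segment per input sequence
    return out_data, out_lod

# Reproduces the example below: Out.data = [a, a, b, c, c, c], Out.lod = [[0, 2, 3, 6]]
data, lod = expand_by_lod([['a'], ['b'], ['c']], [0, 2, 3, 6])
```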
and input(Y)
    Y.lod = [[0, 2, 4],
             [0, 3, 6, 6, 8]]
ref_level: 0
What would happen if this is -1?
When ref_level is set to -1, the LoD sizes would be inconsistent between [0, 1, 4] and [0, 3, 6, 6, 8].
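The inconsistency described above amounts to a simple shape check: each sequence of X needs exactly one repeat count from the referenced level of Y's lod, so the two offset vectors must describe the same number of sequences. A hedged sketch (the function name is illustrative, not Paddle code):

```python
def lod_levels_compatible(x_lod_level, y_lod_level):
    # Offset vectors of equal length describe the same number of sequences.
    return len(x_lod_level) == len(y_lod_level)

# The mismatch from the comment above: 2 sequences in X vs 4 in Y.
print(lod_levels_compatible([0, 1, 4], [0, 3, 6, 6, 8]))  # False
```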
             [0, 3, 6, 6, 8]]
ref_level: 0
then we get 1-level LoDTensor
    Out.lod = [[0, 2, 5, 8]]
Is Out.lod[0][2:8] obtained by repeating X.lod[0][1:4] 2 times?
Then shouldn't X.lod[0][0:1] also be repeated 2 times, giving Out.lod = [[0, 1, 2, 5, 8]]?
Thanks, done.
    Out.lod = [[0, 2, 3, 6]]
    Out.data = [a, a, b, c, c, c]
ref_level: -1
then we a common Tensor
we get
(LodTensor, default LoDTensor) Output LoDTensor which is generated from Input(X) by referring lod of Input(Y).
Shouldn't the outputs all be LoDTensors? And shouldn't Out.lod be given as well?
we get
done.
Out is a Tensor.
    Out.lod = [[0, 2, 3, 6]]
    Out.data = [[a, b], [a, b], [c, d], [e, f], [e, f], [e, f]]
ref_level: 0
then we get a common LoDTensor
Should the LoD of the output tensor be given here?
Out is a Tensor.
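For the case where X is a plain Tensor with no lod of its own, the expansion works row-wise: row i of X is repeated as many times as the length of the i-th sequence at Y's referenced lod level. A minimal sketch reproducing the Out.data quoted above (the helper name is hypothetical, not Paddle API):

```python
def expand_rows(x_rows, y_lod_level):
    # Sequence lengths at Y's ref_level give the per-row repeat counts.
    repeats = [y_lod_level[i + 1] - y_lod_level[i]
               for i in range(len(y_lod_level) - 1)]
    out = []
    for row, r in zip(x_rows, repeats):
        out.extend([list(row)] * r)  # repeat each row r times
    return out

# Repeat counts [2, 1, 3] reproduce the example above.
rows = expand_rows([['a', 'b'], ['c', 'd'], ['e', 'f']], [0, 2, 3, 6])
```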
int ref_level = context.Attr<int>("ref_level");
auto& x_lod = x->lod();
auto& y_lod = y->lod();
PADDLE_ENFORCE_GT(y_lod.size(), 0,
y_lod.size() has already been checked in the runtime InferShape.
LGTM
Resolves #9049