add cumsum prim backward #50565
Conversation
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
Bug introduced by #49518:
Maybe add a CINN test.
See https://github.com/PaddlePaddle/CINN/blob/develop/cinn/frontend/op_mappers/paddle/cumsum.cc. Compared with the paddle cumsum operator, CINN's cumsum operator lacks some parameters, so the two cannot be aligned when hooking into CINN. Consider adding an operator mapping or enhancing the CINN cumsum operator in the future.
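For context, a minimal NumPy sketch of what the extra attributes on the paddle cumsum op (flatten, exclusive, reverse) mean; this illustrates the semantics only and is not CINN's or Paddle's actual implementation:

import numpy as np

def cumsum_full(x, axis=0, flatten=False, exclusive=False, reverse=False):
    # flatten: accumulate over the flattened array instead of one axis.
    if flatten:
        x, axis = x.reshape(-1), 0
    # reverse: accumulate from the far end of the axis.
    if reverse:
        x = np.flip(x, axis=axis)
    out = np.cumsum(x, axis=axis)
    # exclusive: out[i] sums the elements strictly before position i,
    # i.e. the inclusive result minus the current element.
    if exclusive:
        out = out - x
    if reverse:
        out = np.flip(out, axis=axis)
    return out

A mapper that only forwards `axis` would silently mishandle any of these flags, which is why the mapping gap matters.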
one comment
        return res

    # def test_cinn(self):
Leave a TODO here and remove this test; no commented-out code should be pushed.
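A hedged sketch of the shape such a TODO could take (the tag and wording are placeholders, not from this PR):

# TODO(prim): re-enable test_cinn once CINN's cumsum op mapper supports
# the full paddle attribute set (axis/flatten/exclusive/reverse);
# see the CINN comment above.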
 self.attrs = {'axis': 2}
 self.inputs = {'X': np.random.random((5, 6, 10)).astype("float64")}
 self.outputs = {'Out': self.inputs['X'].cumsum(axis=2)}

 def test_check_output(self):
-    self.check_output()
+    self.check_output(check_prim=True)
Base operators have no forward decomposition rule, so there is no need to test the composite rule in the forward test; this parameter can be removed.
Removed; same below.
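A minimal sketch of the resulting forward test once the parameter is dropped (assuming the OpTest subclasses shown in this diff):

def test_check_output(self):
    # cumsum is a base op with no forward decomposition rule,
    # so only the plain forward check is needed here.
    self.check_output()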
@@ -138,35 +144,41 @@ def setUp(self):
         }

     def test_check_output(self):
-        self.check_output()
+        self.check_output(check_prim=True)
Same as above.
 self.attrs = {'axis': 1}
 self.inputs = {'X': np.random.random((5, 6, 10)).astype("float64")}
 self.outputs = {'Out': self.inputs['X'].cumsum(axis=1)}

 def test_check_output(self):
-    self.check_output()
+    self.check_output(check_prim=True)
Same as above.
 self.attrs = {'axis': 0}
 self.inputs = {'X': np.random.random((5, 6, 10)).astype("float64")}
 self.outputs = {'Out': self.inputs['X'].cumsum(axis=0)}

 def test_check_output(self):
-    self.check_output()
+    self.check_output(check_prim=True)
Same as above.
 self.inputs = {'X': np.random.random((5, 20)).astype("float64")}
 self.outputs = {'Out': self.inputs['X'].cumsum(axis=1)}

 def test_check_output(self):
-    self.check_output()
+    self.check_output(check_prim=True)
Same as above.
paddle/phi/api/yaml/op_compat.yaml
attrs :
  axis : axis
Attributes whose name is the same on both sides do not need to be configured in this mapping.
Fixed.
LGTM for op_compat
one comment
bool flatten = static_cast<bool>(this->Attr<bool>("flatten"));
bool exclusive = static_cast<bool>(this->Attr<bool>("exclusive"));
bool reverse = static_cast<bool>(this->Attr<bool>("reverse"));
VLOG(6) << "Runing add_grad composite func";
cumsum?
LGTM; fix the problem in the next PR.
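For reference, a minimal NumPy sketch of the identity behind the composite backward rule this PR adds. It assumes the standard result that, for an inclusive cumsum, x_grad is the reverse cumsum of out_grad along the same axis (the flatten/exclusive/reverse attributes read out in the C++ snippet above are threaded through analogously); the function name here is illustrative:

import numpy as np

def cumsum_grad_ref(out_grad, axis):
    # Reverse cumulative sum of the upstream gradient along `axis`.
    flipped = np.flip(out_grad, axis=axis)
    return np.flip(np.cumsum(flipped, axis=axis), axis=axis)

# Check against the explicit Jacobian for a small 1-D case:
# J[i, j] = d cumsum(x)[i] / d x[j] = 1 if j <= i else 0,
# so x_grad = J^T @ out_grad is exactly the reverse cumsum of out_grad.
n = 7
g = np.random.random(n)
J = np.tril(np.ones((n, n)))
assert np.allclose(J.T @ g, cumsum_grad_ref(g, axis=0))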
PR types: Others
PR changes: OPs
Describe: add cumsum prim backward