feat: add cumulative_trapezoid to PyTorch frontend #23749
Conversation
Hey @it-doesnt-matter, thanks for contributing! All the tests are passing. However, there are a few minor fixes needed:
- This function accepts two tensors, so I would suggest using promote_types_of_torch_inputs. Context: https://unify.ai/docs/ivy/overview/deep_dive/ivy_frontends.html#frontend-data-type-promotion-rules
- The testing doesn't seem exhaustive enough, because you're only testing this function with float tensors.
Feel free to reach out to me in case something seems unclear. Good luck!
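To illustrate the two points above, here is a minimal NumPy sketch of what cumulative trapezoidal integration with input promotion looks like. This is not Ivy's actual implementation: `np.result_type` merely stands in for the frontend's `promote_types_of_torch_inputs` helper, and the function name and signature below are assumptions for illustration.

```python
import numpy as np

def cumulative_trapezoid(y, x=None, dx=1.0):
    """Cumulative trapezoidal integration over the last axis of a 1-D array.

    Sketch only: np.result_type mimics the promote-both-inputs step that
    promote_types_of_torch_inputs performs in the Ivy torch frontend.
    """
    y = np.asarray(y)
    if x is not None:
        x = np.asarray(x)
        # Promote both tensors to a common dtype before computing,
        # so e.g. int y with float x yields a float result.
        common = np.result_type(y, x)
        y, x = y.astype(common), x.astype(common)
        d = np.diff(x)          # per-interval spacing
    else:
        d = dx                  # uniform spacing
    # Area of each trapezoid between consecutive samples, accumulated.
    return np.cumsum(d * (y[1:] + y[:-1]) / 2.0)
```

For example, `cumulative_trapezoid([1.0, 2.0, 3.0])` gives `[1.5, 4.0]`, and its last element matches the plain trapezoidal integral of the whole array. The promotion step is also why tests should cover integer and mixed-dtype inputs, not just floats.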
@illia-bab Regarding your second point:
@it-doesnt-matter My point regarding your question: you can modify
@illia-bab Thanks for the tips!
@it-doesnt-matter Great changes! However, I would suggest adding
@illia-bab PyTorch in general, and cumulative_trapezoid in particular, do support bfloat16 as far as I can tell.
Hey @it-doesnt-matter, thanks for the questions:
- Regarding unsupported dtypes: if uint16, uint32 and uint64 are not supported by the PyTorch frontend, ideally the testing pipeline should detect that automatically and we wouldn't need to list them in the with_unsupported_dtypes decorator. However, this doesn't work as expected, so let's include them in the decorator for now and I'll pass the issue on to the testing team.
- Regarding the bfloat16 dtype: the implementation you provide is compositional, and the get_item function called during array indexing doesn't support this dtype with the paddle backend, hence the errors. I would recommend explicitly casting bfloat16 to the nearest supported dtype so the implementation stays valid for the paddle backend, and adding a ToDo comment to remove the cast in the future.
- Fork: I've also noticed your fork is 333 commits behind; please sync it.
Feel free to reach out to me in case you have any other questions. Thanks and good luck!
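The two suggestions above can be sketched in plain Python. Note the hedges: the decorator below is a hypothetical stand-in for Ivy's with_unsupported_dtypes (whose real signature also takes a version mapping), and since NumPy has no bfloat16, float16 stands in for it in the cast-workaround step.

```python
import functools
import numpy as np

def with_unsupported_dtypes(unsupported):
    """Hypothetical stand-in for Ivy's with_unsupported_dtypes decorator:
    reject inputs whose dtype name appears in `unsupported`."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(y, *args, **kwargs):
            y = np.asarray(y)
            if y.dtype.name in unsupported:
                raise TypeError(f"{fn.__name__} does not support {y.dtype.name}")
            return fn(y, *args, **kwargs)
        return wrapper
    return decorator

@with_unsupported_dtypes({"uint16", "uint32", "uint64"})
def cumulative_trapezoid(y, dx=1.0):
    y = np.asarray(y)
    # TODO: remove this cast once the backend's get_item supports the
    # low-precision dtype (float16 stands in for bfloat16 here, since
    # NumPy does not provide a bfloat16 dtype).
    if y.dtype == np.float16:
        y = y.astype(np.float32)
    return np.cumsum(dx * (y[1:] + y[:-1]) / 2.0)
```

With this shape, a uint16 input raises a clear TypeError instead of failing deep inside the composition, and a low-precision input is silently computed at the nearest supported precision, exactly the workaround suggested for the paddle backend.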
Thank you for this PR, here are the CI results: this pull request does not result in any additional test failures. Congratulations!
This PR has been labelled as stale because it has been inactive for more than 7 days. If you would like to continue working on this PR, then please add another comment, or this PR will be closed in 7 days.
Thanks so much for the contribution! @it-doesnt-matter
Hi @it-doesnt-matter, thanks for the PR! However, it looks like it has been inactive for a while, so I've closed it for now. 🙂 Please feel free to submit other PRs based on our Open Tasks. Thanks!
Close #23747