
feat: add cumulative_trapezoid to PyTorch frontend #23749

Conversation

it-doesnt-matter
Contributor

Close #23747

@ivy-leaves ivy-leaves added the PyTorch Frontend Developing the PyTorch Frontend, checklist triggered by commenting add_frontend_checklist label Sep 17, 2023
Contributor

@illia-bab illia-bab left a comment


Hey @it-doesnt-matter, thanks for contributing! All the tests are passing. However, there are a few minor fixes needed:

  1. This function accepts two tensors, so I suggest using promote_types_of_torch_inputs. Context - https://unify.ai/docs/ivy/overview/deep_dive/ivy_frontends.html#frontend-data-type-promotion-rules.
  2. The testing doesn't seem exhaustive enough yet, because you're only testing this function with float tensors.

Feel free to reach out to me if anything seems unclear. Good luck!
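For context, the behaviour under discussion can be sketched in plain NumPy (a rough sketch, not the Ivy implementation; here np.result_type stands in for promote_types_of_torch_inputs):

```python
import numpy as np

def cumulative_trapezoid(y, x=None, dx=1.0, axis=-1):
    # Sketch of torch.cumulative_trapezoid semantics in NumPy.
    y = np.asarray(y)
    if x is not None:
        x = np.asarray(x)
        # Mirror the frontend's input promotion: compute in the common dtype.
        common = np.result_type(y, x)
        y, x = y.astype(common), x.astype(common)
        d = np.diff(x, axis=axis)
    else:
        d = dx
    # Trapezoid rule: mean of adjacent samples times the step, accumulated.
    head = [slice(None)] * y.ndim
    tail = [slice(None)] * y.ndim
    head[axis] = slice(1, None)
    tail[axis] = slice(None, -1)
    avg = (y[tuple(head)] + y[tuple(tail)]) / 2.0
    return np.cumsum(d * avg, axis=axis)
```

Like torch, the sketch returns n-1 elements along the integrated dimension.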

@it-doesnt-matter
Contributor Author

@illia-bab Regarding your second point:
I actually asked about this in another PR of mine (#23722 (comment)), but forgot to copy & paste the question here as well.
Is it okay if I modify _get_dtype_and_matrices()? The only other function that uses it is torch.trapezoid, which supports the same dtypes as torch.cumulative_trapezoid. Or would it be better if I wrote a new function?
What about the number of dimensions? Is it okay to use only two-dimensional arrays or should the number of dimensions vary as well?

@ivy-seed ivy-seed assigned jieunboy0516 and unassigned illia-bab Sep 27, 2023
@illia-bab
Contributor

illia-bab commented Sep 27, 2023

@it-doesnt-matter On your first question: you can modify _get_dtype_and_matrices() as long as it doesn't break the testing of torch.trapezoid, so you will need to check this.
On your second question: if it works for the 2D case, then by induction it should also work for other numbers of dimensions. However, this still needs to be tested.
So here is what I'd suggest: try modifying _get_dtype_and_matrices() to test other dtypes as a sanity check. If everything works as expected, implement your own function to test the implementation for different numbers of dimensions, to cover as many cases as possible. But before that, check the data generation helpers in ivy to avoid re-implementing something that already exists. You can also look at the data-generating helper functions involved in testing ivy's own functions.
Apart from this, dim should be a valid axis of y. Instead of hardcoding it to 0 or -1, you could use helpers.get_axis.
Good luck!
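The idea behind helpers.get_axis can be sketched without the Ivy test helpers (a minimal stand-in, assuming the usual PyTorch convention that dim may be negative):

```python
import random

def draw_valid_axis(ndim, rng=random):
    # Any integer in [-ndim, ndim) names a real dimension of a rank-ndim
    # array, so a test should sample from the whole range rather than
    # hardcode 0 or -1.
    return rng.randrange(-ndim, ndim)
```

As far as I recall, helpers.get_axis draws from this same range (and can also draw tuples of axes), so it replaces the hardcoded values directly.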

@it-doesnt-matter
Contributor Author

@illia-bab Thanks for the tips!
I ended up writing a new function, which means that test_torch_trapezoid() still uses the old function and thus only tests with floats. I ran some local tests with other dtypes and some of them failed. I can fix trapezoid() and test_torch_trapezoid() if you would like.

@illia-bab
Contributor

@it-doesnt-matter Great changes! However, I suggest adding bfloat16, uint16, uint32 and uint64 to the with_unsupported_dtypes decorator, because PyTorch doesn't support them, which makes the tests fail. Then we're ready to merge!
Regarding the trapezoid function and its tests: you can fix them, but that should go in a separate PR. Feel free to request a review from me.
Thanks and good luck!
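For readers unfamiliar with the decorator, here is a minimal stand-in showing the mechanism (the real decorator lives in ivy and also validates version ranges; the version key below is illustrative):

```python
def with_unsupported_dtypes(spec, backend):
    # Stand-in for ivy's decorator: record the unsupported dtypes on the
    # function object so the test pipeline can skip those dtypes.
    def decorator(fn):
        fn.unsupported_dtypes = spec
        fn.backend = backend
        return fn
    return decorator

@with_unsupported_dtypes(
    {"2.1.0 and below": ("bfloat16", "uint16", "uint32", "uint64")}, "torch"
)
def cumulative_trapezoid(y, x=None, *, dim=-1):
    ...  # frontend body elided
```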

@it-doesnt-matter
Contributor Author

@illia-bab PyTorch in general, and cumulative_trapezoid in particular, do support bfloat16 as far as I can tell.
Do I actually have to add uint16, uint32 and uint64 to with_unsupported_dtypes? Couldn't that be inferred from the fact that this is the PyTorch frontend and that PyTorch doesn't support those types?

Contributor

@illia-bab illia-bab left a comment


Hey @it-doesnt-matter, thanks for the questions:
Regarding unsupported dtypes: if uint16, uint32 and uint64 are not supported by the PyTorch frontend, the testing pipeline should ideally detect that automatically, and we wouldn't need to list them in the with_unsupported_dtypes decorator. However, this doesn't work as expected at the moment, so let's include them in the decorator for now and I'll pass the issue on to the testing team.
Regarding the bfloat16 dtype: your implementation is compositional, and the get_item function called during array indexing doesn't support this dtype with the paddle backend, which causes the errors. I recommend explicitly casting bfloat16 to the nearest supported dtype to make the implementation valid for the paddle backend, and adding a ToDo comment to remove the cast in the future.
Fork: I've also noticed your fork is 333 commits behind; please sync it.
Feel free to reach out to me if you have any other questions. Thanks and good luck!

@github-actions
Contributor

Thank you for this PR, here are the CI results:


This pull request does not result in any additional test failures. Congratulations!

@it-doesnt-matter
Contributor Author

@illia-bab

  • I added the bfloat16 conversion for paddle compatibility.
  • I didn't add uint16, uint32 or uint64 to with_unsupported_dtypes(), as I didn't run into any issues without them and the docs also say not to add them.
  • The tests still sometimes fail when tensorflow is the backend.

@ivy-seed

ivy-seed commented Jan 2, 2024

This PR has been labelled as stale because it has been inactive for more than 7 days. If you would like to continue working on this PR, then please add another comment or this PR will be closed in 7 days.

@ivy-seed ivy-seed added the Stale label Jan 2, 2024
@aibenStunner aibenStunner self-assigned this Jan 5, 2024
@aibenStunner
Contributor

Thanks so much for the contribution! @it-doesnt-matter
Tests for the tensorflow backend sometimes fail due to precision mismatches.
Adjusting the rtol and atol parameters accordingly could help resolve this.
Thanks.
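The kind of precision mismatch being described, and the fix, can be illustrated in NumPy (illustrative values; in the real test suite the knobs are the rtol/atol arguments of the Ivy test helpers):

```python
import numpy as np

# Accumulating in float32 vs float64 produces slightly different partial
# sums, which is roughly what happens across the tensorflow and torch
# backends for a cumulative reduction like cumulative_trapezoid.
y = np.linspace(0.0, 1.0, 10_000)
ref = np.cumsum(y.astype(np.float64))
res = np.cumsum(y.astype(np.float32)).astype(np.float64)

exact = np.array_equal(ref, res)                      # exact match: too strict
close = np.allclose(ref, res, rtol=1e-3, atol=1e-3)   # loosened tolerances
```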

@aibenStunner
Contributor

Hi @it-doesnt-matter ,

Thanks for the PR!

However, it looks like it has been inactive for a while.

Therefore, I’ve simply closed it for now. 🙂

Please feel free to submit other PRs based on our Open Tasks.

Thanks!
