feat: added unflatten frontend and backend support #27416
Conversation
PR Compliance Checks Passed!
The unflatten frontend is still missing most of the backends, and some backends use aliases. I'll implement those next.
Hi @Kacper-W-Kozdon, thanks for starting a contribution! Ping me when you are ready to get a review or if you have any questions. 😁
- …23279) (Co-authored-by: NripeshN <nripesh14@gmail.com>)
- … and node_split from Splitter (Co-authored-by: NripeshN <nripesh14@gmail.com>, Bhushan Srivastava <59949692+he11owthere@users.noreply.github.com>)
- …on call (Transpile-AI#27428) (Co-authored-by: NripeshN <nripesh14@gmail.com>)
- …splitter function
There are still some problems with tests, but splitting dtypes_values_axis() into dtype_and_values() + get_axis() seemingly helped with the dim issues.
@joaozenobio the PR is ready for review.
Here dim and shape can be both positional and keyword. Example:

```python
In:  torch.unflatten(input=torch.zeros((2, 4, 2)), dim=1, sizes=(2, 2)).shape
Out: torch.Size([2, 2, 2, 2])
```
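As a rough illustration of the semantics (using NumPy here rather than the actual torch or Ivy implementation, and a hypothetical helper name), unflatten can be viewed as a reshape that expands a single dimension:

```python
import numpy as np

def unflatten_like(x, dim, sizes):
    # Sketch only: expand dimension `dim` of `x` into the tuple `sizes`,
    # mirroring torch.unflatten's behaviour via a plain reshape.
    new_shape = x.shape[:dim] + tuple(sizes) + x.shape[dim + 1:]
    return x.reshape(new_shape)

out = unflatten_like(np.zeros((2, 4, 2)), dim=1, sizes=(2, 2))
print(out.shape)  # (2, 2, 2, 2)
```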
Hi @Kacper-W-Kozdon! Thanks for the updates 😄 I have run your PyTorch test and, if I did not get the wrong version of your code, the frontend test for torch did not pass. The error shown was:
```
x = tensor([[False, False]]), dim = 0, shape = (1, 2)

    @handle_out_argument
    def unflatten(
        x: torch.Tensor,
        /,
        dim: int = 0,
        shape: Tuple[int] = None,
        *,
        out: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
>       res = torch.unflatten(x, dim, shape)
E       RuntimeError: unflatten: Provided sizes [1, 2] don't multiply up to the size of dim 0 (1) in the input tensor
```
Could you please check whether your test can actually generate these values?
Also, if you wish, you can continue working on the other backends and build a test function for the backend in this same PR 🚀
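The RuntimeError above comes down to a simple invariant: the provided sizes must multiply up to the size of the dimension being unflattened. A minimal sketch of that check (`check_unflatten_sizes` is a hypothetical helper for illustration, not part of the PR):

```python
from math import prod

def check_unflatten_sizes(input_shape, dim, sizes):
    # torch.unflatten requires prod(sizes) == input_shape[dim];
    # otherwise it raises the RuntimeError quoted above.
    return prod(sizes) == input_shape[dim]

# The failing case from the traceback: x has shape (1, 2), so dim 0 has
# size 1, but the test generated sizes (1, 2), whose product is 2.
print(check_unflatten_sizes((1, 2), 0, (1, 2)))   # False -> would raise
print(check_unflatten_sizes((2, 4, 2), 1, (2, 2)))  # True -> valid
```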
No, you have the right one, and I already know where to look for the cause. If the dimension to unflatten has size 1, there's nothing to factorise, so the original shape was passed as the argument instead. This means I need to add a condition for a dimension of size 1, and that should solve the problem. I will also look into the other backends this week; it shouldn't take too much work. At the very least, NumPy and TensorFlow have some functions similar to unflatten.
Awesome! When you are done making the necessary modifications, just ping me 😄 And thanks for the initiative! I am going to tag your PR to reference the other backends too 🚀
Test fixed. All that was required was changing factors = [] to factors = [1], so shape never gets an empty list. (Technically the test should cover all needs as is, but after adding the missing backends I might move the factorization function to a more fitting category if I find one and, just for completeness, return a list of partial factors instead of a full factorization into primes.)

TensorFlow backend for unflatten added. TensorFlow doesn't have unflatten, so its behaviour was recreated with reshape. NumPy seems to have unflatten natively, so I only have to check whichever backend is left (jax, I think?) from the ones considered in pytest. I'll have the PR ready by the end of the day.
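The fix described above can be sketched roughly as follows (the helper name is hypothetical and the PR's actual factorization code may differ). Seeding the list with 1 means a size-1 dimension still yields a non-empty, valid shape instead of an empty list:

```python
def prime_factors(n):
    # Seed with 1 so that n == 1 (a size-1 dimension) still produces
    # a usable shape [1] rather than the empty list that broke the test.
    factors = [1]
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(1))   # [1]
print(prime_factors(12))  # [1, 2, 2, 3]
```

Since the product of the returned factors always equals n, any such list is a valid `sizes` argument for unflattening a dimension of size n.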
Ready for review: all backends and the PyTorch frontend.
Nice work @Kacper-W-Kozdon ! All tests are passing and the superset behavior was well managed. Your PR is good to go 😄 |
Hi @Kacper-W-Kozdon! Your work on the frontend is great! With all the backend functions implemented, would you like to continue by adding the Ivy Experimental API function and its test, plus the data-class functions and tests, or would you like to finish our work here?
Sure, I would happily look into that, but I'll have to get more familiar with the containers used by Ivy and how their methods are written. It shouldn't take too long, but it'll be a touch slower than just the frontend implementation. Would it be possible to merge the current progress as is and open a new issue for the experimental API and classes?
Sure thing! I'll merge your PR now and tag you in the new issue 😄 |
Hi @Kacper-W-Kozdon! Sorry for the delay; here is the new issue you should link to your PR, covering the next steps of the unflatten implementation. Thanks for the hard work!! 🚀
feat: added unflatten Pytorch frontend and backend support
Currently adds ivy.unflatten, unflatten for the torch backend, and unflatten in ivy/frontends/torch. Still requires unflatten (or the corresponding functions) in the other backends.
Closes #22252