fix: Frontend Failing Test: torch - reduction_ops.torch.mean #28278
Conversation
Hi, @Ishticode - all tests passing! Core and torch frontend alike! :)
Could you fix the demos conflicts? And will wait on a fresh CI run. Currently it's showing way too many failures. Thanks for the patience.
On it! The CI failures are most likely due to the updates to docs/demos made recently, and the PR still contains the old docs/demos. UPDATE: I might be wrong, but it also seems that a lot of frontends were added recently.
66075e1 to 698fde3 (Compare)
Hey @vedpatwardhan, there's a surprisingly large number of failed tests in the workflows, even though the frontend and core ivy tests passed. Any suggestions? A lot (I can't say if all) of the failures seem unrelated to the ivy.mean() changes.
I think it would be worth checking how many of those functions (for which the tests are failing) depend on ivy.mean().
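One rough way to do that triage (the frontends path here is an assumption about the repository layout, not part of this PR) would be something like:

```python
# Quick-and-dirty triage helper: list frontend source files that call
# ivy.mean, as a starting point for checking the failing tests.
# The path is an assumed repository layout; adjust as needed.
from pathlib import Path

for path in Path("ivy/functional/frontends").rglob("*.py"):
    if "ivy.mean(" in path.read_text():
        print(path)
```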
This PR has been labelled as stale because it has been inactive for more than 7 days. If you would like to continue working on this PR, then please add another comment or this PR will be closed in 7 days.
Hey @vedpatwardhan, I'll try to go through the failing tests by this evening and let you know which of them use ivy.mean() in either the frontend or the backend.
EDIT: Listed the failures which in any way call ivy.mean().
I think I got at the very least most of the functions which use ivy.mean().
@Ishticode would you mind checking the message right above as well and weighing in on the issue? :)
Hi @Kacper-W-Kozdon |
It seems to me the CI has picked up more than it needs to. The essential relevant tests, like test_mean and test_torch_mean, are fine, and besides, the changes are simple enough: they just add a dtype parameter which, if specified, the input is cast to before the actual mean call.
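For context, a minimal sketch of the kind of change described above (the exact signature is an assumption, and frontend decorators such as @with_supported_dtypes and @to_ivy_arrays_and_back are omitted; this is not the PR's actual diff):

```python
# Hypothetical torch-frontend mean that casts the input to `dtype`
# before delegating the reduction to ivy.mean.
import ivy


def mean(input, dim=None, keepdim=False, *, dtype=None, out=None):
    # If a dtype is requested, cast the input first so the reduction runs
    # in that precision, mirroring torch.mean's behaviour.
    if dtype is not None:
        input = ivy.astype(input, dtype)
    return ivy.mean(input, axis=dim, keepdims=keepdim, out=out)
```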
The changes look good to me. I will go ahead and merge this. Will keep an eye on any relevant failures. Thank you very much @Kacper-W-Kozdon for your effort, patience and diligence. It's much appreciated. 🙂
PR Description
torch.mean() from the torch frontend does not seem to be properly implemented. ivy.mean() is missing the dtype argument. The comparison against TensorFlow's ground truth might also be done with the wrong TensorFlow function, as TensorFlow only seems to have tf.reduce_mean(), which works slightly differently from torch.mean(); adding mean() to the tf backend might be needed.
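As a rough illustration of that difference (a hypothetical helper, not Ivy's actual tensorflow backend code), torch.mean-style dtype handling could be layered over tf.reduce_mean like this:

```python
# Hypothetical sketch: emulate torch.mean's dtype handling on top of
# tf.reduce_mean, which has no dtype argument of its own.
import tensorflow as tf


def mean(x, axis=None, keepdims=False, dtype=None):
    # Cast explicitly before reducing, the way torch.mean does when
    # a dtype is passed.
    if dtype is not None:
        x = tf.cast(x, dtype)
    return tf.reduce_mean(x, axis=axis, keepdims=keepdims)
```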
Changes made in this PR:
- Handled torch.mean() with the dtype argument being assigned a value (type-casting before the mean call).
- Updated the @with_supported_dtypes decorator. This was one of the reasons for the failing tests, as complex dtypes were substituted with 'float32' even though the associated input was still complex.
- Added the dtype argument in backends. This was one of the reasons for the failing tests.
Related Issue
Closes #28276
Checklist
Socials