nanquantile #20888

Merged: 10 commits into ivy-llc:master, Aug 7, 2023
Conversation

aryankhatana01 (Contributor)

No description provided.

@aryankhatana01 (Contributor, Author) commented:

#20890

@ivy-leaves added the PyTorch Frontend label (Developing the PyTorch Frontend; checklist triggered by commenting add_frontend_checklist) on Jul 27, 2023
@ivy-leaves commented:

If you are working on an open task, please edit the PR description to link to the issue you've created.

For more information, please check the ToDo List Issues Guide.

Thank you 🤗

@sherry30 (Contributor) left a comment:

Requested 2 changes; can you get the tests to pass for this function?

Also, it's weird that there was no test for nanquantile in the experimental API. Thanks for adding that too 😄

@sherry30 (Contributor) left a comment:

1 change requested



nanquantile.unsupported_dtypes = {
    "torch": ("float16", "bfloat16"),
}
@sherry30 commented:

Actually, can you add this as a decorator on the function?

@with_unsupported_dtypes(
    {"2.0.1 and below": ("float16", "bfloat16")}, "torch"
)

Sorry for making you change this the 2nd time 😅
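
For context, the resulting decorated frontend function would look roughly like this. This is a minimal sketch assuming the usual ivy torch-frontend pattern; the import paths, the torch-style signature, and the to_ivy_arrays_and_back wrapper are assumptions, not the PR's exact code:

import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back


# Sketch only: the signature mirrors torch.nanquantile; the PR's code may differ.
@with_unsupported_dtypes({"2.0.1 and below": ("float16", "bfloat16")}, "torch")
@to_ivy_arrays_and_back
def nanquantile(input, q, dim=None, keepdim=False, *, interpolation="linear", out=None):
    # The frontend just delegates to ivy.nanquantile.
    return ivy.nanquantile(
        input, q, axis=dim, keepdims=keepdim, interpolation=interpolation, out=out
    )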

@sherry30 (Contributor) left a comment:

Requested 1 change.

Can you get the tests passing if they fail after that.
Feel free to ask me anything if you get stuck. 😄

Thanks

    test_gradients=st.just(False),
)
def test_nanquantile(
    *, dtype_and_x, keep_dims, test_flags, backend_fw, fn_name, on_device
@sherry30 commented:

dtype_and_x should be dtype_x_axis because that's how it's defined above.
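
That is, the signature would become something like the following (a sketch of just the rename, assuming the hypothesis strategy above is bound to the name dtype_x_axis; the decorator and body are unchanged):

def test_nanquantile(
    *, dtype_x_axis, keep_dims, test_flags, backend_fw, fn_name, on_device
):
    ...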

@sherry30 (Contributor) left a comment:

Pointed out 1 thing.
You can check the intelligent tests run in this PR to see whether the tests are failing and view the failure output. You'll get a good understanding of why it is failing from that.

Thanks

@sherry30 (Contributor) left a comment:

Hi,

So one thing I realized: there is no backend implementation of ivy.nanquantile 😅; it's just a hollow function with no implementation. This is a mistake on our end. I'll create a task for it and someone from the engineering team can look into it.
I'll merge your PR since you've written tests, and the torch frontend function is just supposed to call ivy.nanquantile.
I've requested 1 change; I'll merge once you've made that change.
Thanks
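
For illustration, a backend implementation could be a thin wrapper over the native quantile routine. The following is a hypothetical numpy-backend sketch, not part of this PR, and the actual ivy backend signature may differ:

# Hypothetical numpy backend sketch; not part of this PR.
import numpy as np


def nanquantile(a, q, /, *, axis=None, keepdims=False, interpolation="linear", out=None):
    # np.nanquantile computes the quantile while ignoring NaNs.
    # numpy >= 1.22 takes the interpolation scheme via the "method" keyword.
    return np.nanquantile(
        a, q, axis=axis, keepdims=keepdims, method=interpolation, out=out
    )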

@sherry30 merged commit 2b1d4ce into ivy-llc:master on Aug 7, 2023 (220 of 266 checks passed).
@KareemMAX mentioned this pull request on Aug 7, 2023.
arshPratap pushed a commit to arshPratap/ivy referencing this pull request on Sep 11, 2023.