
OpInfo for Slice #85314

Closed · wants to merge 4 commits

Conversation

Krovatkin (Contributor)

This is based on @wconstab's tests from #84680.


pytorch-bot bot commented Sep 20, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85314

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 Failures, 2 Pending as of commit b7e8dc1.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@@ -12774,6 +12813,13 @@ def reference_flatten(input, start_dim=0, end_dim=-1):
supports_forward_ad=True,
supports_fwgrad_bwgrad=True,
supports_out=False),
OpInfo('slice',
Collaborator

Nice!

Collaborator

Test failures are real, though, because torch.slice doesn't exist.

You can run many of the tests locally with python test/test_ops.py -v -k slice to see what they're doing.

I think this OpInfo needs to define its op metadata. This is sometimes done to wrap an operation and set a seed so it can be used for reproducible testing, like with torch.Tensor.uniform_:

op=lambda inp, *args, **kwargs: wrapper_set_seed(torch.Tensor.uniform_, inp, *args, **kwargs),

It's also done when an operation exists as a Tensor method but not as a torch function, like with contiguous:

op=lambda x, *args, **kwargs: x.contiguous(*args, **kwargs),

And for more exotic cases, like creating a jiterated op:

op=torch.cuda.jiterator._create_jit_fn("template <typename T> T unary(T x) { return x * x + x; }"),
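
Applied here, a minimal sketch in the same style might wrap the underlying aten overload, since torch.slice has no public function form. This is illustrative only, not code from this PR; slice_op is a hypothetical name, and the signature assumes the aten::slice.Tensor schema:

import torch

# Hypothetical sketch, not this PR's actual change: torch.slice doesn't exist
# as a public function, so the OpInfo's op= could wrap the aten overload instead.
# The keyword defaults mirror aten::slice.Tensor(self, dim=0, start=None, end=None, step=1).
def slice_op(x, dim=0, start=None, end=None, step=1):
    return torch.ops.aten.slice.Tensor(x, dim, start, end, step)

t = torch.arange(10)
# Behaves like ordinary step slicing on the chosen dimension.
assert torch.equal(slice_op(t, 0, 2, 8, 2), t[2:8:2])

Any real entry would also need a matching sample_inputs_func; that part is omitted here.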

ezyang (Contributor) commented Sep 23, 2022

I'm commandeering this PR so I can turn it into a stack

ezyang (Contributor) commented Sep 23, 2022

commandeered at #85554

ezyang closed this Sep 23, 2022