
Make fbgemm::jagged_index_select pt2_compliant #2170

Closed · wants to merge 5 commits

Conversation

@zou3519 (Contributor) commented Nov 29, 2023

Summary:

- The problem with the original op was that its Autograd implementation
  needed to call `Tensor.item()`, which does not work with FakeTensors
  (though it may some day in the future).
- We create two new ops, `jagged_index_select_2d_forward_v2` and
  `jagged_index_add_2d_forward_v2` (which is effectively the backward), that
  do the `Tensor.item()` calls, and change `fbgemm::jagged_index_select`'s
  Autograd implementation to call those.
- We add abstract impls for those two new ops.
- Finally, we move the `fbgemm::jagged_index_select` implementation to
  CompositeImplicitAutograd (and delete the CPU/CUDA impls, because they are
  now redundant).

Differential Revision: D51670069

netlify bot commented Nov 29, 2023

Deploy Preview for pytorch-fbgemm-docs canceled.

Latest commit: a5e0137
Latest deploy log: https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/6567888f58008900084993aa

@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D51670069

Summary:

It needed an abstract impl.

Reviewed By: williamwen42

Differential Revision: D51647394

Summary:

See title.

Reviewed By: williamwen42

Differential Revision: D51647393

Summary:

- Added abstract impl in sparse_ops.py. I don't think it's worth splitting up
  the abstract impls into multiple .py files right now, unless someone comes to
  us with memory issues.

Reviewed By: williamwen42

Differential Revision: D51647392

Summary:

Also deleted two skips that were marked as flaky. Those don't appear to
actually be flaky.

Reviewed By: williamwen42

Differential Revision: D51647391

Summary:

- The problem with the original op was that its Autograd implementation
  needed to call `Tensor.item()`, which does not work with FakeTensors
  (though it may some day in the future).
- We create two new ops, `jagged_index_select_2d_forward_v2` and
  `jagged_index_add_2d_forward_v2` (which is effectively the backward), that
  do the `Tensor.item()` calls, and change `fbgemm::jagged_index_select`'s
  Autograd implementation to call those.
- We add abstract impls for those two new ops.
- Finally, we move the `fbgemm::jagged_index_select` implementation to
  CompositeImplicitAutograd (and delete the CPU/CUDA impls, because they are
  now redundant).

Differential Revision: D51670069
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D51670069


@facebook-github-bot (Contributor) commented:

This pull request has been merged in 1c40928.
