
add OpInfo for torch.nn.functional.grid_sample #62311


Closed
wants to merge 6 commits

Conversation

pmeier
Collaborator

@pmeier pmeier commented Jul 28, 2021

@pmeier pmeier added module: nn Related to torch.nn module: testing Issues related to the torch.testing module (not tests) labels Jul 28, 2021
@pmeier pmeier requested review from zou3519 and mruberry July 28, 2021 07:17
@facebook-github-bot
Contributor

facebook-github-bot commented Jul 28, 2021


💊 CI failures summary and remediations

As of commit 40c9b6d (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-scanned failure(s)

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

@pmeier
Collaborator Author

pmeier commented Jul 28, 2021

@bdhirsh bdhirsh added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jul 28, 2021
@mruberry
Collaborator

Test failures look real:

test_fn_grad_nn_functional_grid_sample_cuda_float64

  File "/opt/conda/lib/python3.9/site-packages/torch/autograd/gradcheck.py", line 544, in _check_analytical_jacobian_attributes
    raise GradcheckError('Backward is not reentrant, i.e., running backward with '
torch.autograd.gradcheck.GradcheckError: Backward is not reentrant, i.e., running backward with same input and grad_output multiple times gives different values, although analytical gradient matches numerical gradient. The tolerance for nondeterminism was 0.0.

needs to be addressed.
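(For context, one common source of this kind of gradcheck failure is nondeterministic accumulation order in a backward kernel, e.g. scatter-style atomic adds on CUDA: floating-point addition is not associative, so the same backward can yield slightly different gradients across runs even though each result is numerically close to the true gradient. A minimal pure-Python illustration of the underlying effect, not PyTorch code:)

```python
# Floating-point addition is not associative, so the order in which partial
# gradients are committed (e.g. by atomic adds) can change the summed result.
vals = [1e16, 1.0, -1e16, 1.0]

# One commit order: left to right. 1.0 is absorbed into 1e16, then cancelled.
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Another commit order: the large terms cancel first, so both 1.0s survive.
reordered = ((vals[0] + vals[2]) + vals[1]) + vals[3]

# The two "runs" disagree even though every input was identical.
assert left_to_right != reordered
```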

test_variant_consistency_jit_nn_functional_grid_sample_cuda_float32 can be skipped, I think. cc @eellison

RuntimeError: aliasOp != torch::jit::getOperatorAliasMap().end() INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/torch/csrc/jit/passes/utils/check_alias_annotation.cpp":159, please report a bug to PyTorch.
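(For reference, a hedged NumPy sketch of what the op under test computes: 2D bilinear sampling with align_corners=True on a single-channel image. This illustrates grid_sample's semantics only; it is not the PyTorch implementation and ignores padding modes and out-of-bounds coordinates.)

```python
import numpy as np

def bilinear_grid_sample(img, grid):
    """Sample img (H, W) at normalized (x, y) coordinates in [-1, 1].

    Illustrative sketch only: bilinear interpolation, align_corners=True,
    coordinates assumed in-bounds (no padding_mode handling).
    """
    H, W = img.shape
    out = np.empty(len(grid))
    for i, (gx, gy) in enumerate(grid):
        # align_corners=True maps -1 -> 0 and +1 -> (size - 1).
        x = (gx + 1) * (W - 1) / 2
        y = (gy + 1) * (H - 1) / 2
        # Corner pixels around the sampling point, clamped to the image.
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
        wx, wy = x - x0, y - y0
        # Weighted average of the four neighboring pixels.
        out[i] = (img[y0, x0] * (1 - wx) * (1 - wy)
                  + img[y0, x1] * wx * (1 - wy)
                  + img[y1, x0] * (1 - wx) * wy
                  + img[y1, x1] * wx * wy)
    return out

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
# (-1, -1) hits the top-left pixel, (1, 1) the bottom-right, and the
# grid center (0, 0) lands between all four pixels (their mean, 1.5).
samples = bilinear_grid_sample(
    img, np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]]))
```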

cc @zou3519 -- would you shepherd this PR through to merge?

@facebook-github-bot
Contributor

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@facebook-github-bot
Contributor

@zou3519 merged this pull request in 7630f40.

Labels
cla signed · Merged · module: nn (Related to torch.nn) · module: testing (Issues related to the torch.testing module (not tests)) · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

6 participants