forked from pytorch/pytorch
Structured kernels/index add #14
Open · krshrimali wants to merge 7 commits into master from structured_kernels/index_add
Conversation
======================================================================
ERROR: test_variant_consistency_eager_index_add_cpu_float32 (__main__.TestCommonCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
raise rte
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 368, in instantiated_test
result = test(self, **param_kwargs)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 734, in test_wrapper
return test(*args, **kwargs)
File "/home/krshrimali/Documents/Quansight/pytorch/test/test_ops.py", line 468, in test_variant_consistency_eager
_test_consistency_helper(samples, variants)
File "/home/krshrimali/Documents/Quansight/pytorch/test/test_ops.py", line 433, in _test_consistency_helper
output_process_fn_grad(expected_forward).sum().backward()
File "/home/krshrimali/Documents/Quansight/pytorch/torch/_tensor.py", line 320, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: derivative for aten::index_add is not implemented
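For context on what the operator being ported computes: a minimal pure-Python sketch of index_add's dim=0 semantics, using nested lists. This is illustrative only, not the ATen kernel; the helper name and list-of-lists representation are assumptions made for the example.

```python
def index_add_dim0(self_rows, index, source_rows, alpha=1):
    """Sketch of out-of-place index_add along dim 0:
    result[index[i]] += alpha * source[i] for each i.
    Rows are plain Python lists standing in for tensor rows."""
    out = [row[:] for row in self_rows]  # copy self, like the out-of-place variant
    for i, idx in enumerate(index):
        out[idx] = [a + alpha * b for a, b in zip(out[idx], source_rows[i])]
    return out

# Accumulate two source rows into rows 0 and 2 of a 3x2 zero "tensor".
result = index_add_dim0([[0, 0], [0, 0], [0, 0]], [0, 2], [[1, 2], [3, 4]])
```

Note that repeated indices accumulate: indexing the same row twice adds both source rows into it, which is why the backward pass needs an index_select-style gather rather than a plain slice.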
======================================================================
ERROR: test_fn_grad_index_add_cpu_complex128 (__main__.TestGradientsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
raise rte
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 368, in instantiated_test
result = test(self, **param_kwargs)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 734, in test_wrapper
return test(*args, **kwargs)
File "/home/krshrimali/Documents/Quansight/pytorch/test/test_ops.py", line 594, in test_fn_grad
self._grad_test_helper(device, dtype, op, op.get_op())
File "/home/krshrimali/Documents/Quansight/pytorch/test/test_ops.py", line 579, in _grad_test_helper
return self._check_helper(device, dtype, op, variant, 'gradcheck', check_forward_ad=check_forward_ad)
File "/home/krshrimali/Documents/Quansight/pytorch/test/test_ops.py", line 555, in _check_helper
self.assertTrue(gradcheck(fn, gradcheck_args,
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_utils.py", line 2686, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/gradcheck.py", line 1263, in gradcheck
return _gradcheck_helper(**args)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/gradcheck.py", line 1276, in _gradcheck_helper
_gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/gradcheck.py", line 937, in _gradcheck_real_imag
gradcheck_fn(imag_fn, imag_func_out, tupled_inputs, imag_outputs, eps,
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/gradcheck.py", line 1165, in _fast_gradcheck
analytical_vJu = _get_analytical_vJu_backward_mode(inputs, outputs, nondet_tol, check_grad_dtypes, all_v, all_u_dense)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/gradcheck.py", line 555, in _get_analytical_vJu_backward_mode
all_vJ = _check_analytical_jacobian_attributes(inputs, output, nondet_tol, check_grad_dtypes,
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/gradcheck.py", line 528, in _check_analytical_jacobian_attributes
vjps1 = _get_analytical_vjps_wrt_specific_output(vjp_fn, output.clone(), v)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/gradcheck.py", line 633, in _get_analytical_vjps_wrt_specific_output
grad_inputs = vjp_fn(v.reshape(sample_output.shape))
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/gradcheck.py", line 524, in vjp_fn
return torch.autograd.grad(output, diff_input_list, grad_output,
File "/home/krshrimali/Documents/Quansight/pytorch/torch/autograd/__init__.py", line 234, in grad
return Variable._execution_engine.run_backward(
RuntimeError: derivative for aten::index_add is not implemented
======================================================================
FAIL: test_out_index_add_cpu_float32 (__main__.TestCommonCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 368, in instantiated_test
result = test(self, **param_kwargs)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_device_type.py", line 734, in test_wrapper
return test(*args, **kwargs)
File "/home/krshrimali/Documents/Quansight/pytorch/test/test_ops.py", line 253, in test_out
self.assertEqual(expected, out)
File "/home/krshrimali/Documents/Quansight/pytorch/torch/testing/_internal/common_utils.py", line 1875, in assertEqual
super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
AssertionError: False is not true : Tensors failed to compare as equal! With rtol=1.3e-06 and atol=1e-05, found 25 element(s) (out of 25) whose difference(s) exceeded the margin of error (including 25 nan comparisons). The greatest difference was nan (3.817853033490371e+35 vs. nan), which occurred at index (0, 0).
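The NaN comparisons in this failure are characteristic of test_out catching elements the out= kernel never wrote: the test pre-fills the out tensor with garbage, so untouched elements survive and fail the comparison. A pure-Python sketch of that checking trick (the helper names here are assumptions, not the actual test harness):

```python
import math

def check_out_kernel(kernel, n):
    """Pre-fill `out` with NaN, run the kernel, and report which
    elements were never written (still NaN) -- the same idea test_out
    uses to catch out= kernels that skip elements."""
    out = [math.nan] * n
    kernel(out)
    return [i for i, v in enumerate(out) if math.isnan(v)]

def buggy_kernel(out):
    # Writes only element 0; a correct out= kernel must write every element.
    out[0] = 1.0

unwritten = check_out_kernel(buggy_kernel, 3)  # elements 1 and 2 were skipped
```

A correct kernel that writes all n elements would make the check return an empty list.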
All tests pass...! 🎉
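The two RuntimeErrors above ("derivative for aten::index_add is not implemented") point to a missing backward formula in tools/autograd/derivatives.yaml for the new out-of-place schema. A hedged sketch of what such an entry could look like; the exact schema string must match native_functions.yaml on this branch, and the helper-free formulas below are assumptions, not the entry actually committed:

```yaml
# Hypothetical derivatives.yaml entry for the out-of-place index_add.
# Exact signature and formula syntax must match this branch's codegen.
- name: index_add(Tensor self, int dim, Tensor index, Tensor source, *, Scalar alpha=1) -> Tensor
  self: grad
  source: grad.index_select(dim, index) * alpha
  # index is an integer tensor and is non-differentiable.
```

The intuition: index_add scatters alpha-scaled rows of source into self, so the gradient w.r.t. self passes through unchanged, and the gradient w.r.t. source gathers the corresponding rows of grad back out with index_select.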
This PR: adds the out= variant. cc: @ysiraichi