Conversation

kurtamohler
Collaborator

Fixes #53605

@facebook-github-bot
Contributor

facebook-github-bot commented Mar 24, 2021

💊 CI failures summary and remediations

As of commit a8996ec (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@kurtamohler kurtamohler force-pushed the index-complex-autograd branch from d605e4d to 34fa046 on March 24, 2021 at 17:37
supports_inplace_autograd=True,
sample_inputs_func=sample_inputs_index_put,
skips=(
    SkipInfo('TestCommon', 'test_variant_consistency_jit'),
Contributor

Why is this test being skipped? If it's intentionally skipped due to a test failure, could you share the error message or the reason why it's failing?

Collaborator Author

Yeah, there's a failure that I don't understand yet:

======================================================================
ERROR: test_variant_consistency_jit_index_put_cpu_float32 (__main__.TestCommonCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_device_type.py", line 297, in instantiated_test
    raise rte
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_device_type.py", line 292, in instantiated_test
    result = test_fn(self, *args)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_device_type.py", line 266, in test_wrapper
    return test(*args, **kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/test/test_ops.py", line 294, in test_variant_consistency_jit
    check_against_reference(self,
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_jit.py", line 63, in check_against_reference
    outputs_test = self.runAndSaveRNG(func, nograd_inputs, kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_jit.py", line 130, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/jit_metaprogramming_utils.py", line 301, in script_fn
    fn, tensors = gen_script_fn_and_args(method_name, func_type, *args, **kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/jit_metaprogramming_utils.py", line 292, in gen_script_fn_and_args
    CU = torch.jit.CompilationUnit(script)
RuntimeError: 
undefined value tensor:
  File "<string>", line 3

def the_method(i0, i1):
    return torch.index_put(i0, (tensor([4, 4]),), i1, accumulate=False)

Seems like it must be an existing issue that this PR did not introduce, but I could look into it anyway.
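The "undefined value tensor" error appears to come from how the JIT test harness builds the script: the index tensor seems to be inlined into the generated source via its repr, which prints as a bare `tensor([...])`, a name that is not defined inside the `CompilationUnit`. A minimal pure-Python sketch of the same failure mode (no PyTorch needed; `FakeTensor` and the `index_put` lambda are stand-ins for illustration, not real APIs):

```python
# Pure-Python sketch of the failure above: embedding an object's repr
# into generated source yields a bare name that is undefined when the
# source runs, much like the bare `tensor(...)` in the generated script.

class FakeTensor:
    """Stand-in whose repr mimics torch.Tensor's ('tensor([...])')."""
    def __init__(self, data):
        self.data = data
    def __repr__(self):
        return f"tensor({self.data!r})"

idx = FakeTensor([4, 4])
# Source built by string formatting, as the JIT test harness appears to do:
script = f"def the_method(i0):\n    return index_put(i0, ({idx!r},))\n"

namespace = {"index_put": lambda t, indices: t}  # dummy stand-in
exec(script, namespace)
try:
    namespace["the_method"]([0.0] * 5)
    error = None
except NameError as e:
    error = str(e)  # "name 'tensor' is not defined"

assert error is not None and "tensor" in error
```

Under this reading, the bug would live in the script-generation helper rather than in the autograd change this PR makes, which is consistent with it being a pre-existing issue.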

supports_inplace_autograd=False,
op=torch.Tensor.__getitem__,
sample_inputs_func=sample_inputs_getitem,
skips=(
    SkipInfo('TestCommon', 'test_variant_consistency_jit'),)),
Contributor

same comment as below

Collaborator Author

This is the failure for this one:

======================================================================
ERROR: test_variant_consistency_jit_index_put_cuda_float32 (__main__.TestCommonCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_utils.py", line 955, in wrapper
    method(*args, **kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_utils.py", line 955, in wrapper
    method(*args, **kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_device_type.py", line 297, in instantiated_test
    raise rte
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_device_type.py", line 292, in instantiated_test
    result = test_fn(self, *args)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_device_type.py", line 266, in test_wrapper
    return test(*args, **kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/test/test_ops.py", line 294, in test_variant_consistency_jit
    check_against_reference(self,
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_jit.py", line 63, in check_against_reference
    outputs_test = self.runAndSaveRNG(func, nograd_inputs, kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/common_jit.py", line 130, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/jit_metaprogramming_utils.py", line 301, in script_fn
    fn, tensors = gen_script_fn_and_args(method_name, func_type, *args, **kwargs)
  File "/work2/kurtamohler/development/pytorch-index_put-complex-autograd/torch/testing/_internal/jit_metaprogramming_utils.py", line 292, in gen_script_fn_and_args
    CU = torch.jit.CompilationUnit(script)
RuntimeError: 
undefined value tensor:
  File "<string>", line 3

def the_method(i0, i1):
    return torch.index_put(i0, (tensor([0, 1], device='cuda:0'),), i1, accumulate=False)
                                ~~~~~~ <--- HERE

Contributor

No idea how to fix this off the top of my head. Let's follow up on this in a separate issue/subsequent PR?
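For reference, one hedged fix direction (purely illustrative — `qualify_bare_tensor` is a hypothetical helper, not part of the codebase): post-process the generated source so bare `tensor(...)` calls are qualified before compilation. Sketched on a pure-Python analogue of the generated script (`torch_tensor` is a dummy stand-in so the sketch runs without PyTorch):

```python
import re

def qualify_bare_tensor(src: str) -> str:
    """Hypothetical helper: qualify bare tensor(...) calls in generated
    source so they resolve against an explicit name in the namespace."""
    # Match `tensor(` not preceded by a word char or dot (skips e.g. `x.tensor(`)
    return re.sub(r"(?<![\w.])tensor\(", "torch_tensor(", src)

src = "def the_method(i0):\n    return (i0, tensor([4, 4]))\n"
fixed = qualify_bare_tensor(src)

ns = {"torch_tensor": lambda data: data}  # dummy stand-in for torch.tensor
exec(fixed, ns)
assert ns["the_method"](0) == (0, [4, 4])
```

In the real harness the qualified name would presumably be `torch.tensor`, and a cleaner fix might pass the index tensors as script inputs instead of inlining their reprs at all; this sketch only shows why qualification makes the name resolvable.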

Contributor

@anjali411 anjali411 Mar 29, 2021

@kurtamohler did you get a chance to look into this?

Contributor

@anjali411 anjali411 left a comment

Thanks @kurtamohler! This PR LGTM overall. I just have one question regarding the skip, but besides that it's ready to be merged.

@facebook-github-bot
Contributor

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@anjali411
Contributor

@kurtamohler could you please rebase?

@codecov

codecov bot commented Mar 26, 2021

Codecov Report

Merging #54562 (0f27544) into master (4e5af53) will decrease coverage by 0.00%.
The diff coverage is 100.00%.

❗ Current head 0f27544 differs from pull request most recent head a8996ec. Consider uploading reports for the commit a8996ec to get more accurate results

@@            Coverage Diff             @@
##           master   #54562      +/-   ##
==========================================
- Coverage   77.45%   77.45%   -0.01%     
==========================================
  Files        1894     1893       -1     
  Lines      186401   186105     -296     
==========================================
- Hits       144377   144143     -234     
+ Misses      42024    41962      -62     

@anjali411
Contributor

@kurtamohler looks like another PR went in before this one. needs another rebase!

@kurtamohler
Collaborator Author

@anjali411, rebase done

@facebook-github-bot
Contributor

@anjali411 merged this pull request in 49b07ac.

Development

Successfully merging this pull request may close these issues.

Add complex autograd support for torch.Tensor.index

4 participants