
[1.5 Release] Disabled complex tensor construction #35579

Merged · 7 commits · Apr 1, 2020

Conversation

@anjali411 (Contributor) commented Mar 27, 2020

  1. Disabled complex tensor construction.
  2. Removed complex tests.
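The PR itself lives in PyTorch's C++ tensor-construction code, but the idea it implements — refusing complex dtypes at construction time so they cannot reach the 1.5 release — can be sketched in pure Python. All names below (make_tensor, COMPLEX_DTYPES) are hypothetical illustrations, not the actual PyTorch implementation:

```python
# Minimal sketch of gating tensor construction on dtype. Hypothetical helper
# names; the real change is in PyTorch's C++ factory functions.
COMPLEX_DTYPES = {"complex64", "complex128"}

def make_tensor(shape, dtype="float32"):
    # Reject complex dtypes up front, mirroring how 1.5 disabled
    # complex tensor construction rather than shipping it half-supported.
    if dtype in COMPLEX_DTYPES:
        raise RuntimeError(f"Complex dtype {dtype} is disabled in this build")
    return {"shape": tuple(shape), "dtype": dtype}

t = make_tensor((2, 3))  # float32 construction still works
try:
    make_tensor((2,), dtype="complex64")
except RuntimeError as err:
    print(err)  # construction is refused with an explicit error
```

Guarding at the construction entry point means every code path that would have produced a complex tensor fails loudly, which is why the accompanying complex tests also had to be removed.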

@dr-ci (bot) commented Mar 27, 2020

💊 CircleCI build failures summary and remediations

As of commit 3312781 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following build failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Test" <confirmed not flaky by 2 failures>

Mar 31 22:53:57 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Mar 31 22:53:57 processing existing schema:  aten::sparse_coo_tensor.size(int[] size, *, int dtype, int layout, Device device, bool pin_memory=False) -> (Tensor) 
Mar 31 22:53:57 processing existing schema:  aten::sparse_coo_tensor.indices(Tensor indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 
Mar 31 22:53:57 processing existing schema:  aten::sparse_coo_tensor.indices_size(Tensor indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 
Mar 31 22:53:57 processing existing schema:  aten::split_with_sizes(Tensor self, int[] split_sizes, int dim=0) -> (Tensor[]) 
Mar 31 22:53:57 processing existing schema:  aten::squeeze(Tensor(a) self) -> (Tensor(a)) 
Mar 31 22:53:57 processing existing schema:  aten::squeeze.dim(Tensor(a) self, int dim) -> (Tensor(a)) 
Mar 31 22:53:57 processing existing schema:  aten::stft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool normalized=False, bool onesided=True) -> (Tensor) 
Mar 31 22:53:57 skipping schema:  aten::sub_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 
Mar 31 22:53:57 skipping schema:  aten::sub_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 
Mar 31 22:53:57 processing existing schema:  aten::t(Tensor(a) self) -> (Tensor(a)) 
Mar 31 22:53:57 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Mar 31 22:53:57  
Mar 31 22:53:57 Broken ops: [ 
Mar 31 22:53:57 	aten::local_value(RRef(t) self) -> (t) 
Mar 31 22:53:57 	_aten::full(int[] size, Scalar fill_value, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 
Mar 31 22:53:57 	_aten::dequantize(Tensor self) -> (Tensor) 
Mar 31 22:53:57 	_aten::quantize_per_tensor(Tensor self, float scale, int zero_point, int dtype) -> (Tensor) 
Mar 31 22:53:57 	_aten::div.Tensor(Tensor self, Tensor other) -> (Tensor) 
Mar 31 22:53:57 	_aten::detach(Tensor self) -> (Tensor) 
Mar 31 22:53:57 	prim::id(AnyClassType? x) -> (int) 
Mar 31 22:53:57 	aten::owner(RRef(t) self) -> (__torch__.torch.classes.dist_rpc.WorkerInfo) 

This comment was automatically generated by Dr. CI and has been revised 60 times.

Review threads on torch/_tensor_docs.py and torch/_torch_docs.py (outdated, resolved)
@anjali411 anjali411 force-pushed the release/1.5 branch 2 times, most recently from a9a85c9 to 18b7503, on March 30, 2020 at 18:09
@ezyang ezyang removed their request for review March 31, 2020 21:27
@gchanan gchanan merged commit df5986f into pytorch:release/1.5 Apr 1, 2020