[v1.5.0] Fix another case of "float2::x and float2::y may not be the same on ROCm" #35786

Closed
wants to merge 1 commit

Conversation

@gchanan (Contributor) commented Apr 1, 2020

This is another case of the issue fixed in #35783,
but it triggers reliably on the 1.5.0 branch.

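
For background, the failure mode being worked around is that on ROCm the two components of a `float2` accumulator can come back inconsistent when the pair is moved between lanes as a single struct. Below is a minimal sketch of the usual component-wise workaround, assuming a warp-level sum reduction over a `float2`; the helper name is illustrative and not taken from this patch:

```cpp
#include <hip/hip_runtime.h>  // HIP device code compiles as C++

// Illustrative sketch only, not the actual patch: shuffle each scalar
// component of the float2 separately, so .x and .y cannot be reordered
// relative to each other by a struct-level shuffle.
__device__ inline float2 warp_reduce_sum(float2 v) {
  for (int delta = warpSize / 2; delta > 0; delta /= 2) {
    v.x += __shfl_down(v.x, delta);  // HIP warp shuffle, one scalar at a time
    v.y += __shfl_down(v.y, delta);
  }
  return v;
}
```

The loop keys off the `warpSize` built-in, which is 64 on ROCm, so the same sketch covers both warp widths; the essential point is only that the two floats travel as two independent scalar shuffles.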
dr-ci bot commented Apr 1, 2020

💊 CircleCI build failures summary and remediations

As of commit 95e4c6a (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following build failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Test" (full log | pattern match details) <confirmed not flaky by 2 failures>

Apr 01 00:36:04 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Apr 01 00:36:04 processing existing schema:  aten::sparse_coo_tensor.size(int[] size, *, int dtype, int layout, Device device, bool pin_memory=False) -> (Tensor) 
Apr 01 00:36:04 processing existing schema:  aten::sparse_coo_tensor.indices(Tensor indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 
Apr 01 00:36:04 processing existing schema:  aten::sparse_coo_tensor.indices_size(Tensor indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 
Apr 01 00:36:04 processing existing schema:  aten::split_with_sizes(Tensor self, int[] split_sizes, int dim=0) -> (Tensor[]) 
Apr 01 00:36:04 processing existing schema:  aten::squeeze(Tensor(a) self) -> (Tensor(a)) 
Apr 01 00:36:04 processing existing schema:  aten::squeeze.dim(Tensor(a) self, int dim) -> (Tensor(a)) 
Apr 01 00:36:04 processing existing schema:  aten::stft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool normalized=False, bool onesided=True) -> (Tensor) 
Apr 01 00:36:04 skipping schema:  aten::sub_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 
Apr 01 00:36:04 skipping schema:  aten::sub_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 
Apr 01 00:36:04 processing existing schema:  aten::t(Tensor(a) self) -> (Tensor(a)) 
Apr 01 00:36:04 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Apr 01 00:36:04  
Apr 01 00:36:04 Broken ops: [ 
Apr 01 00:36:04 	aten::local_value(RRef(t) self) -> (t) 
Apr 01 00:36:04 	_aten::full(int[] size, Scalar fill_value, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 
Apr 01 00:36:04 	_aten::dequantize(Tensor self) -> (Tensor) 
Apr 01 00:36:04 	_aten::quantize_per_tensor(Tensor self, float scale, int zero_point, int dtype) -> (Tensor) 
Apr 01 00:36:04 	_aten::div.Tensor(Tensor self, Tensor other) -> (Tensor) 
Apr 01 00:36:04 	_aten::detach(Tensor self) -> (Tensor) 
Apr 01 00:36:04 	prim::id(AnyClassType? x) -> (int) 
Apr 01 00:36:04 	aten::owner(RRef(t) self) -> (__torch__.torch.classes.dist_rpc.WorkerInfo) 

This comment was automatically generated by Dr. CI.

This comment has been revised 2 times.
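
For reference, the failing backward-compatibility step above diffs the operator schemas exposed by the new build against a recorded baseline and flags any schema that disappeared or changed as a broken op. A minimal sketch of that kind of gate, assuming one schema string per line in each file; the file names and the comparison are stand-ins, not PyTorch's actual checker:

```cpp
#include <fstream>
#include <iostream>
#include <set>
#include <string>

// Sketch of a schema backward-compatibility gate: any schema present in the
// baseline but missing from the current build is reported as broken, since a
// removed or changed signature can break callers of the operator library.
static std::set<std::string> load_schemas(const std::string& path) {
  std::set<std::string> out;
  std::ifstream in(path);
  for (std::string line; std::getline(in, line);) {
    if (!line.empty()) out.insert(line);
  }
  return out;
}

int main() {
  // Hypothetical file names for the recorded baseline and the schema dump
  // of the build under test.
  const auto baseline = load_schemas("baseline_schemas.txt");
  const auto current = load_schemas("current_schemas.txt");
  int broken = 0;
  for (const auto& schema : baseline) {
    if (current.count(schema) == 0) {
      std::cout << "Broken op: " << schema << '\n';
      ++broken;
    }
  }
  return broken == 0 ? 0 : 1;  // a nonzero exit fails the CI step
}
```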

@gchanan closed this Apr 2, 2020
2 participants