
Ports real and imag fixes to 1.5 release branch #35657

Merged
merged 1 commit into release/1.5 on Mar 31, 2020

Conversation

mruberry
Collaborator

Ports #35560 to the 1.5 release branch.
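The PR title and the branch name (`1.5_port_real_imag_numpy_compat`) indicate the change aligns `torch.real` and `torch.imag` with NumPy's semantics. As a hedged sketch of the NumPy behavior being matched (the exact torch-side changes are in #35560, not shown here), note that NumPy defines `real`/`imag` for both complex and real inputs:

```python
import numpy as np

# Complex input: real() and imag() extract the components.
z = np.array([1 + 2j, 3 - 4j])
print(np.real(z))  # [1. 3.]
print(np.imag(z))  # [ 2. -4.]

# Real input: real() returns the values unchanged, imag() returns zeros.
# This is the NumPy-compatible behavior the port presumably targets.
x = np.array([1.0, 2.0])
print(np.real(x))  # [1. 2.]
print(np.imag(x))  # [0. 0.]
```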

@mruberry mruberry requested a review from gchanan March 30, 2020 05:15
@dr-ci

dr-ci bot commented Mar 30, 2020

💊 CircleCI build failures summary and remediations

As of commit 093edcc (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following build failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Test" (full log | pattern match details)

Mar 30 07:04:53 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Mar 30 07:04:53 processing existing schema:  aten::sparse_coo_tensor.size(int[] size, *, int dtype, int layout, Device device, bool pin_memory=False) -> (Tensor) 
Mar 30 07:04:53 processing existing schema:  aten::sparse_coo_tensor.indices(Tensor indices, Tensor values, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 
Mar 30 07:04:53 processing existing schema:  aten::sparse_coo_tensor.indices_size(Tensor indices, Tensor values, int[] size, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None) -> (Tensor) 
Mar 30 07:04:53 processing existing schema:  aten::split_with_sizes(Tensor self, int[] split_sizes, int dim=0) -> (Tensor[]) 
Mar 30 07:04:53 processing existing schema:  aten::squeeze(Tensor(a) self) -> (Tensor(a)) 
Mar 30 07:04:53 processing existing schema:  aten::squeeze.dim(Tensor(a) self, int dim) -> (Tensor(a)) 
Mar 30 07:04:53 processing existing schema:  aten::stft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool normalized=False, bool onesided=True) -> (Tensor) 
Mar 30 07:04:53 skipping schema:  aten::sub_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!)) 
Mar 30 07:04:53 skipping schema:  aten::sub_.Scalar(Tensor(a!) self, Scalar other, Scalar alpha=1) -> (Tensor(a!)) 
Mar 30 07:04:53 processing existing schema:  aten::t(Tensor(a) self) -> (Tensor(a)) 
Mar 30 07:04:53 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Mar 30 07:04:53  
Mar 30 07:04:53 Broken ops: [ 
Mar 30 07:04:53 	aten::owner(RRef(t) self) -> (__torch__.torch.classes.dist_rpc.WorkerInfo) 
Mar 30 07:04:53 	aten::local_value(RRef(t) self) -> (t) 
Mar 30 07:04:53 	prim::id(AnyClassType? x) -> (int) 
Mar 30 07:04:53 	__caffe2::RoIAlignRotated(Tensor features, Tensor rois, str order, float spatial_scale, int pooled_h, int pooled_w, int sampling_ratio, bool aligned, Tensor[]? _caffe2_preallocated_outputs=None) -> (Tensor) 
Mar 30 07:04:53 	__caffe2::RoIAlign(Tensor features, Tensor rois, str order, float spatial_scale, int pooled_h, int pooled_w, int sampling_ratio, bool aligned, Tensor[]? _caffe2_preallocated_outputs=None) -> (Tensor) 
Mar 30 07:04:53 	__caffe2::HeatmapMaxKeypoint(Tensor heatmaps, Tensor bboxes_in, bool should_output_softmax=True, Tensor[]? _caffe2_preallocated_outputs=None) -> (Tensor keypoints) 
Mar 30 07:04:53 	__caffe2::GenerateProposals(Tensor scores, Tensor bbox_deltas, Tensor im_info, Tensor anchors, float spatial_scale, int pre_nms_topN, int post_nms_topN, float nms_thresh, float min_size, bool angle_bound_on, int angle_bound_lo, int angle_bound_hi, float clip_angle_thresh, bool legacy_plus_one, Tensor[]? _caffe2_preallocated_outputs=None) -> (Tensor output_0, Tensor output_1) 
Mar 30 07:04:53 	__caffe2::BoxWithNMSLimit(Tensor scores, Tensor boxes, Tensor batch_splits, float score_thresh, float nms, int detections_per_im, bool soft_nms_enabled, str soft_nms_method, float soft_nms_sigma, float soft_nms_min_score_thres, bool rotated, bool cls_agnostic_bbox_reg, bool input_boxes_include_bg_cls, bool output_classes_include_bg_cls, bool legacy_plus_one, Tensor[]? _caffe2_preallocated_outputs=None) -> (Tensor scores, Tensor boxes, Tensor classes, Tensor batch_splits, Tensor keeps, Tensor keeps_size) 
Mar 30 07:04:53 ]
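The failing `pytorch_linux_backward_compatibility_check_test` job compares the operator schemas registered before and after the PR and reports any previously registered schema that disappears. A toy sketch of that kind of check (not PyTorch's actual implementation; the function name and allow-list mechanism are illustrative assumptions):

```python
def find_broken_ops(old_schemas, new_schemas, allow_list=()):
    """Return every schema present in the old operator library but
    missing from the new one, unless its operator name is allow-listed.

    Schemas are compared as whole strings, so a signature change
    (e.g. a renamed or retyped argument) also counts as "broken"."""
    new_set = set(new_schemas)
    broken = []
    for schema in old_schemas:
        op_name = schema.split("(", 1)[0]  # e.g. "aten::owner"
        if schema not in new_set and op_name not in allow_list:
            broken.append(schema)
    return broken

old = [
    "aten::t(Tensor(a) self) -> (Tensor(a))",
    "aten::owner(RRef(t) self) -> (WorkerInfo)",
]
new = [
    "aten::t(Tensor(a) self) -> (Tensor(a))",
]
print(find_broken_ops(old, new))  # the removed aten::owner schema is flagged
```

In the real job, ops like `aten::owner` above would appear in the "Broken ops" list unless added to an allow list confirming the break is intentional.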

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions on the GitHub issue tracker.


This comment has been revised 2 times.

@gchanan gchanan merged commit 84f806c into release/1.5 Mar 31, 2020
@mruberry mruberry deleted the 1.5_port_real_imag_numpy_compat branch May 16, 2020 04:56
2 participants