Conversation

jeffdaily (Collaborator)

This change adds the arch settings for caffe2 builds, fixes some typos,
and clarifies that this setting applies to both CircleCI and Jenkins.

@jeffdaily added the "module: rocm" label (AMD GPU support for PyTorch) on Nov 10, 2020
@jeffdaily requested a review from malfet on Nov 10, 2020 at 17:52
@jeffdaily (Collaborator, Author)

This corresponds to pytorch/ossci-job-dsl#83 and will reduce build times for ROCm CI (Jenkins) jobs.
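
For context, here is a minimal sketch of the kind of arch restriction involved, assuming the PYTORCH_ROCM_ARCH variable that PyTorch's build system reads; the specific gfx targets below are illustrative, not the exact values from this PR:

    # Compile device code only for the GPU architectures actually present
    # on the CI machines, instead of every supported AMD gfx target.
    # Fewer targets means proportionally less device code to compile.
    export PYTORCH_ROCM_ARCH="gfx900;gfx906"
    python setup.py install

Restricting the arch list trades binary portability for build speed, which is a reasonable trade-off on CI where the test hardware is known in advance.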

@facebook-github-bot (Contributor) left a comment

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


dr-ci bot commented Nov 10, 2020

💊 CI failures summary and remediations

As of commit c44eed6 (more details on the Dr. CI page):


  • 3/3 failures possibly* introduced in this PR
    • 1/3 non-CircleCI failure(s)

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Nov 10 18:32:48 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Nov 10 18:32:48 processing existing schema:  __getstate__(__torch__.torch.classes.xnnpack.LinearOpContext _0) -> ((Tensor, Tensor?, Scalar?, Scalar?) _0) 
Nov 10 18:32:48 processing existing schema:  __setstate__(__torch__.torch.classes.xnnpack.LinearOpContext _0, (Tensor, Tensor?, Scalar?, Scalar?) _1) -> (None _0) 
Nov 10 18:32:48 processing existing schema:  __getstate__(__torch__.torch.classes.xnnpack.Conv2dOpContext _0) -> ((Tensor, Tensor?, int[], int[], int[], int, Scalar?, Scalar?) _0) 
Nov 10 18:32:48 processing existing schema:  __setstate__(__torch__.torch.classes.xnnpack.Conv2dOpContext _0, (Tensor, Tensor?, int[], int[], int[], int, Scalar?, Scalar?) _1) -> (None _0) 
Nov 10 18:32:48 processing existing schema:  __getstate__(__torch__.torch.classes.xnnpack.TransposeConv2dOpContext _0) -> ((Tensor, Tensor?, int[], int[], int[], int[], int, Scalar?, Scalar?) _0) 
Nov 10 18:32:48 processing existing schema:  __setstate__(__torch__.torch.classes.xnnpack.TransposeConv2dOpContext _0, (Tensor, Tensor?, int[], int[], int[], int[], int, Scalar?, Scalar?) _1) -> (None _0) 
Nov 10 18:32:48 processing existing schema:  __init__(__torch__.torch.classes._nnapi.Compilation _0) -> (None _0) 
Nov 10 18:32:48 processing existing schema:  init(__torch__.torch.classes._nnapi.Compilation _0, Tensor _1, Tensor[] _2) -> (None _0) 
Nov 10 18:32:48 processing existing schema:  run(__torch__.torch.classes._nnapi.Compilation _0, Tensor[] _1, Tensor[] _2) -> (None _0) 
Nov 10 18:32:48 processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (None _0) 
Nov 10 18:32:48 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Nov 10 18:32:48  
Nov 10 18:32:48 Broken ops: [ 
Nov 10 18:32:48 	aten::_foreach_log(Tensor[] tensors) -> (Tensor[]) 
Nov 10 18:32:48 	aten::_foreach_round(Tensor[] tensors) -> (Tensor[]) 
Nov 10 18:32:48 	aten::_foreach_sinh(Tensor[] tensors) -> (Tensor[]) 
Nov 10 18:32:48 	aten::_foreach_lgamma_(Tensor[] self) -> () 
Nov 10 18:32:48 	aten::_foreach_lgamma(Tensor[] tensors) -> (Tensor[]) 
Nov 10 18:32:48 	aten::_foreach_log10(Tensor[] tensors) -> (Tensor[]) 
Nov 10 18:32:48 	aten::_foreach_round_(Tensor[] self) -> () 
Nov 10 18:32:48 	aten::_foreach_sin(Tensor[] tensors) -> (Tensor[]) 

1 job timed out:

  • pytorch_linux_xenial_py3_clang5_asan_test2

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

@jeffdaily (Collaborator, Author)

The CI failures are not due to this PR.

@facebook-github-bot (Contributor)

@malfet merged this pull request in 7691cf1.

