
Fix structured kernel codegen #49244

Closed
wants to merge 1 commit

Conversation

smessmer (Contributor) commented Dec 11, 2020

Stack from ghstack:

see https://fb.quip.com/ceEdANd5iVsO

RegisterMkldnnCPU kernels incorrectly used makeUnboxedOnly() calls to register add_.Tensor kernels, because the codegen incorrectly thought they were not c10-full. This PR fixes that.

Differential Revision: [D25500246](https://our.internmc.facebook.com/intern/diff/D25500246/)

[ghstack-poisoned]
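
For illustration, here is a minimal Python sketch of the kind of decision the codegen makes when emitting these registrations. The names (`NativeFunction`, `emit_registration`, the wrapper kernel) are hypothetical, not the actual tools/codegen/gen.py API; the makeUnboxedOnly() form is the one named in the description above, and the TORCH_FN() form sketches the normal registration path for c10-full kernels.

```python
# Hypothetical, simplified sketch of the codegen's registration decision.
# Names are illustrative, not the real tools/codegen/gen.py API.
from dataclasses import dataclass

@dataclass
class NativeFunction:
    name: str       # e.g. "add_.Tensor"
    kernel: str     # e.g. a generated wrapper like "wrapper_Mkldnn__add_"
    c10_full: bool  # whether the op is treated as c10-full

def emit_registration(f: NativeFunction) -> str:
    if f.c10_full:
        # c10-full kernels go through the normal templated registration.
        return f'm.impl("{f.name}", TORCH_FN({f.kernel}));'
    else:
        # Non-c10-full kernels fall back to an unboxed-only registration.
        return f'm.impl("{f.name}", torch::CppFunction::makeUnboxedOnly(&{f.kernel}));'

# The bug described above: structured ops like add_.Tensor were wrongly
# classified as not c10-full, so RegisterMkldnnCPU took the second branch.
print(emit_registration(NativeFunction("add_.Tensor", "wrapper_Mkldnn__add_", c10_full=False)))
```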
facebook-github-bot (Contributor) commented Dec 11, 2020

💊 CI failures summary and remediations

As of commit 7790021 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_bionic_py3_6_clang9_test (1/2)

Step: "Run tests"

Dec 11 20:25:33 AssertionError: mypy failed: tools/codegen/gen.py:388:9: error: Missing return statement
Dec 11 20:25:12   test_run_mypy (__main__.TestTypeHints) ... ok (57.181s)
Dec 11 20:25:15   test_run_mypy_strict (__main__.TestTypeHints) ... FAIL (2.631s)
Dec 11 20:25:33   test_type_hint_examples (__main__.TestTypeHints) ... ok (18.650s)
Dec 11 20:25:33 
Dec 11 20:25:33 ======================================================================
Dec 11 20:25:33 FAIL [2.631s]: test_run_mypy_strict (__main__.TestTypeHints)
Dec 11 20:25:33 ----------------------------------------------------------------------
Dec 11 20:25:33 Traceback (most recent call last):
Dec 11 20:25:33   File "test_type_hints.py", line 239, in test_run_mypy_strict
Dec 11 20:25:33     self.fail(f"mypy failed: {stdout} {stderr}")
Dec 11 20:25:33 AssertionError: mypy failed: tools/codegen/gen.py:388:9: error: Missing return statement
Dec 11 20:25:33 Found 1 error in 1 file (checked 11 source files)
Dec 11 20:25:33  
Dec 11 20:25:33 
Dec 11 20:25:33 ----------------------------------------------------------------------
Dec 11 20:25:33 Ran 4 tests in 88.745s
Dec 11 20:25:33 
Dec 11 20:25:33 FAILED (failures=1)
Dec 11 20:25:33 
Dec 11 20:25:33 Generating XML reports...
Dec 11 20:25:33 Generated XML report: test-reports/dist-gloo/TEST-TestTypeHints-20201211202405.xml
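
The mypy failure above comes from the strict type check over the codegen: under mypy --strict, a function with a declared return type must return (or raise) on every path. The snippet below is a generic sketch of that pattern, not the actual code at tools/codegen/gen.py:388.

```python
# Illustrative only: how mypy --strict reports "Missing return statement"
# and the usual way to satisfy it. Not the real gen.py code.
from enum import Enum

class Target(Enum):
    DEFINITION = 1
    REGISTRATION = 2

def render(target: Target) -> str:
    if target is Target.DEFINITION:
        return "// definition"
    elif target is Target.REGISTRATION:
        return "// registration"
    # Without this final raise (or a return), control can fall off the end,
    # and mypy --strict reports: error: Missing return statement
    raise AssertionError(f"unhandled target: {target}")
```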

See CircleCI build pytorch_linux_backward_compatibility_check_test (2/2)

Step: "Run tests"

Dec 11 19:19:49 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Dec 11 19:19:49 processing existing schema:  gather(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  scatter(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor[] _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  reduce_scatter(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  alltoall_base(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor _2, int[] _3, int[] _4) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  alltoall(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  send(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  recv(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  recv_anysource(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  barrier(__torch__.torch.classes.dist_c10d.ProcessGroup _0) -> (__torch__.torch.classes.dist_c10d.Work _0)
Dec 11 19:19:49 processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (None _0)
Dec 11 19:19:49 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
Dec 11 19:19:49 
Dec 11 19:19:49 Broken ops: [
Dec 11 19:19:49 	aten::nanquantile.scalar_out(Tensor self, float q, int? dim=None, str interpolation="linear", bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!))
Dec 11 19:19:49 	aten::nanquantile.scalar(Tensor self, float q, int? dim=None, str interpolation="linear", bool keepdim=False) -> (Tensor)
Dec 11 19:19:49 	aten::nanquantile.out(Tensor self, Tensor q, int? dim=None, str interpolation="linear", bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!))
Dec 11 19:19:49 	aten::nanquantile(Tensor self, Tensor q, int? dim=None, str interpolation="linear", bool keepdim=False) -> (Tensor)
Dec 11 19:19:49 	aten::quantile.scalar_out(Tensor self, float q, int? dim=None, str interpolation="linear", bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!))
Dec 11 19:19:49 	aten::quantile.scalar(Tensor self, float q, int? dim=None, str interpolation="linear", bool keepdim=False) -> (Tensor)
Dec 11 19:19:49 	aten::quantile.out(Tensor self, Tensor q, int? dim=None, str interpolation="linear", bool keepdim=False, *, Tensor(a!) out) -> (Tensor(a!))
Dec 11 19:19:49 	aten::quantile(Tensor self, Tensor q, int? dim=None, str interpolation="linear", bool keepdim=False) -> (Tensor)
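
For context on this second failure: the backward-compatibility job walks previously released operator schemas and flags any that the new build no longer provides in a compatible form. The sketch below is a rough, hypothetical illustration of that idea using plain string comparison; the real check parses schemas properly and tolerates compatible changes such as newly added defaulted arguments.

```python
# Hypothetical sketch of a schema backward-compatibility check.
# Op names and schemas below are made up for illustration.
from typing import Dict, List

def find_broken_ops(old_schemas: List[str], new_schemas: List[str]) -> List[str]:
    # Index the new schemas by operator name (the part before the first '(').
    new_by_name: Dict[str, str] = {s.split("(", 1)[0]: s for s in new_schemas}
    broken = []
    for old in old_schemas:
        name = old.split("(", 1)[0]
        new = new_by_name.get(name)
        # Naive criterion: the op disappeared or its schema string changed.
        # (The real check is schema-aware and allows compatible changes.)
        if new is None or new != old:
            broken.append(old)
    return broken

old = ["aten::my_op(Tensor self, int? dim=None) -> Tensor"]
new = ["aten::my_op(Tensor self, int? dim=None, bool keepdim=False) -> Tensor"]
print(find_broken_ops(old, new))  # reports the changed my_op schema
```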

This comment was automatically generated by Dr. CI.

This comment has been revised 5 times.

smessmer added a commit that referenced this pull request Dec 11, 2020

ghstack-source-id: 118411117
Pull Request resolved: #49244
bhosmer (Contributor) commented Dec 11, 2020

@ezyang calling back to the thread about explicit plumbing vs. dynamic scoping, like I imagine you figured I would
