Add dict() constructor #51934

Closed · wants to merge 4 commits

Conversation

@ansley commented on Feb 9, 2021

Stack from ghstack:

Differential Revision: D26418199

[ghstack-poisoned]
ansley pushed a commit that referenced this pull request Feb 9, 2021
ghstack-source-id: 4d210ba5c7e8a014406d8807af4bf8db621517bc
Pull Request resolved: #51934
@ansley mentioned this pull request on Feb 9, 2021
@ansley requested a review from gmagogsfm on February 9, 2021 03:23
@facebook-github-bot added the cla signed and oncall: jit labels on Feb 9, 2021
@facebook-github-bot (Contributor) commented on Feb 9, 2021

💊 CI failures summary and remediations

As of commit 2d5d0f0 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (1/1)

Step: "Run tests"

```
Feb 12 06:27:16 AssertionError: False is not true
Feb 12 06:27:16   test_where_scalar_valid_combination_xla_uint8 (__main__.TestTorchDeviceTypeXLA) ... ok (0.035s)
Feb 12 06:27:16 
Feb 12 06:27:16 ======================================================================
Feb 12 06:27:16 FAIL [0.003s]: test_pickle_gradscaler_xla (__main__.TestTorchDeviceTypeXLA)
Feb 12 06:27:16 ----------------------------------------------------------------------
Feb 12 06:27:16 Traceback (most recent call last):
Feb 12 06:27:16   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 290, in instantiated_test
Feb 12 06:27:16     result = test_fn(self, *args)
Feb 12 06:27:16   File "/var/lib/jenkins/workspace/xla/test/../../test/test_torch.py", line 6203, in test_pickle_gradscaler
Feb 12 06:27:16     self.assertTrue(a.is_enabled() if torch.cuda.is_available() else not a.is_enabled())
Feb 12 06:27:16 AssertionError: False is not true
Feb 12 06:27:16 
Feb 12 06:27:16 ----------------------------------------------------------------------
Feb 12 06:27:16 Ran 269 tests in 154.263s
Feb 12 06:27:16 
Feb 12 06:27:16 FAILED (failures=1, skipped=147)
Feb 12 06:27:16 
Feb 12 06:27:16 Generating XML reports...
Feb 12 06:27:16 Generated XML report: test-reports/python-unittest/TEST-TestTorchDeviceTypeXLA-20210212062441.xml
Feb 12 06:27:16 + cleanup
Feb 12 06:27:16 + retcode=1
```

XLA failure

Job pytorch_xla_linux_bionic_py3_6_clang9_test is failing. Please create an issue with a title prefixed by [PT_BREAK] in pytorch/xla and link to this PR. If you have questions, please reach out to @ailzhang / @dlibenzi / @JackCaoG.


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@eellison (Contributor) left a comment


Looks great 🚀 🚀 🚀

```cpp
}
default:
  TORCH_INTERNAL_ASSERT(false, "unknown special form: ", form);
if (!apply.inputs().empty() && apply.inputs()[0].kind() == TK_DICT_LITERAL) {
```

IMO, we should get rid of the special empty handling; we're not handling dicts/mappings in general, and dict([]) will work without the special case.
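
The comment rests on plain-Python dict() semantics: the general iterable-of-pairs path already produces an empty dict from an empty iterable. A quick runnable illustration of that point (ordinary Python, not code from this PR):

```python
# Plain-Python behavior the reviewer relies on: the general
# iterable path already covers the empty case, so no dedicated
# empty-literal handling is needed in the compiler.
assert dict() == {}
assert dict([]) == {}  # empty iterable -> empty dict
assert dict([("a", 1), ("b", 2)]) == {"a": 1, "b": 2}
assert dict(a=1, b=2) == {"a": 1, "b": 2}
```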

Comment on lines 3171 to 3178
```cpp
auto name = StringLiteral::create(kwarg.range(), kwarg.name().name());
auto k = emitExpr(name);
auto v = emitExpr(kwarg.value());
NamedValue input_k = NamedValue(kwarg.range(), "", k);
NamedValue input_v = NamedValue(kwarg.range(), "", v);
emitBuiltinCall(
    kwarg.range(), *graph, aten::_set_item, {self, input_k, input_v}, {});
}
```

This logic is very similar to lines 3076-3083; you might be able to factor it out into a helper lambda or function.
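
For context, the quoted emission turns each keyword argument into a string-literal key plus one aten::_set_item call, i.e. one d[k] = v store per argument. A hedged plain-Python sketch of the equivalent semantics (illustrative only; fill_from_kwargs is a hypothetical name, not from the PR):

```python
def fill_from_kwargs(d, **kwargs):
    # One store per keyword argument, mirroring the emitted sequence:
    # the argument name becomes the string key, the argument value
    # becomes the stored value.
    for k, v in kwargs.items():
        d[k] = v  # corresponds to one emitted aten::_set_item call
    return d

assert fill_from_kwargs({}, foo=1, bar=2) == {"foo": 1, "bar": 2}
```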

ansley pushed a commit that referenced this pull request Feb 12, 2021
ghstack-source-id: 0b55245f9cb6e71e13f3bd9b4614eff7a4b89b09
Pull Request resolved: #51934
@ansley merged this pull request in 96fd5d8.

@facebook-github-bot deleted the gh/ansley/9/head branch on February 17, 2021 15:16
xsacha pushed a commit to xsacha/pytorch that referenced this pull request Mar 31, 2021
Summary: Pull Request resolved: pytorch#51934

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26418199

Pulled By: ansley

fbshipit-source-id: 524f6d9d29ee1fa1b7c5e80ada82e577f47089dc
Labels: cla signed, Merged, oncall: jit

3 participants