
Add support for legacy tensor constructors in JIT #74785

Closed
wants to merge 5 commits into from

Conversation


@eellison eellison commented Mar 25, 2022

Stack from ghstack:

Fix for https://github.com/facebookresearch/torchdynamo/issues/93

Because the constructors follow a non-standard input schema (variadic integers), they are handled specially in ir_emitter.

Differential Revision: D35362762
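For context, the legacy constructors (e.g. torch.LongTensor) are overloaded in a way no standard schema captures: variadic integers are interpreted as a size, while a single sequence argument is interpreted as data. A minimal pure-Python sketch of that dispatch (illustrative only, not the actual ir_emitter logic; the function name is hypothetical):

```python
def parse_legacy_ctor_args(*args):
    """Mimic the argument dispatch of legacy constructors like torch.LongTensor.

    Illustrative sketch only -- not the actual ir_emitter implementation.
    """
    # A single list/tuple argument is treated as tensor data,
    # e.g. torch.LongTensor([3, 4]) -> 1-D tensor with values [3, 4].
    if len(args) == 1 and isinstance(args[0], (list, tuple)):
        return ("data", list(args[0]))
    # Variadic integers are treated as a size,
    # e.g. torch.LongTensor(3, 4) -> uninitialized 3x4 tensor.
    if all(isinstance(a, int) for a in args):
        return ("size", list(args))
    raise TypeError("unsupported legacy constructor arguments")

print(parse_legacy_ctor_args(3, 4))    # ('size', [3, 4])
print(parse_legacy_ctor_args([3, 4]))  # ('data', [3, 4])
```

Because the two call patterns cannot be expressed as a single fixed schema, the JIT's schema matcher cannot handle them generically, which is why the PR special-cases them during IR emission.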


facebook-github-bot commented Mar 25, 2022

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 7569cc3 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-xenial-py3.7-clang7-asan / test (default, 2, 3, linux.2xlarge) (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-04-05T20:11:15.7322930Z     pytorch/torchdynamo#10 0x5586b2148c81 in run_mod /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:1037
2022-04-05T20:11:15.7323650Z     pytorch/torchdynamo#11 0x5586b2153c69 in PyRun_StringFlags /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:961
2022-04-05T20:11:15.7324759Z     pytorch/torchdynamo#12 0x5586b2153ccb in PyRun_SimpleStringFlags /home/builder/tkoch/workspace/python_1648536129212/work/Python/pythonrun.c:455
2022-04-05T20:11:15.7325683Z     pytorch/torchdynamo#13 0x5586b2153dc8 in pymain_run_command /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:420
2022-04-05T20:11:15.7326458Z     pytorch/torchdynamo#14 0x5586b2153dc8 in pymain_run_python /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:2907
2022-04-05T20:11:15.7326953Z     pytorch/torchdynamo#15 0x5586b2153dc8 in pymain_main /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:3460
2022-04-05T20:11:15.7328064Z     pytorch/torchdynamo#16 0x5586b215418b in _Py_UnixMain /home/builder/tkoch/workspace/python_1648536129212/work/Modules/main.c:3495
2022-04-05T20:11:15.7834391Z     pytorch/torchdynamo#17 0x7f94104d383f in __libc_start_main /build/glibc-S7Ft5T/glibc-2.23/csu/../csu/libc-start.c:291
2022-04-05T20:11:15.7834828Z     pytorch/torchdynamo#18 0x5586b20f9039 in _start (/opt/conda/bin/python3.7+0x1d8039)
2022-04-05T20:11:15.7835005Z 
2022-04-05T20:11:15.7835310Z SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in 
2022-04-05T20:11:15.8071385Z + retcode=1
2022-04-05T20:11:15.8071682Z + set -e
2022-04-05T20:11:15.8071846Z + return 1
2022-04-05T20:11:15.8074906Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX-* ]]
2022-04-05T20:11:15.8075396Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X ]]
2022-04-05T20:11:15.8076130Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX2-* ]]
2022-04-05T20:11:15.8076616Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]]
2022-04-05T20:11:15.8077217Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX512-* ]]
2022-04-05T20:11:15.8077716Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\5\1\2 ]]
2022-04-05T20:11:15.8079794Z + [[ linux-xenial-py3.7-clang7-asan-default == *tbb* ]]

🕵️‍♀️ 1 failure not recognized by patterns:

The following CI failures may be due to changes from the PR
Job Step Action
GitHub Actions pull / linux-bionic-rocm5.0-py3.7 / test (default, 2, 2, linux.rocm.gpu) Set up job 🔁 rerun

This comment was automatically generated by Dr. CI (expand for details).

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Mar 25, 2022
eellison pushed a commit that referenced this pull request Mar 25, 2022
ghstack-source-id: f60220bb5e31e49b89da74ded055c302b02e79c6
Pull Request resolved: #74785
@eellison eellison changed the title Add support for legacy constructors in JIT Add support for legacy tensor constructors in JIT Mar 25, 2022
@eellison eellison requested a review from ansley March 25, 2022 23:03
eellison pushed a commit that referenced this pull request Mar 28, 2022
ghstack-source-id: 853fde07fd30e0aaa9eb723f5418df171c2fc0e5
Pull Request resolved: #74785
@ansley ansley left a comment

Discussed offline
for (const auto& name : tensor_names) {
  if (obj.ptr() == py::module::import("torch").attr(name.first).ptr()) {
    return LegacyTensorConstructor::create(
        prim::LegacyTypedConstructor, name.second, at::kCPU);
This is my own lack of knowledge showing, but why do we assume that the Tensor will be on the CPU?


I should add a comment or just remove the argument. The legacy constructors are always instantiated on CPU. Separately from torch.LongTensor (CPU), there is torch.cuda.LongTensor (CUDA). I added the device argument without actually adding support for torch.cuda.LongTensor, so I can just remove it for now.
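To illustrate the distinction described above: the CPU/CUDA split lives in the constructor's qualified name, not in an argument. A hypothetical helper sketching that rule (the function name is illustrative, not part of the PR):

```python
def legacy_ctor_device(qualified_name: str) -> str:
    """Return the device implied by a legacy constructor's qualified name.

    Hypothetical helper for illustration: torch.LongTensor is CPU-only,
    while torch.cuda.LongTensor is the CUDA variant.
    """
    return "cuda" if qualified_name.startswith("torch.cuda.") else "cpu"

print(legacy_ctor_device("torch.LongTensor"))       # cpu
print(legacy_ctor_device("torch.cuda.LongTensor"))  # cuda
```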

eellison pushed a commit that referenced this pull request Mar 31, 2022
ghstack-source-id: 7cb4972b803dd7cdb87589490b18bfc68e4d6a71
Pull Request resolved: #74785
eellison commented Apr 4, 2022

@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

eellison pushed a commit that referenced this pull request Apr 5, 2022
ghstack-source-id: 769baf5c5208cd8010f4c6b5c5ddba6754823daf
Pull Request resolved: #74785
eellison commented Apr 5, 2022

@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Apr 6, 2022
Summary:
Pull Request resolved: #74785

Fix for https://github.com/facebookresearch/torchdynamo/issues/93

Because the constructors follow a non-standard input schema (variadic integers), they are handled specially in ir_emitter.

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D35362762

Pulled By: eellison

fbshipit-source-id: 960badf08ba2ab0818af5fd331aff3542051250f

github-actions bot commented Apr 6, 2022

Hey @eellison.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

Labels
cla signed oncall: jit Add this issue/PR to JIT oncall triage queue
3 participants