[inductor] Don't import torchvision #93027
Conversation
Fixes #93019

Since PyTorch regularly breaks binary compatibility, `torchvision` must be compiled with the exact same version of PyTorch. If not, importing it may cause mysterious failures at runtime due to binary incompatibility. This fixes the issue by delaying the `make_fallback` call for `torchvision.roi_align` until the operator appears in a graph being lowered, by which point the user must have imported torchvision themselves.
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/93027

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures as of commit dffb228.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: This PR is too stale; the last push date was more than 3 days ago. Please rebase and try again. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge remote-tracking branch into …n-dev-setup (898 commits from origin, including "[inductor] Don't import torchvision (pytorch#93027)").
Stack from ghstack (oldest at bottom):
Fixes #93019

Since PyTorch regularly breaks binary compatibility, `torchvision` must be compiled with the exact same version of PyTorch. If not, importing it may cause mysterious failures at runtime due to binary incompatibility.

This fixes the issue by delaying the `make_fallback` call for `torchvision.roi_align` until the operator appears in a graph being lowered, by which point the user must have imported torchvision themselves.
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire
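The deferred-registration idea can be sketched in plain Python. This is a hypothetical illustration, not inductor's actual internals: `lowerings`, `make_fallback`, `register_deferred_fallback`, and `lower_node` are assumed names standing in for the real machinery. The point is that no fallback is registered (and no defining library would need to be imported) until the operator actually shows up during lowering.

```python
# Hypothetical sketch of deferred fallback registration. By the time an
# operator appears in a graph being lowered, the user must already have
# imported the library that defines it, so registering then is safe.

lowerings = {}            # op name -> lowering function, filled eagerly for core ops
deferred_fallbacks = {}   # op name -> registration function, run lazily

def make_fallback(op_name):
    # Register a trivial fallback "lowering" that just tags the call.
    lowerings[op_name] = lambda *args: ("fallback", op_name, args)

def register_deferred_fallback(op_name):
    # Record the intent to fall back WITHOUT importing the defining library.
    deferred_fallbacks[op_name] = make_fallback

def lower_node(op_name, *args):
    # Lazily register the fallback the first time the op is actually seen.
    if op_name not in lowerings and op_name in deferred_fallbacks:
        deferred_fallbacks.pop(op_name)(op_name)
    return lowerings[op_name](*args)

register_deferred_fallback("torchvision.roi_align")
result = lower_node("torchvision.roi_align", "boxes")
```

Before the fix, the analogous `make_fallback` call ran at import time, which forced an eager `import torchvision` and tripped the binary-incompatibility failures described above.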