Fix some typed storage is deprecated warnings. #89867
Conversation
Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/89867
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Failure as of commit 3958eb5: The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: d451854c9cc7375261717d599c8b98dba354b8ed Pull Request resolved: #89867
SGTM
If you want to test this, you can do the same as this test:
Lines 7171 to 7211 in a029ec2
    def test_warn_on_not_implemented_with_fallback(self):
        _, _, _, op = self._get_not_implemented_op()

        script = f"""
import os
# MUST happen before pytorch's import
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
import warnings

with warnings.catch_warnings(record=True) as w:
    import torch

if len(w) > 0:
    print(w)
    exit(1)

# This should run just fine and raise warning about perf
with warnings.catch_warnings(record=True) as w:
    {op}

if len(w) != 1:
    print(w)
    exit(2)
"""
        try:
            subprocess.check_output(
                [sys.executable, '-W', 'all', '-c', script],
                stderr=subprocess.STDOUT,
                # On Windows, opening the subprocess with the default CWD makes `import torch`
                # fail, so just set CWD to this script's directory
                cwd=os.path.dirname(os.path.realpath(__file__)),)
        except subprocess.CalledProcessError as e:
            if e.returncode == 1:
                self.assertTrue(False, "There was a warning when importing torch when PYTORCH_ENABLE_MPS_FALLBACK is set. " +
                                e.output.decode("utf-8"))
            elif e.returncode == 2:
                self.assertTrue(False, "There wasn't exactly one warning when running not implemented op with "
                                f"PYTORCH_ENABLE_MPS_FALLBACK set. {e.output}")
            else:
                self.assertTrue(False, "Running a not implemented op failed even though PYTORCH_ENABLE_MPS_FALLBACK is set. " +
                                e.output.decode("utf-8"))
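The test above counts warnings with `warnings.catch_warnings(record=True)` in a subprocess. As a minimal standalone sketch of that counting pattern, here is the same idea with a hypothetical stdlib-only `noisy_op` standing in for an MPS op that falls back to CPU (the function names are illustrative, not from torch):

```python
import warnings

def count_warnings(fn):
    """Run fn and return the list of warnings it emitted."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")  # record every warning, even repeats
        fn()
    return caught

def noisy_op():
    # Stand-in for an op that falls back to CPU and warns about performance.
    warnings.warn("op not implemented on this backend, falling back", UserWarning)

quiet = count_warnings(lambda: None)
noisy = count_warnings(noisy_op)
assert len(quiet) == 0
assert len(noisy) == 1 and "falling back" in str(noisy[0].message)
```

The PR's test runs the import in a fresh subprocess instead, because warnings raised at `import torch` time can only be observed once per interpreter.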
Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: dedb6aafef2ff6b4789bbd9af39d7be403609245 Pull Request resolved: #89867
functorch/_src/aot_autograd.py
Outdated
@@ -399,7 +399,7 @@ def filter_and_record_aliased_outs(outputs):
     # This will be more complicated when you have multiple _base tensors aliasing the same
     # underlying storage, when we eventually handle that.
     # We'll need to ensure that we generate the view off of the right base.
-    inp_storage_refs = {StorageWeakRef(inpt.storage()): idx for idx, inpt in enumerate(flat_f_args)}
+    inp_storage_refs = {StorageWeakRef(inpt._storage()): idx for idx, inpt in enumerate(flat_f_args)}
Just double checking that you remember that Tensor._storage() gives an UntypedStorage, whereas Tensor.storage() gave a TypedStorage. But if these changes aren't causing any failures, then that's probably okay
yes, I want untyped here
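The exchange above turns on the split between the deprecated `Tensor.storage()` (which returned a TypedStorage) and `Tensor._storage()` (which returns an UntypedStorage). As a stdlib-only sketch of both the deprecation pattern and the weak-ref-keyed `inp_storage_refs` dict from the diff, here are hypothetical `UntypedStorage`/`MockTensor` stand-ins (not the real torch classes):

```python
import warnings
import weakref

class UntypedStorage:
    """Stand-in for torch's untyped storage: a raw byte buffer with no dtype."""
    def __init__(self, nbytes):
        self.nbytes = nbytes

class MockTensor:
    """Hypothetical tensor whose storage() is deprecated in favor of _storage()."""
    def __init__(self, storage):
        self._untyped = storage

    def storage(self):
        # The deprecated typed accessor warns on every call.
        warnings.warn("TypedStorage is deprecated", UserWarning)
        return self._untyped

    def _storage(self):
        # The untyped accessor returns the storage silently.
        return self._untyped

# Two views sharing one storage, plus an independent tensor.
shared = UntypedStorage(64)
tensors = [MockTensor(shared), MockTensor(shared), MockTensor(UntypedStorage(16))]

# The inp_storage_refs pattern: key a dict by weak references to storages,
# so aliased inputs collapse to one entry and the map doesn't keep storages alive.
inp_storage_refs = {weakref.ref(t._storage()): i for i, t in enumerate(tensors)}
assert len(inp_storage_refs) == 2  # the two aliased views share one key
```

Using `_storage()` in the comprehension avoids emitting a deprecation warning per input tensor, which is the point of the change in the diff; the real code uses torch's StorageWeakRef wrapper rather than a bare `weakref.ref`.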
Signed-off-by: Edward Z. Yang <ezyang@fb.com> cc zou3519 Chillee samdow soumith [ghstack-poisoned]
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 26e382d7e38b77b8614dc5b46b901860deaa4bec Pull Request resolved: #89867
Signed-off-by: Edward Z. Yang <ezyang@fb.com> cc zou3519 Chillee samdow soumith [ghstack-poisoned]
Signed-off-by: Edward Z. Yang <ezyang@fb.com> ghstack-source-id: 521dcf5a07037eda79d05fd4c51d8368b29402b0 Pull Request resolved: #89867
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Merge failed. Reason: 2 additional jobs have failed, first few of them are: inductor, inductor / cuda11.6-py3.10-gcc7-sm86 / test (inductor_timm, 2, 2, linux.g5.4xlarge.nvidia.gpu). Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -f "flaky ci"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Signed-off-by: Edward Z. Yang <ezyang@fb.com> Pull Request resolved: pytorch#89867 Approved by: https://github.com/albanD
Stack from ghstack (oldest at bottom):
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
cc @zou3519 @Chillee @samdow @soumith