
Deprecate TypedStorage, its derived classes, and all of their public methods #85303

Closed

Conversation

@kurtamohler (Collaborator) commented Sep 19, 2022

Part of #85302

cc @albanD @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire @jansel @lezcano @fdrocha

BC-breaking note

Deprecate torch.Tensor.storage() in favor of torch.Tensor.untyped_storage()

Version 1.13

tensor.storage()

Version 2.0

tensor.untyped_storage()

Deprecate torch.TypedStorage and all its methods in favor of torch.UntypedStorage

Version 1.13

torch.TypedStorage(...)

Version 2.0

torch.UntypedStorage(...)

If you need to access individual elements in a storage as a particular dtype, you can simply create a tensor to view it:

torch.tensor(storage, dtype=...)

@kurtamohler kurtamohler added the module: python frontend For issues relating to PyTorch's Python frontend label Sep 19, 2022
pytorch-bot bot commented Sep 19, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85303

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 Failures

As of commit 9ec684c:

The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@ezyang (Contributor) left a comment

Can you doublecheck that the deprecation warning only gets raised once? Thanks!

@kurtamohler kurtamohler force-pushed the deprecate-TypedStorage branch 2 times, most recently from 0532a15 to bc5fe01 Compare September 20, 2022 19:11
@kurtamohler (Collaborator, Author) commented Sep 20, 2022

@ezyang, I've added a test to check that it only gets raised once unless warnings are cleared. I didn't add all of the functions to the test, but I can if we want to be that thorough
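The warn-once behavior under test can be sketched roughly as follows (a minimal sketch with hypothetical names, not the actual PyTorch test — the key point is that the `"default"` filter deduplicates per call site via the module's `__warningregistry__`, so clearing that registry makes the warning fire again):

```python
import warnings

def deprecated_fn():
    # stand-in for a deprecated TypedStorage method (hypothetical)
    warnings.warn("TypedStorage is deprecated", UserWarning, stacklevel=2)

def count_warnings(n_calls, clear_between=False):
    """Count how many warnings are actually emitted across n_calls."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("default")  # "default" = warn once per call site
        for _ in range(n_calls):
            deprecated_fn()
            if clear_between:
                # clearing the per-module registry makes the warning fire again
                deprecated_fn.__globals__.pop("__warningregistry__", None)
    return len(caught)
```

With the registry intact, repeated calls emit only one warning; clearing it between calls emits one warning per call.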

@ezyang (Contributor) commented Sep 20, 2022

nah that's good enough

@ezyang (Contributor) commented Sep 20, 2022

A good follow up would be to make sure pytorch proper doesn't raise these deprecations

@ezyang (Contributor) commented Sep 20, 2022

@pytorchbot merge

@pytorchmergebot (Collaborator)

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered without a flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@pytorchmergebot (Collaborator)

Merge failed

Reason: The following mandatory check(s) failed (Rule superuser):

Dig deeper by viewing the failures on hud


@kurtamohler (Collaborator, Author)

> A good follow up would be to make sure pytorch proper doesn't raise these deprecations

By this, do you mean because a DeprecationWarning doesn't raise by default? Actually, I forgot to use DeprecationWarning, I will fix that

@ezyang (Contributor) commented Sep 20, 2022

> By this, do you mean because a DeprecationWarning doesn't raise by default? Actually, I forgot to use DeprecationWarning, I will fix that

Different: what I mean is that we shouldn't trigger the deprecation warning if the user didn't explicitly use typed storage. If pytorch internally is hitting the DeprecationWarning, we should fix it (because otherwise it will spam users)

@kurtamohler (Collaborator, Author)

I see, good point

@kurtamohler (Collaborator, Author)

@ezyang , there are some places where internal calls are raising the warning. For instance, when serializing tensors.

I'm not sure what would be the best way to avoid internally generated warnings, but I thought of two options.

The first one is to use the warnings module filter to suppress the warning at every internal call site. But I don't think that's very good, since the filter has to do a string search and it will probably affect performance significantly.

The other is to add an underscored version of each function which does everything except for raising the warning. The public function would just raise the warning and then call the underscored function. We would change all the internal call sites to use the underscored version to avoid raising the warning. Something like this:

def _func(*args, **kwargs):
    # do stuff, without emitting the deprecation warning
    ...

def func(*args, **kwargs):
    _warn_typed_storage_removal()
    return _func(*args, **kwargs)

I think I like this solution, but what do you think? Is there a better way? I tried googling a common solution for this and haven't found anything yet

@ezyang (Contributor) commented Sep 22, 2022

No-warn variants sgtm, esp if you only need a few / we have a strategy for getting rid of them

@kurtamohler (Collaborator, Author)

Actually, I just realized that I can't do this for dunder functions. So instead, I think I'll have to add a kwarg _internal=False to all the functions. If it's False, the warning will get raised
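The `_internal` flag idea can be sketched like this (hypothetical class and message, not the actual PyTorch implementation): operator syntax such as `s[i]` always dispatches to `__getitem__`, so a private no-warn twin can't intercept it, but internal callers can invoke the dunder explicitly with `_internal=True` to stay silent:

```python
import warnings

def _warn_typed_storage_removal(stacklevel=2):
    # hypothetical deprecation message for illustration
    warnings.warn(
        "TypedStorage is deprecated",
        UserWarning,
        stacklevel=stacklevel + 1,
    )

class TypedStorageSketch:
    def __init__(self, data):
        self._data = list(data)

    # s[i] always dispatches here, so the keyword-only flag decides
    # whether the deprecation warning fires.
    def __getitem__(self, idx, *, _internal=False):
        if not _internal:
            _warn_typed_storage_removal()
        return self._data[idx]

# User code: s[0] warns.
# Internal code: s.__getitem__(0, _internal=True) stays silent.
```

Being keyword-only, the flag can't be passed accidentally by positional user code, so only deliberate internal call sites suppress the warning.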

@facebook-github-bot (Contributor)

/easycla

As part of the transition to the PyTorch Foundation, this project now requires contributions be covered under the new CLA. See #85559 for additional details.

This comment will trigger a new check of this PR. If you are already covered, you will simply see a new "EasyCLA" check that passes. If you are not covered, a bot will leave a new comment with a link to sign.

linux-foundation-easycla bot commented Oct 4, 2022

CLA Signed

The committers listed above are authorized under a signed CLA.

@lezcano lezcano added ciflow/trunk Trigger trunk jobs on your pull request ciflow/periodic Trigger jobs ran periodically on master (periodic.yml) on the PR labels Nov 1, 2022
@kurtamohler (Collaborator, Author)

Looking through the logs, the warnings aren't very noisy anymore. Most of the test logs have fewer than 30 of them, and most of those just come from separate test file executions (separate processes). Only two of the jobs have more than 100 warnings.

I'm sifting through all the logs now, starting with the ones that generate the most duplicates, and will soon post an overview of the duplicate counts for each job and what I know about them. I think there's only a handful of root causes left, and almost all of them can probably be investigated/fixed after this PR is merged (for instance, the duplicates from in-between warnings.catch_warnings() calls).

@ezyang (Contributor) commented Nov 2, 2022

I think this is good enough; thank you for the detailed investigation. I bet the catch_warnings footgun is the biggest culprit; I wonder if there's a way we can work around it in userland.

@kurtamohler (Collaborator, Author) commented Nov 2, 2022

I might as well post what I have right now.

Below are counts for how many times the warning was raised for each of the test CI jobs in the inductor, periodic, pull, and trunk workflows.

inductor workflow
warnings raised log file name
0 1_cuda11.6-py3.10-gcc7-sm86 test (inductor, 1, 7, linux.g5..txt
0 2_cuda11.6-py3.10-gcc7-sm86 test (inductor, 2, 7, linux.g5..txt
0 3_cuda11.6-py3.10-gcc7-sm86 test (inductor, 3, 7, linux.g5..txt
0 4_cuda11.6-py3.10-gcc7-sm86 test (inductor, 4, 7, linux.g5..txt
0 5_cuda11.6-py3.10-gcc7-sm86 test (inductor, 5, 7, linux.g5..txt
0 5_test.txt
0 6_cuda11.6-py3.10-gcc7-sm86 test (inductor, 6, 7, linux.g5..txt
0 7_cuda11.6-py3.10-gcc7-sm86 test (inductor, 7, 7, linux.g5..txt
periodic workflow
warnings raised log file name
0 1_buck-build-test buck-build-test.txt
0 1_linux-bionic-cuda11.6-py3.7-gcc7-debug test (default, 1,.txt
0 1_linux-bionic-cuda11.7-py3.7-gcc7-debug test (default, 1,.txt
0 1_linux-focal-rocm5.2-py3.8-slow test (slow, 1, 1, linux.ro.txt
14 2_win-vs2019-cuda11.7-py3 test (default, 2, 2, windows.8xla.txt
15 3_linux-bionic-cuda11.7-py3.7-gcc7-debug test (default, 3,.txt
18 4_linux-bionic-cuda11.6-py3.7-gcc7-debug test (default, 4,.txt
19 3_linux-bionic-cuda11.6-py3.7-gcc7-debug test (default, 3,.txt
20 1_linux-focal-rocm5.2-py3.8-distributed test (distributed,.txt
22 2_linux-bionic-cuda11.7-py3.7-gcc7-debug test (default, 2,.txt
27 3_win-vs2019-cuda11.7-py3 test (force_on_cpu, 1, 1, windows.txt
30 2_linux-bionic-cuda11.6-py3.7-gcc7-debug test (default, 2,.txt
30 4_linux-bionic-cuda11.7-py3.7-gcc7-debug test (default, 4,.txt
36 2_linux-focal-rocm5.2-py3.8-distributed test (distributed,.txt
51 1_win-vs2019-cuda11.7-py3 test (default, 1, 2, windows.8xla.txt
72 1_linux-bionic-cuda11.6-py3.9-gcc7 test (multigpu, 1, 1, li.txt
pull workflow
warnings raised log file name
0 1_linux-bionic-cuda11.6-py3.10-gcc7-bazel-test build-and-te.txt
0 1_linux-bionic-cuda11.6-py3.10-gcc7 test (default, 1, 4, li.txt
0 1_linux-focal-py3.7-clang10-onnx test (default, 1, 2, linux.txt
0 1_linux-focal-py3.7-clang7-asan test (default, 1, 5, linux..txt
0 1_linux-vulkan-bionic-py3.7-clang9 test (default, 1, 1, lin.txt
0 2_linux-focal-py3.7-clang10-onnx test (default, 2, 2, linux.txt
0 2_linux-focal-py3.7-clang7-asan test (default, 2, 5, linux..txt
0 30_build-and-test.txt
0 32_build-and-test.txt
0 34_build-and-test.txt
0 3_win-vs2019-cpu-py3 test (functorch, 1, 1, windows.4xlarge.txt
0 41_test.txt
0 46_test.txt
0 49_test.txt
0 52_test.txt
0 55_test.txt
0 58_test.txt
0 5_linux-focal-py3.7-gcc7 test (functorch, 1, 1, linux.2xlar.txt
0 61_test.txt
0 64_test.txt
0 6_linux-focal-py3.7-clang7-asan test (functorch, 1, 1, linu.txt
0 7_linux-bionic-py3.7-clang9 test (functorch, 1, 1, linux.2x.txt
0 8_linux-bionic-cuda11.6-py3.10-gcc7 test (functorch, 1, 1,.txt
0 8_linux-focal-py3.7-gcc7 test (backwards_compat, 1, 1, linu.txt
1 3_linux-focal-py3.7-gcc7 test (distributed, 1, 2, linux.2xl.txt
1 9_linux-bionic-cuda11.6-py3.10-gcc7 test (deploy, 1, 1, lin.txt
2 7_linux-focal-py3.7-gcc7 test (jit_legacy, 1, 1, linux.2xla.txt
5 4_linux-focal-py3.7-gcc7 test (distributed, 2, 2, linux.2xl.txt
5 7_linux-bionic-cuda11.6-py3.10-gcc7 test (distributed, 3, 3.txt
10 5_linux-focal-py3.7-clang7-asan test (default, 5, 5, linux..txt
13 3_linux-focal-py3.7-clang7-asan test (default, 3, 5, linux..txt
15 1_linux-bionic-py3_7-clang8-xla test (xla, 1, 1, linux.2xla.txt
19 6_linux-bionic-cuda11.6-py3.10-gcc7 test (distributed, 2, 3.txt
20 2_linux-bionic-cuda11.6-py3.10-gcc7 test (default, 2, 4, li.txt
20 2_linux-bionic-py3.7-clang9 test (default, 2, 2, linux.2xla.txt
21 1_win-vs2019-cpu-py3 test (default, 1, 2, windows.4xlarge).txt
23 3_linux-bionic-cuda11.6-py3.10-gcc7 test (default, 3, 4, li.txt
24 4_linux-bionic-cuda11.6-py3.10-gcc7 test (default, 4, 4, li.txt
26 4_linux-bionic-py3.7-clang9 test (crossref, 2, 2, linux.2xl.txt
27 2_linux-focal-py3.7-gcc7 test (default, 2, 2, linux.2xlarge.txt
34 6_linux-focal-py3.7-gcc7 test (docs_test, 1, 1, linux.2xlar.txt
36 3_linux-bionic-py3.7-clang9 test (crossref, 1, 2, linux.2xl.txt
39 1_linux-focal-py3.7-gcc7 test (default, 1, 2, linux.2xlarge.txt
40 2_win-vs2019-cpu-py3 test (default, 2, 2, windows.4xlarge).txt
42 4_linux-focal-py3.7-clang7-asan test (default, 4, 5, linux..txt
46 1_linux-bionic-py3.7-clang9 test (default, 1, 2, linux.2xla.txt
54 5_linux-bionic-cuda11.6-py3.10-gcc7 test (distributed, 1, 3.txt
69 2_linux-docs build-docs (python, linux.2xlarge, 30).txt
119 6_linux-bionic-py3.7-clang9 test (dynamo, 2, 2, linux.2xlar.txt
216 5_linux-bionic-py3.7-clang9 test (dynamo, 1, 2, linux.2xlar.txt
trunk workflow
warnings raised log file name
0 13_linux-bionic-cuda11.7-py3.10-gcc7 test (distributed, 3,.txt
0 1_android-emulator-build-test build-and-test.txt
0 1_linux-bionic-cuda11.7-py3.10-gcc7 test (default, 1, 4, li.txt
0 1_linux-bionic-py3.7-clang9-slow test (slow, 1, 1, linux.2x.txt
0 1_linux-focal-py3.7-clang7-tsan test (tsan, 1, 1, linux.2xl.txt
0 32_build-and-test.txt
0 35_test.txt
0 38_test.txt
0 3_macos-12-py3-arm64 test (functorch, 1, 1, macos-m1-12).txt
0 3_macos-12-py3-x86-64 test (functorch, 1, 1, macos-12).txt
0 41_test.txt
0 44_test.txt
0 47_test.txt
0 50_test.txt
0 52_Run MPS tests.txt
0 55_test.txt
0 58_test.txt
0 5_cuda11.6-py3.10-gcc7-sm86 test (functorch, 1, 1, linux.g5.txt
0 5_linux-bionic-cuda11.7-py3.10-gcc7 test (functorch, 1, 1,.txt
0 60_test.txt
0 6_linux-bionic-cuda11.7-py3.10-gcc7 test (slow, 1, 2, linux.txt
0 6_win-vs2019-cuda11.6-py3 test (functorch, 1, 1, windows.8x.txt
0 7_linux-bionic-cuda11.7-py3.10-gcc7 test (slow, 2, 2, linux.txt
2 1_macos-12-py3-arm64-mps Run MPS tests.txt
3 10_linux-bionic-cuda11.7-py3.10-gcc7 test (jit_legacy, 1, 1.txt
3 1_win-vs2019-cuda11.6-py3 test (default, 1, 5, windows.8xla.txt
6 3_win-vs2019-cuda11.6-py3 test (default, 3, 5, windows.8xla.txt
6 4_cuda11.6-py3.10-gcc7-sm86 test (default, 4, 4, linux.g5.4.txt
9 5_win-vs2019-cuda11.6-py3 test (default, 5, 5, windows.8xla.txt
12 4_linux-bionic-cuda11.7-py3.10-gcc7 test (default, 4, 4, li.txt
14 2_cuda11.6-py3.10-gcc7-sm86 test (default, 2, 4, linux.g5.4.txt
14 4_win-vs2019-cuda11.6-py3 test (default, 4, 5, windows.8xla.txt
17 2_linux-focal-rocm5.2-py3.8 test (default, 2, 2, linux.rocm.txt
18 3_cuda11.6-py3.10-gcc7-sm86 test (default, 3, 4, linux.g5.4.txt
20 12_linux-bionic-cuda11.7-py3.10-gcc7 test (distributed, 2,.txt
20 2_macos-12-py3-arm64 test (default, 2, 2, macos-m1-12).txt
24 1_parallelnative-linux-focal-py3.7-gcc7 test (default, 1, 2.txt
25 3_linux-bionic-cuda11.7-py3.10-gcc7 test (default, 3, 4, li.txt
27 7_win-vs2019-cuda11.6-py3 test (force_on_cpu, 1, 1, windows.txt
29 1_cuda11.6-py3.10-gcc7-sm86 test (default, 1, 4, linux.g5.4.txt
30 2_linux-bionic-cuda11.7-py3.10-gcc7 test (default, 2, 4, li.txt
31 8_linux-bionic-cuda11.7-py3.10-gcc7 test (nogpu_AVX512, 1,.txt
31 9_linux-bionic-cuda11.7-py3.10-gcc7 test (nogpu_NO_AVX2, 1,.txt
33 2_win-vs2019-cuda11.6-py3 test (default, 2, 5, windows.8xla.txt
42 2_parallelnative-linux-focal-py3.7-gcc7 test (default, 2, 2.txt
44 1_macos-12-py3-arm64 test (default, 1, 2, macos-m1-12).txt
46 1_linux-focal-rocm5.2-py3.8 test (default, 1, 2, linux.rocm.txt
46 2_macos-12-py3-x86-64 test (default, 2, 2, macos-12).txt
58 11_linux-bionic-cuda11.7-py3.10-gcc7 test (distributed, 1,.txt
75 1_macos-12-py3-x86-64 test (default, 1, 2, macos-12).txt

Most of these warnings are raised because test code explicitly uses the deprecated functions, and there are multiple warnings in one job because they get triggered by different test files, which are run as separate processes.

But some of the logs show actual duplicates: multiple warnings generated by the execution of a single test file. There seem to be only a few root causes remaining for these. Here are some that I looked into (I haven't gone through all of them yet).


75 from 1_macos-12-py3-x86-64 test (default, 1, 2, macos-12).txt

  • 6 from test_serialization.py
  • ~50 from test_quantization.py

58 from 11_linux-bionic-cuda11.7-py3.10-gcc7 test (distributed, 1,.txt

  • 5 from distributed/test_dynamo_distributed.py
  • 52 from distributed/fsdp/test_fsdp_mixed_precision.py

46 from 2_macos-12-py3-x86-64 test (default, 2, 2, macos-12).txt

  • 9 from test_view_ops.py -k test_as_strided_gradients_cpu because of warnings.catch_warnings()
  • 9 from test_view_ops.py -k test_as_strided_gradients_lazy because of warnings.catch_warnings()
  • 6 from test_torch.py

46 from 1_linux-focal-rocm5.2-py3.8 test (default, 1, 2, linux.rocm.txt

  • 6 from test_serialization.py
  • 9 from test_view_ops.py -k test_as_strided_gradients_cuda because of warnings.catch_warnings()
  • 8 from test_nn.py -k test_share_memory

44 from 1_macos-12-py3-arm64 test (default, 1, 2, macos-m1-12).txt

  • 9 from test_view_ops.py -k test_as_strided_gradients_cpu because of warnings.catch_warnings()
  • 6 from test_torch.py

42 from 2_parallelnative-linux-focal-py3.7-gcc7 test (default, 2, 2.txt

  • 6 from test_serialization.py
  • 9 from test_view_ops.py -k test_as_strided_gradients_cpu because of warnings.catch_warnings()
  • 9 from test_view_ops.py -k test_as_strided_gradients_lazy because of warnings.catch_warnings()

33 from 2_win-vs2019-cuda11.6-py3 test (default, 2, 5, windows.8xla.txt

  • 5 from test_serialization.py
  • 9 from test_view_ops.py -k test_as_strided_gradients_cuda because of warnings.catch_warnings()
  • 8 from test_nn.py -k test_share_memory

31 from 9_linux-bionic-cuda11.7-py3.10-gcc7 test (nogpu_NO_AVX2, 1,.txt

  • 5 from test_serialization.py

Also, not shown above: the crossref and dynamo jobs have an unusual number of duplicates between tests in a single file. I'm not sure why. It does reproduce locally if I set PYTORCH_TEST_WITH_CROSSREF=1 or PYTORCH_TEST_WITH_DYNAMO=1

@ezyang (Contributor) commented Nov 2, 2022

Re the crossref failures: part of the crossref process involves monkeying around with the storage API. Right now it's using the deprecated API, which we should fix:

            self.storage_memo[swr] = (
                callback(
                    lambda: torch.empty(s.size(), dtype=torch.uint8, device="meta")
                )
                .storage()
                .untyped()

Probably dynamo is similar.

@kurtamohler (Collaborator, Author)

@ezyang, where is that code snippet from? Having trouble finding it

@ezyang (Contributor) commented Nov 2, 2022

torch/_subclasses/meta_utils.py

@kurtamohler (Collaborator, Author) commented Nov 2, 2022

Oh, I already changed that to use the internal functions, so that's not where the duplicates are coming from.

And actually, I misspoke: the latest update I made fixed most of the duplicates from crossref. As the table for the pull workflow above shows, the two crossref jobs only have 26 and 36 warnings now, and looking at the logs, I can see that these are mostly due to warnings.catch_warnings().

The two dynamo jobs, on the other hand, still have the highest warning counts. Here's a summary of most of the duplicates from one of them:

216 from 5_linux-bionic-py3.7-clang9 test (dynamo, 1, 2, linux.2xlar.txt

  • 4 from PYTORCH_TEST_WITH_DYNAMO=1 python test/test_indexing.py -k test_advancedindex_cpu_float64
  • 82 from PYTORCH_TEST_WITH_DYNAMO=1 python test/test_view_ops.py
    • 12 from test_as_strided_gradients_cpu because of warnings.catch_warnings()
    • 5 from test_flatten_view_cpu
    • 10 from test_as_strided_gradients_lazy because of warnings.catch_warnings()
  • 13 from PYTORCH_TEST_WITH_DYNAMO=1 python test/test_bundled_inputs.py

Looking at tracebacks added to the warning messages, most (if not all) of these are triggered by calls to the public TypedStorage function calls in the test code, not by any public function calls in torch/_dynamo/. I don't know why enabling dynamo creates duplicates--it seems possible that something is resetting the warning filter. But I think replacing some of the public function calls in the test code with private function calls is probably a good idea. Even though it doesn't fix the root cause, it will make the tests quieter. Just replacing two occurrences in test_view_ops.py drops the warning count for that file in half in my local build

@kurtamohler (Collaborator, Author) commented Nov 2, 2022

The warnings.catch_warnings() issue doesn't seem to have a workaround at the moment--none has been suggested on the issue (python/cpython#73858), which was opened in 2017
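The footgun itself can be reproduced in a few lines (a minimal sketch; the deprecation message is illustrative): exiting catch_warnings() restores the filter list and bumps the warnings module's internal filters version, which invalidates every module's __warningregistry__, so a warning that was already deduplicated fires again.

```python
import warnings

def emit():
    # one fixed call site, so "default" filtering dedupes repeat calls
    warnings.warn("TypedStorage is deprecated")

def run_demo():
    warnings.simplefilter("default")  # warn once per call site
    emitted = []
    orig_show = warnings.showwarning
    warnings.showwarning = lambda *a, **k: emitted.append(1)
    try:
        emit()
        emit()                 # deduplicated: still only one warning
        first = len(emitted)
        with warnings.catch_warnings():
            pass               # exit restores filters and bumps the version
        emit()                 # registry invalidated: warns again
        second = len(emitted)
    finally:
        warnings.showwarning = orig_show
    return first, second
```

run_demo() returns (1, 2): one warning before the catch_warnings block, then a duplicate immediately after it, even though nothing inside the block touched the filters.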

@kurtamohler (Collaborator, Author) commented Nov 2, 2022

There is somewhat of a workaround, but as shown below it won't work in practice: if warnings are raised inside the catch_warnings context, the version of the restored registry has to be incremented accordingly, and I haven't found a way to determine what version number the warnings module expects. There might be some way, though

import warnings

for _ in range(3):
    try:
        saved_registry = __warningregistry__
    except NameError:
        saved_registry = None

    with warnings.catch_warnings():
        pass

    if saved_registry is not None:
        saved_registry['version'] += 2
        __warningregistry__ = saved_registry

    warnings.warn('warning')
    warnings.warn(RuntimeWarning('another warning'))

Output:

/home/endoplasm/tmp/tmp.py:16: UserWarning: warning
  warnings.warn('warning')
/home/endoplasm/tmp/tmp.py:17: RuntimeWarning: another warning
  warnings.warn(RuntimeWarning('another warning'))

EDIT: I guess we could find the expected version number by raising a sort of dummy warning after exiting catch_warnings, which will then update the __warningregistry__ with the correct version; then we can overwrite it with the saved_registry and correct the version number on it. But then the problem is that we'd be emitting the dummy warning after every catch_warnings call...

@ezyang (Contributor) commented Nov 3, 2022

dummy warning after catch warning doesn't seem like a huge problem

@kurtamohler (Collaborator, Author)

Yeah we can just filter out the dummy warning and it will still update the version number. This seems to basically work:

import warnings

class my_catch_warnings(warnings.catch_warnings):

    def __enter__(self):
        global __warningregistry__
        try:
            self._saved_registry = __warningregistry__.copy()
        except NameError:
            self._saved_registry = None

        super().__enter__()

    def __exit__(self, *args, **kwargs):
        super().__exit__(*args, **kwargs)

        if self._saved_registry is not None:
            global __warningregistry__
            dummy_message = 'dummy warning to find out warning registry version number'
            warnings.filterwarnings('ignore', message=dummy_message)
            warnings.warn(dummy_message)
            self._saved_registry['version'] = __warningregistry__['version']
            __warningregistry__ = self._saved_registry

for i in range(3):
    with my_catch_warnings():
        warnings.warn('inside catch_warnings')

    warnings.warn('this should only emit once')
    warnings.warn(RuntimeWarning('and this should only emit once too'))

Output:

/home/endoplasm/develop/python_stuff/catch_warnings_workaround.py:27: UserWarning: inside catch_warnings
  warnings.warn('inside catch_warnings')
/home/endoplasm/develop/python_stuff/catch_warnings_workaround.py:29: UserWarning: this should only emit once
  warnings.warn('this should only emit once')
/home/endoplasm/develop/python_stuff/catch_warnings_workaround.py:30: RuntimeWarning: and this should only emit once too
  warnings.warn(RuntimeWarning('and this should only emit once too'))
/home/endoplasm/develop/python_stuff/catch_warnings_workaround.py:27: UserWarning: inside catch_warnings
  warnings.warn('inside catch_warnings')
/home/endoplasm/develop/python_stuff/catch_warnings_workaround.py:27: UserWarning: inside catch_warnings
  warnings.warn('inside catch_warnings')

facebook-github-bot pushed a commit to pytorch/multipy that referenced this pull request Nov 7, 2022
Summary:
As part of my PR pytorch/pytorch#85303, I'm renaming `TypedStorage._storage` to `TypedStorage._untyped_storage` for better clarity, and since MultiPy depends on the old name, a CI job is failing on my PR. There is a public function `TypedStorage.untyped()` which is probably better to use in this case, and doing so will fix the CI failure for me

cc ezyang

Pull Request resolved: #221

Reviewed By: priyaramani

Differential Revision: D41086182

Pulled By: PaliC

fbshipit-source-id: 494a05c51b2e9ce29724f9bfa2728eee26e43ff7
@kurtamohler (Collaborator, Author)

@pytorchbot merge -f "upstream CI failure"

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


@kurtamohler (Collaborator, Author)

BTW, before merging, I checked to make sure that the most recent update had similar warning counts in CI

kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request Dec 10, 2022
pytorchmergebot pushed a commit that referenced this pull request Dec 12, 2022
#85303 added a patch to `torch.testing.assert_close` to handle `torch.storage.TypedStorage`'s. This change is not reflected in the docs and is not intended for the public API. This PR removes the patch once again and moves the behavior to `TestCase.assertEqual` instead. Meaning, `TypedStorage`'s are again not supported by the public API, but the behavior is the same for all internal use cases.

Pull Request resolved: #89557
Approved by: https://github.com/kurtamohler, https://github.com/mruberry
Labels: ciflow/inductor, ciflow/periodic, ciflow/trunk, cla signed, Merged, module: dynamo, module: python frontend, open source, release notes: distributed (c10d), with-ssh

6 participants