[BE][Easy] export explicitly imported public submodules #127703
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/127703
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (3 unrelated failures) As of commit d7fd93d with merge base 75b0720:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk.
UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Add top-level submodules `torch.{storage,serialization,functional,amp,utils,overrides,types}` Resolves pytorch#126401 ghstack-source-id: 4be1063ca731cfbb01266b1ded2e94e5cf43ab9a Pull Request resolved: pytorch#127703
Add top-level submodules `torch.{storage,serialization,functional,amp,utils,overrides,types}` Resolves pytorch#126401 ghstack-source-id: 139555ce7eccae5e42562d5759cd4ae79fc00881 Pull Request resolved: pytorch#127703
Some of these changes seem related to #127688 and probably should wait on that.
@@ -6,21 +6,21 @@
 future.
 """

from . import lr_scheduler, swa_utils
Is ufmt forcing the removal of all these relative imports?
No. This is updated by hand.

Absolute imports, or explicit relative imports from siblings, are recommended by PEP 8:

> Absolute imports are recommended, as they are usually more readable and tend to be better behaved (or at least give better error messages) if the import system is incorrectly configured (such as when a directory inside a package ends up on `sys.path`):
>
>     import mypkg.sibling
>     from mypkg import sibling
>     from mypkg.sibling import example
>
> However, explicit relative imports are an acceptable alternative to absolute imports, especially when dealing with complex package layouts where using absolute imports would be unnecessarily verbose:
>
>     from . import sibling
>     from .sibling import example
>
> Standard library code should avoid complex package layouts and always use absolute imports.
We can enforce this by adding the ruff rule `TID` (`flake8-tidy-imports`) with:
[tool.ruff.lint.flake8-tidy-imports]
ban-relative-imports = "all" # defaults to "parents"; see https://docs.astral.sh/ruff/settings/#lintflake8-tidy-imports
Can't this affect import ordering, which can be bad for our Python submodules in torch that do have side effects (or cause cyclic import errors where none existed before)?
Changing a relative import to the identical absolute import does not change the side effects of the import statement.
Doesn't it change the import order, though? The relative import would actually trigger the import of the file, while the absolute import would expect the partially initialized module to already contain the corresponding entry.
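The concern can be checked with a throwaway package. A minimal sketch (the package name `pkg` is hypothetical, not PyTorch code) suggesting that an absolute `from pkg import mod` inside a partially initialized `pkg/__init__.py` still triggers the submodule import rather than failing on the missing attribute:

```python
import os
import sys
import tempfile

# Build a throwaway package on disk (hypothetical name `pkg`).
tmp = tempfile.mkdtemp()
pkgdir = os.path.join(tmp, "pkg")
os.makedirs(pkgdir)

# The package body uses an *absolute* import of its own submodule while
# `pkg` itself is still only partially initialized.
with open(os.path.join(pkgdir, "__init__.py"), "w") as f:
    f.write("from pkg import mod\n")
with open(os.path.join(pkgdir, "mod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, tmp)
import pkg

# The absolute import succeeded mid-initialization: the fromlist handling
# falls back to importing the submodule when the attribute is not yet set.
print(pkg.mod.VALUE)  # → 42
```

So at least in this simple case the partially initialized module is not a problem: the import machinery imports the submodule on demand instead of looking up a not-yet-set attribute.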
This change is made based on:
- The relative import is converted to an absolute import in the bytecode eval loop:
$ echo 'from torch.optim import adam' | python3 -m dis -
0 0 RESUME 0
1 2 LOAD_CONST 0 (0)
4 LOAD_CONST 1 (('adam',))
6 IMPORT_NAME 0 (torch.optim)
8 IMPORT_FROM 1 (adam)
10 STORE_NAME 1 (adam)
12 POP_TOP
14 RETURN_CONST 2 (None)
$ echo 'from . import adam' | python3 -m dis -
0 0 RESUME 0
1 2 LOAD_CONST 0 (1)
4 LOAD_CONST 1 (('adam',))
6 IMPORT_NAME 0
8 IMPORT_FROM 1 (adam)
10 STORE_NAME 1 (adam)
12 POP_TOP
14 RETURN_CONST 2 (None)
`from . import adam` is converted to `__import__(f'{__name__}.adam')`, where `__name__ == 'torch.optim'`. They are semantically identical.
- See [BE][Easy] export explicitly imported public submodules #127703 (comment): Absolute imports, or relative imports from siblings, are recommended by PEP8.
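As a quick cross-check of the equivalence claim, here is a stdlib-based sketch (using `json` rather than `torch` so it runs without PyTorch installed): the relative and absolute spellings resolve to the very same module object.

```python
import importlib

# Absolute form and relative form of the same submodule import.
m_abs = importlib.import_module("json.decoder")
m_rel = importlib.import_module(".decoder", package="json")

# Both names denote one module object in sys.modules.
print(m_abs is m_rel)  # → True
```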
How about absolute paths when combined with circular dependencies? Example:

`torch/optim/__init__.py` imports `torch.optim._multi_tensor`

and

`torch/optim/_multi_tensor/__init__.py` imports `torch.optim`

This pattern, combined with absolute import paths, causes unpredictable and hard-to-debug failures.
> This pattern, combined with absolute import paths, causes unpredictable and hard-to-debug failures.

You can add a `raise ImportError` statement at the top of the file and look at the traceback of `python3 -c 'import torch'`. This can be used to find the first dependent of the submodule.
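The tip above can be sketched end-to-end with a throwaway package (the name `faultypkg` is hypothetical): a submodule that raises `ImportError` at the top makes the resulting traceback name its first dependent.

```python
import os
import sys
import tempfile
import traceback

# Build a throwaway package whose __init__ is the first dependent of `sub`.
tmp = tempfile.mkdtemp()
pkgdir = os.path.join(tmp, "faultypkg")
os.makedirs(pkgdir)
with open(os.path.join(pkgdir, "__init__.py"), "w") as f:
    f.write("from faultypkg import sub\n")  # first dependent of `sub`
with open(os.path.join(pkgdir, "sub.py"), "w") as f:
    f.write("raise ImportError('who imported me first?')\n")

sys.path.insert(0, tmp)
try:
    import faultypkg
except ImportError:
    tb = traceback.format_exc()

# The traceback names faultypkg/__init__.py as the first dependent.
print("faultypkg/__init__.py" in tb.replace(os.sep, "/"))  # → True
```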
Add top-level submodules `torch.{storage,serialization,functional,amp,utils,overrides,types}` Resolves pytorch#126401 ghstack-source-id: 08adbe6e709d2963258855e1a9ca156b4cbd5f88 Pull Request resolved: pytorch#127703
---- - Sort import via `usort` - Change relative import `from . import xxx` to absolute import `from torch import xxx` Pull Request resolved: #127708 Approved by: https://github.com/ezyang ghstack dependencies: #127703
Pull Request resolved: #127709 Approved by: https://github.com/ezyang ghstack dependencies: #127703, #127708
Pull Request resolved: #127710 Approved by: https://github.com/ezyang ghstack dependencies: #127703, #127708, #127709
Add top-level submodules `torch.{storage,serialization,functional,amp,utils,overrides,types}` Resolves pytorch#126401 ghstack-source-id: 95329001fef0a504fd80cf02afc1ea379eecb05e Pull Request resolved: pytorch#127703
Add top-level submodules `torch.{storage,serialization,functional,amp,overrides,types}` Pull Request resolved: pytorch#127703 Approved by: https://github.com/ezyang
---- - Sort import via `usort` - Change relative import `from . import xxx` to absolute import `from torch import xxx` Pull Request resolved: pytorch#127708 Approved by: https://github.com/ezyang ghstack dependencies: pytorch#127703
Pull Request resolved: pytorch#127709 Approved by: https://github.com/ezyang ghstack dependencies: pytorch#127703, pytorch#127708
Pull Request resolved: pytorch#127710 Approved by: https://github.com/ezyang ghstack dependencies: pytorch#127703, pytorch#127708, pytorch#127709
Summary: Pull Request resolved: pytorch#129095

- PR pytorch#127703 introduced a circular dependency: `torch/optim/__init__.py` imports `torch.optim._multi_tensor` and `torch/optim/_multi_tensor/__init__.py` imports `torch.optim`. This seemed to work fine (green signals everywhere) but caused some internal test failures after it landed: an infinite recursion during import.
- PR pytorch#128875 attempted to fix this by removing the import from `torch/optim/__init__.py`. This seemed to work fine (green signals everywhere, and the failing tests started passing), but a smaller number of tests started failing: unable to import `torch.optim._multi_tensor`.
- This diff re-introduces the import, but after `torch.optim` is fully initialized.

Test Plan: CI signals

Differential Revision: D58792889
Summary: Pull Request resolved: pytorch#129132

This avoids some internal test failures caused by "infinite" recursion during import. Example: P1430048846

A bit of history:

- PR pytorch#127703 introduced a circular dependency: `torch/optim/__init__.py` imports `torch.optim._multi_tensor` and `torch/optim/_multi_tensor/__init__.py` imports `torch.optim`. This seemed to work fine (green signals everywhere) but caused some internal test failures after it landed: an infinite recursion during import.
- PR pytorch#128875 attempted to fix this by removing the import from `torch/optim/__init__.py`. This seemed to work fine (green signals everywhere, and the failing tests started passing), but a smaller number of tests started failing: unable to import `torch.optim._multi_tensor`.
- This diff re-introduces the import but avoids the infinite recursion by using relative package paths.

Test Plan: CI signals. Previously failing tests are passing:

* https://www.internalfb.com/intern/testinfra/testrun/4785074840480058
* https://www.internalfb.com/intern/testinfra/testrun/281475349103659
* https://www.internalfb.com/intern/testinfra/testrun/11540474083575936

Differential Revision: D58815471
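A minimal sketch of breaking such a cycle (hypothetical package names `optimpkg`/`_multi`, not the actual PyTorch layout or the exact fix in this diff): the parent imports the subpackage only after its own public names are fully defined, so the back-edge into the parent finds everything it needs.

```python
import os
import sys
import tempfile

# Build a throwaway package pair reproducing the circular dependency.
tmp = tempfile.mkdtemp()
sub = os.path.join(tmp, "optimpkg", "_multi")
os.makedirs(sub)

with open(os.path.join(tmp, "optimpkg", "__init__.py"), "w") as f:
    f.write(
        "class Optimizer:\n"
        "    pass\n"
        "# Import the subpackage at the *end*, once Optimizer exists.\n"
        "from optimpkg import _multi\n"
    )
with open(os.path.join(sub, "__init__.py"), "w") as f:
    # The subpackage imports back into the parent (the circular edge).
    f.write("import optimpkg\nBASE = optimpkg.Optimizer\n")

sys.path.insert(0, tmp)
import optimpkg

# The cycle resolves: the subpackage saw the parent's class during import.
print(optimpkg._multi.BASE is optimpkg.Optimizer)  # → True
```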
@fbgheith reports that this broke some lazy import logic (still trying
This has been a big source of headaches: depending on the exact ordering of imports in the PyTorch client code, one of three things would happen: (1) great majority: things work fine. This is just empirical data for this pair of dependencies.
We will be able to do this after we enable
Stack from ghstack (oldest at bottom):

- `torch.utils.data` #127705
- `torch.utils.data` #127704
- `torch.utils` #128342
- `torch/*.py` #127707
- `torch/storage.py` #127706
- `torch/__init__.py` #127710
- `__all__` in `torch/__init__.py` #127709
- `torch/__init__.py` #127708

Add top-level submodules `torch.{storage,serialization,functional,amp,overrides,types}`

cc @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang