Add USE_NAMEDTENSOR compilation flag. #20162
Conversation
Sets the NAMEDTENSOR_ENABLED macro (in cpp) and tools.setup_helpers.env.NAMEDTENSOR_ENABLED.

Test Plan:
- Compile with USE_NAMEDTENSOR=1. Verify that `torch.__config__.show()` has `-DNAMEDTENSOR_ENABLED` in the CXX flags.
- Compile without the flag. Verify that `-DNAMEDTENSOR_ENABLED` is not present in `torch.__config__.show()`.
@@ -175,6 +175,7 @@ def run_cmake(version,
         USE_CUDA=USE_CUDA,
         USE_DISTRIBUTED=USE_DISTRIBUTED,
         USE_FBGEMM=not (check_env_flag('NO_FBGEMM') or check_negative_env_flag('USE_FBGEMM')),
+        NAMEDTENSOR_ENABLED=(check_env_flag('USE_NAMEDTENSOR') or check_negative_env_flag('NO_NAMEDTENSOR')),
Why do you need to check the env flag here when other examples (e.g. USE_MKLDNN) don't need to?
@gchanan the other examples do, somewhere down the line. Here is where the USE_MKLDNN flag is defined, for example:
pytorch/tools/setup_helpers/env.py
Line 63 in bc53984
USE_MKLDNN = check_env_flag('USE_MKLDNN', 'OFF' if IS_PPC or IS_ARM else 'ON')
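For context, the env-flag helpers these lines rely on behave roughly like the sketch below. This is a simplified reimplementation for illustration, not the exact code in tools/setup_helpers/env.py; the accepted truthy/falsy spellings are assumptions.

```python
import os

def check_env_flag(name, default=''):
    # True when the variable is set to a common "on" spelling
    # (simplified sketch; the real helper may accept other values).
    return os.getenv(name, default).upper() in ['ON', '1', 'YES', 'TRUE', 'Y']

def check_negative_env_flag(name, default=''):
    # True only when the variable is explicitly set to an "off" spelling;
    # an unset variable counts as neither positive nor negative.
    return os.getenv(name, default).upper() in ['OFF', '0', 'NO', 'FALSE', 'N']

# The NAMEDTENSOR_ENABLED expression from the diff above, under this sketch:
os.environ['USE_NAMEDTENSOR'] = '1'
enabled = check_env_flag('USE_NAMEDTENSOR') or check_negative_env_flag('NO_NAMEDTENSOR')
print(enabled)
```

Under this reading, the feature turns on either when `USE_NAMEDTENSOR` is set to a truthy value, or when `NO_NAMEDTENSOR` is explicitly set to a falsy one; leaving both unset keeps it off.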
OK, I don't know why we don't do things in one place, but if it works, it works!
Summary: Pull Request resolved: pytorch/pytorch#20162 ghimport-source-id: 0efcd67f04aa087e1dd5faeee550daa2f13ef1a5 Reviewed By: gchanan Differential Revision: D15278211 Pulled By: zou3519 fbshipit-source-id: 6fee981915d83e820fe8b50a8f59da22a428a9bf
Stack from ghstack:

Sets the NAMEDTENSOR_ENABLED macro (in cpp) and tools.setup_helpers.env.NAMEDTENSOR_ENABLED.

Test Plan:
- Compile with USE_NAMEDTENSOR=1. Verify that `torch.__config__.show()` has `-DNAMEDTENSOR_ENABLED` in the CXX flags.
- Compile without the flag. Verify that `-DNAMEDTENSOR_ENABLED` is not present in `torch.__config__.show()`.

gh-metadata: pytorch pytorch 20162 gh/zou3519/32/head
Differential Revision: D15278211