chore: Update lock file, was getting stuck and causing build issues for folks locally #3948
base: main
Conversation
@lanluo-nvidia Also, I removed the Jetson index since Thor, IIRC, is now on the SBSA depset, so that tegra flag will not work anyway. It makes the deps simpler, but we need to revisit how we tell people to build for Jetson Orin.
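For context, a minimal sketch of how a packaging script might tell a Jetson (tegra) device apart from an SBSA aarch64 host. The `/etc/nv_tegra_release` check is a common JetPack/L4T convention and an assumption here; it is not necessarily what this repo's setup.py does.

```python
# Hedged sketch: distinguishing Jetson (tegra) from SBSA aarch64.
# The /etc/nv_tegra_release marker is shipped by JetPack/L4T; using it
# here is an illustrative assumption, not this repo's actual logic.
import os
import platform


def is_tegra_platform() -> bool:
    return platform.machine() == "aarch64" and os.path.exists(
        "/etc/nv_tegra_release"
    )


def pick_depset() -> str:
    if platform.machine() != "aarch64":
        return "x86_64"
    # With Thor on the SBSA depset, an aarch64 host without the tegra
    # marker would fall through to the SBSA requirements.
    return "jetpack" if is_tegra_platform() else "sbsa"
```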
Force-pushed from 9fbd94c to 578ff09
Force-pushed from 578ff09 to 2dcf9f5
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/setup.py 2025-12-08 19:05:24.566697+00:00
+++ /home/runner/work/TensorRT/TensorRT/setup.py 2025-12-08 19:06:06.393116+00:00
@@ -742,13 +742,11 @@
requirements = get_sbsa_requirements()
else:
# standard linux and windows requirements
requirements = base_requirements + ["numpy"]
if not IS_DLFW_CI:
- requirements = requirements + [
- "torch>=2.10.0.dev,<2.11.0"
- ]
+ requirements = requirements + ["torch>=2.10.0.dev,<2.11.0"]
if USE_TRT_RTX:
requirements = requirements + [
"tensorrt_rtx>=1.2.0.54",
]
else:
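The reformatting in the lint diff above (and the similar one below) matches black's output. A hedged way to reproduce the check locally, assuming black is the formatter behind this CI job (an assumption; the repo may drive it through pre-commit or another wrapper):

```python
# Hedged sketch: run black in check mode against setup.py to get the same
# kind of unified diff the CI comment shows. Assumes black is installed.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "black", "--check", "--diff", "setup.py"],
    capture_output=True,
    text=True,
)
print(result.stdout)
# A non-zero exit code means the file would be reformatted; rerun without
# --check/--diff to apply the changes in place.
sys.exit(result.returncode)
```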
Force-pushed from a0bbded to 37016b3
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/setup.py 2025-12-08 20:52:24.151308+00:00
+++ /home/runner/work/TensorRT/TensorRT/setup.py 2025-12-08 20:53:14.248219+00:00
@@ -723,10 +723,11 @@
)
with open(os.path.join(get_root_dir(), "README.md"), "r", encoding="utf-8") as fh:
long_description = fh.read()
+
def get_jetpack_requirements(base_requirements):
requirements = base_requirements + ["numpy<2.0.0"]
if IS_DLFW_CI:
return requirements
else:
@@ -743,14 +744,13 @@
return requirements + [
"torch>=2.10.0.dev,<2.11.0",
"tensorrt>=10.14.1,<10.15.0",
]
+
def get_x86_64_requirements(base_requirements):
- requirements = base_requirements + [
- "numpy"
- ]
+ requirements = base_requirements + ["numpy"]
if IS_DLFW_CI:
return requirements
else:
requirements = requirements + ["torch>=2.10.0.dev,<2.11.0"]
@@ -782,10 +782,11 @@
else:
raise ValueError(f"Unsupported CUDA version: {cuda_version}")
return requirements
+
def get_requirements():
base_requirements = [
"packaging>=23",
"typing-extensions>=4.7.0",
"dllist",
@@ -800,10 +801,11 @@
requirements = get_sbsa_requirements(base_requirements)
else:
# standard linux and windows requirements
requirements = get_x86_64_requirements(base_requirements)
return requirements
+
setup(
name="torch_tensorrt",
ext_modules=ext_modules,
version=__version__,
LGTM
Force-pushed from 37016b3 to fcbdd8d
Description
The lock file had pinned a stale version of torch, and some symbols had shifted between versions. As a result, the C++ build used the latest torch while Python pulled the older pinned version, which broke the import.
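A minimal sketch of how such a mismatch could be caught before the import fails, using the pinned range from the diffs above (torch>=2.10.0.dev,<2.11.0) and the `packaging` library already listed in base_requirements; the script itself is illustrative, not part of the repo:

```python
# Hedged sketch: verify that the torch pip actually installed falls inside
# the range setup.py pins. A stale lock file would surface here instead of
# as a missing-symbol error at import time.
from importlib.metadata import version

from packaging.specifiers import SpecifierSet
from packaging.version import Version

PINNED = SpecifierSet(">=2.10.0.dev,<2.11.0")

installed = Version(version("torch"))
if installed not in PINNED:
    raise RuntimeError(
        f"torch {installed} does not satisfy {PINNED}; "
        "the lock file and the C++ build are likely out of sync."
    )
print(f"torch {installed} satisfies {PINNED}")
```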
@lanluo-nvidia can we add that job to update the lock file regularly?
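A rough sketch of what such a scheduled refresh could run. The `uv lock --upgrade` command and the `uv.lock` filename are assumptions about which tool owns the lock file here; substitute the repo's actual command, and let the surrounding CI job handle committing and opening a PR.

```python
# Hedged sketch for a periodically-run lock file refresh. "uv lock --upgrade"
# and "uv.lock" are assumptions; swap in the repo's actual lock tooling.
import subprocess


def refresh_lock_file() -> bool:
    # Re-resolve dependencies so pins like torch do not go stale.
    subprocess.run(["uv", "lock", "--upgrade"], check=True)
    # git diff --quiet exits non-zero when the lock file changed.
    diff = subprocess.run(["git", "diff", "--quiet", "--", "uv.lock"])
    return diff.returncode != 0


if __name__ == "__main__":
    changed = refresh_lock_file()
    print("lock file updated" if changed else "lock file already current")
```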
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: