
Conversation

@narendasan
Collaborator

…or folks locally

Description

The lock file had a stale version of torch in it, and some symbols had shifted around between versions. This caused the C++ build to use the latest torch while Python pulled an older version, which prevented the module from importing.
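
For illustration, here is a minimal sketch of the kind of mismatch involved. The version range is taken from the setup.py diff in this PR; the check itself is only illustrative, not part of the fix.

# Illustrative only: check whether the torch importable from Python falls inside
# the range the C++ extension expects. The pin ">=2.10.0.dev,<2.11.0" comes from
# setup.py in this PR; the stale lock file let an older torch through instead.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

import torch

expected = SpecifierSet(">=2.10.0.dev,<2.11.0")
installed = Version(torch.__version__)

# prereleases=True so nightly (.dev) builds count as inside the range.
if not expected.contains(installed, prereleases=True):
    raise ImportError(
        f"torch {installed} is outside the range the C++ extension was built against ({expected})"
    )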

@lanluo-nvidia, can we add that job to update the lock file regularly?
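
A minimal sketch of what such a scheduled refresh could run, assuming the lock file is managed with uv (the uv.lock name and the CI wiring are assumptions, not part of this PR):

# Hypothetical scheduled lock-file refresh (e.g. invoked from a CI cron job).
# Assumes the lock file is managed with uv; the name "uv.lock" is an assumption.
import subprocess
import sys

# Re-resolve all pinned versions so stale torch nightlies get picked up.
subprocess.run(["uv", "lock", "--upgrade"], check=True)

# Exit non-zero if the lock file changed, so CI can open a PR with the update.
result = subprocess.run(["git", "diff", "--exit-code", "uv.lock"])
sys.exit(result.returncode)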

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@meta-cla bot added the cla signed label on Dec 5, 2025
@github-actions bot added the component: build system (Issues re: Build system) label on Dec 5, 2025
@narendasan
Collaborator Author

narendasan commented Dec 5, 2025

@lanluo-nvidia Also, I removed the Jetson index since Thor, IIRC, is now on the SBSA depset, so the tegra flag will not work anyway. This makes the deps simpler, but we need to revisit how we tell people to build for Jetson Orin.
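
For context, a minimal sketch of the resulting platform branching (function names follow the setup.py diffs below; the aarch64 detection and the exact bodies are illustrative assumptions, not the actual implementation):

import platform

def get_x86_64_requirements(base_requirements):
    # Mirrors the diff below: standard Linux/Windows adds numpy plus the torch pin.
    requirements = base_requirements + ["numpy"]
    return requirements + ["torch>=2.10.0.dev,<2.11.0"]

def get_sbsa_requirements(base_requirements):
    # Assumption: aarch64 (SBSA, now including Thor) resolves the same torch pin
    # from the standard index, with no Jetson/tegra-specific index needed.
    return base_requirements + ["torch>=2.10.0.dev,<2.11.0"]

def get_requirements():
    base_requirements = ["packaging>=23", "typing-extensions>=4.7.0", "dllist"]
    if platform.machine() == "aarch64":
        return get_sbsa_requirements(base_requirements)
    # Standard Linux and Windows requirements.
    return get_x86_64_requirements(base_requirements)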


@github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/setup.py	2025-12-08 19:05:24.566697+00:00
+++ /home/runner/work/TensorRT/TensorRT/setup.py	2025-12-08 19:06:06.393116+00:00
@@ -742,13 +742,11 @@
        requirements = get_sbsa_requirements()
    else:
        # standard linux and windows requirements
        requirements = base_requirements + ["numpy"]
        if not IS_DLFW_CI:
-            requirements = requirements + [
-                "torch>=2.10.0.dev,<2.11.0"
-            ]
+            requirements = requirements + ["torch>=2.10.0.dev,<2.11.0"]
            if USE_TRT_RTX:
                requirements = requirements + [
                    "tensorrt_rtx>=1.2.0.54",
                ]
            else:

@narendasan force-pushed the push-lonmltykqqxx branch 2 times, most recently from a0bbded to 37016b3 on December 8, 2025 at 20:52

@github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/setup.py	2025-12-08 20:52:24.151308+00:00
+++ /home/runner/work/TensorRT/TensorRT/setup.py	2025-12-08 20:53:14.248219+00:00
@@ -723,10 +723,11 @@
    )

with open(os.path.join(get_root_dir(), "README.md"), "r", encoding="utf-8") as fh:
    long_description = fh.read()

+
def get_jetpack_requirements(base_requirements):
    requirements = base_requirements + ["numpy<2.0.0"]
    if IS_DLFW_CI:
        return requirements
    else:
@@ -743,14 +744,13 @@
        return requirements + [
            "torch>=2.10.0.dev,<2.11.0",
            "tensorrt>=10.14.1,<10.15.0",
        ]

+
def get_x86_64_requirements(base_requirements):
-    requirements = base_requirements + [
-        "numpy"
-    ]
+    requirements = base_requirements + ["numpy"]

    if IS_DLFW_CI:
        return requirements
    else:
        requirements = requirements + ["torch>=2.10.0.dev,<2.11.0"]
@@ -782,10 +782,11 @@
            else:
                raise ValueError(f"Unsupported CUDA version: {cuda_version}")

            return requirements

+
def get_requirements():
    base_requirements = [
        "packaging>=23",
        "typing-extensions>=4.7.0",
        "dllist",
@@ -800,10 +801,11 @@
        requirements = get_sbsa_requirements(base_requirements)
    else:
        # standard linux and windows requirements
        requirements = get_x86_64_requirements(base_requirements)
    return requirements
+

setup(
    name="torch_tensorrt",
    ext_modules=ext_modules,
    version=__version__,

@lanluo-nvidia
Collaborator

LGTM
