Add missing dependencies for Jetson Jetpack 6.2 with CUDA 12.6 #980
base: dev
Conversation
Greptile Summary

This PR re-enables Jetson Jetpack 6.2 (CUDA 12.6) support by uncommenting and updating the jetson-jp6-cuda126 extra.

Key Changes:
- Dependency Configuration (pyproject.toml)
- Lock File (uv.lock)
- Documentation (README.md)

Issues Identified:
- Critical Issues
- Minor Issues

The PR successfully enables Jetson support with proper platform-specific dependency resolution, but the source override design could affect non-Jetson ARM Linux users.

Confidence Score: 2/5

Important Files Changed / File Analysis
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant UV as UV Package Manager
    participant PyProject as pyproject.toml
    participant JetsonIndex as Jetson PyPI Index<br/>(pypi.jetson-ai-lab.io)
    participant StandardPyPI as Standard PyPI<br/>(pypi.org)
    participant Lock as uv.lock

    User->>UV: uv sync --extra jetson-jp6-cuda126
    UV->>PyProject: Read package specifications
    alt On aarch64 Linux Platform
        PyProject->>UV: jetson-jp6-cuda126 extra requires:<br/>torch==2.8.0, torchvision==0.23.0<br/>onnxruntime-gpu==1.23.0, xformers
        PyProject->>UV: Source overrides point to Jetson index<br/>for aarch64 Linux
        UV->>JetsonIndex: Resolve torch 2.8.0
        JetsonIndex-->>UV: torch-2.8.0-cp310-linux_aarch64.whl
        UV->>JetsonIndex: Resolve torchvision 0.23.0
        JetsonIndex-->>UV: torchvision-0.23.0-cp310-linux_aarch64.whl
        UV->>JetsonIndex: Resolve onnxruntime-gpu 1.23.0
        JetsonIndex-->>UV: onnxruntime_gpu-1.23.0-cp310-linux_aarch64.whl
        UV->>JetsonIndex: Resolve xformers (no version specified)
        JetsonIndex-->>UV: xformers-0.0.32+8ed0992-cp39-abi3-linux_aarch64.whl
        UV->>Lock: Write dual-resolution entries
    else On Other Platforms (x86_64, macOS, Windows)
        PyProject->>UV: jetson-jp6-cuda126 markers don't match
        Note over UV,StandardPyPI: Jetson extra is skipped<br/>Standard cuda/cpu extras used instead
        UV->>StandardPyPI: Resolve torch 2.9.1
        StandardPyPI-->>UV: torch-2.9.1 wheels
        UV->>StandardPyPI: Resolve other dependencies
        UV->>Lock: Write platform-specific entries
    end
    Lock-->>User: Installation complete
```
1 file reviewed, 1 comment
```bash
# CPU only (recommended to attempt first)
pip install -e '.[cpu,dev]'
uv sync --extra cpu --extra dev
```
jeff is owning this, need to revert @jeff-hykin
Just to clarify, I should revert my changes?
```toml
[[tool.uv.index]]
name = "jetson-jp6-cuda126"
url = "https://pypi.jetson-ai-lab.io/jp6/cu126"
```
Idk what uv index does @greptile
Looking at your pyproject.toml changes, a UV index is essentially a custom PyPI-like package repository that UV (the Python package manager) can pull packages from.
In your case, you're adding a custom index called "jetson-jp6-cuda126" that points to https://pypi.jetson-ai-lab.io/jp6/cu126. This is NVIDIA's special package repository that hosts pre-compiled wheels specifically for Jetson devices running JetPack 6.2 with CUDA 12.6.
The reason you need this is because:
- Architecture mismatch: Regular PyPI packages are built for x86_64, but Jetson devices use ARM64 (aarch64) architecture
- CUDA compatibility: The packages need to be compiled against the specific CUDA version (12.6) that comes with JetPack 6.2
- Pre-compiled wheels: Building packages like PyTorch from source on a Jetson would take hours/days, so NVIDIA provides pre-built wheels
When UV sees dependencies like:

```toml
torch = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }
```

it will:
- Check if the current platform matches the marker (ARM64 Linux)
- If yes, fetch torch from the "jetson-jp6-cuda126" index instead of the default PyPI
- Install the Jetson-optimized version with proper CUDA 12.6 support
This is a clean way to have platform-specific dependencies without maintaining separate requirement files.
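To make the marker behavior concrete, here is a minimal standard-library Python sketch of how such a platform marker evaluates (the `marker_matches` helper is illustrative, not part of uv):

```python
import platform
import sys

def marker_matches(machine: str, sys_platform: str) -> bool:
    """Evaluate the equivalent of the marker
    "platform_machine == 'aarch64' and sys_platform == 'linux'"."""
    return machine == "aarch64" and sys_platform == "linux"

# Evaluate for the current interpreter, the way a resolver would:
print(marker_matches(platform.machine(), sys.platform))
```

On a Jetson (aarch64 Linux) this prints True, so uv routes torch to the Jetson index; anywhere else it prints False and the override is ignored.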
Pretty much it allows us to have packages from multiple PyPI indices and define which index to use for a given package. These packages all come from the Jetson AI Lab CUDA 12.6 PyPI index.
…dimos into fix-jetson-wheels
4 files reviewed, 4 comments
```toml
"torch==2.8.0 ; platform_machine == 'aarch64' and sys_platform == 'linux'",
"torchvision==0.23.0 ; platform_machine == 'aarch64' and sys_platform == 'linux'",
"onnxruntime-gpu==1.23.0 ; platform_machine == 'aarch64' and sys_platform == 'linux'",
"xformers ; platform_machine == 'aarch64' and sys_platform == 'linux'",
```
The xformers dependency is missing a version specifier, unlike the other three packages in this extra which have pinned versions (torch==2.8.0, torchvision==0.23.0, onnxruntime-gpu==1.23.0). This inconsistency could lead to unpredictable version resolution.
The lock file shows it resolves to xformers-0.0.32+8ed0992.d20250724, but without an explicit version constraint in pyproject.toml, future lock file regeneration might pull a different version.
Consider pinning to a specific version for consistency and reproducibility:
```diff
- "xformers ; platform_machine == 'aarch64' and sys_platform == 'linux'",
+ "xformers==0.0.32 ; platform_machine == 'aarch64' and sys_platform == 'linux'",
```
Or at minimum, add a version constraint like >=0.0.32 to prevent unexpected downgrades.
```bash
# Jetson Jetpack 6.2 (Python 3.10 only)
uv sync --extra jetson-jp6-cuda126 --extra dev
```
The comment states "(Python 3.10 only)" but the project's pyproject.toml specifies requires-python = ">=3.10", which means Python 3.11, 3.12, and 3.13 are also supported.
This creates confusion about whether the jetson-jp6-cuda126 extra truly requires Python 3.10 specifically, or if it works with higher versions.
Looking at the uv.lock file, the Jetson wheels are resolved for multiple Python versions (3.10, 3.11, 3.12, 3.13), confirming this isn't Python 3.10 only.
Consider updating to reflect the actual constraint:
```diff
- # Jetson Jetpack 6.2 (Python 3.10 only)
- uv sync --extra jetson-jp6-cuda126 --extra dev
+ # Jetson Jetpack 6.2 with CUDA 12.6 (Python >=3.10)
+ uv sync --extra jetson-jp6-cuda126 --extra dev
```
```diff
  "nvidia-nvimgcodec-cu12[all]",
- "onnxruntime-gpu>=1.17.1", # Only versions supporting both cuda11 and cuda12
+ # Exclude on aarch64 Linux where jetson-jp6-cuda126 extra provides Jetson-specific wheels
+ "onnxruntime-gpu>=1.17.1 ; not (platform_machine == 'aarch64' and sys_platform == 'linux')",
```
The marker not (platform_machine == 'aarch64' and sys_platform == 'linux') is used here for excluding aarch64 Linux, but in the uv.lock file, the complementary marker is expressed as platform_machine != 'aarch64' or sys_platform != 'linux'.
While these are logically equivalent by De Morgan's law, this inconsistency between pyproject.toml and what gets resolved in uv.lock could cause confusion during debugging.
For consistency with the source overrides at lines 384-387 which use the positive marker form, consider rewriting this to match:
```diff
- "onnxruntime-gpu>=1.17.1 ; not (platform_machine == 'aarch64' and sys_platform == 'linux')",
+ "onnxruntime-gpu>=1.17.1 ; platform_machine != 'aarch64' or sys_platform != 'linux'",
```
This makes the codebase more consistent and easier to understand.
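As a quick sanity check on that equivalence claim, the two marker forms can be compared exhaustively, with plain Python booleans standing in for the two marker clauses (function names here are illustrative only):

```python
from itertools import product

def negated_form(machine_is_aarch64: bool, platform_is_linux: bool) -> bool:
    # not (platform_machine == 'aarch64' and sys_platform == 'linux')
    return not (machine_is_aarch64 and platform_is_linux)

def demorgan_form(machine_is_aarch64: bool, platform_is_linux: bool) -> bool:
    # platform_machine != 'aarch64' or sys_platform != 'linux'
    return (not machine_is_aarch64) or (not platform_is_linux)

# All four truth assignments agree, confirming the two spellings are
# equivalent by De Morgan's law.
for m, s in product([True, False], repeat=2):
    assert negated_form(m, s) == demorgan_form(m, s)
print("equivalent")
```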
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
```toml
torch = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }
torchvision = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }
onnxruntime-gpu = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }
xformers = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }

[[tool.uv.index]]
name = "jetson-jp6-cuda126"
url = "https://pypi.jetson-ai-lab.io/jp6/cu126"
explicit = true
```
The source overrides apply globally to ALL aarch64 Linux installations, not just when the jetson-jp6-cuda126 extra is used. This means:
- If a user on aarch64 Linux installs without the jetson extra (e.g., uv sync --extra cpu), they won't get torch/torchvision, but if they manually install torch from PyPI, uv will try to use the Jetson index due to these overrides
- These packages will ALWAYS come from the Jetson index on aarch64 Linux, even for non-Jetson ARM devices
This could cause issues for users on non-Jetson aarch64 Linux systems (like Apple Silicon running Linux VMs, AWS Graviton, or other ARM servers) who would unexpectedly get Jetson-optimized wheels that may not work on their hardware.
Consider whether the marker should be more restrictive to only apply when the jetson extra is actually requested, or document this behavior clearly. The current design means the Jetson index becomes the default source for these packages on ALL aarch64 Linux systems.
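Since the overrides key only on the aarch64/Linux pair, a setup script that wants to avoid pulling Jetson wheels onto other ARM machines could gate the Jetson extra behind an actual hardware check. A rough sketch; the probed file paths and model strings are assumptions about JetPack systems, not anything defined in this PR:

```python
from pathlib import Path

def model_is_jetson(model_text: str) -> bool:
    # Device-tree model strings on Jetson boards typically look like
    # "NVIDIA Jetson Orin Nano Developer Kit" (assumption).
    return "jetson" in model_text.lower()

def looks_like_jetson() -> bool:
    """Heuristic check for a Jetson device: JetPack installs
    /etc/nv_tegra_release, and /proc/device-tree/model names the board
    (both assumptions about the platform)."""
    if Path("/etc/nv_tegra_release").exists():
        return True
    try:
        return model_is_jetson(Path("/proc/device-tree/model").read_text(errors="ignore"))
    except OSError:
        # Path missing or unreadable: assume a non-Jetson system.
        return False
```

A wrapper script could then choose between `--extra jetson-jp6-cuda126` and `--extra cpu` based on this check, rather than relying on the architecture marker alone.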
This pull request introduces the missing dependencies required to run dimensional on NVIDIA Jetson devices that support JetPack 6.2 and CUDA 12.6, including the NVIDIA Jetson Orin Nano and its successors.