
Conversation

@Kaweees
Member

@Kaweees Kaweees commented Jan 12, 2026

This pull request introduces the missing dependencies required to run dimensional on NVIDIA Jetson devices that support JetPack 6.2 and CUDA 12.6, including the NVIDIA Jetson Orin Nano and its successors.

@Kaweees Kaweees requested a review from a team January 12, 2026 06:09
@greptile-apps

greptile-apps bot commented Jan 12, 2026

Greptile Overview

Greptile Summary

This PR re-enables Jetson JetPack 6.2 (CUDA 12.6) support by uncommenting and updating the jetson-jp6-cuda126 optional dependency group that was previously disabled due to 404 errors.

Key Changes

Dependency Configuration (pyproject.toml):

  • Uncommented jetson-jp6-cuda126 extra with pinned versions: torch==2.8.0, torchvision==0.23.0, onnxruntime-gpu==1.23.0, and unpinned xformers
  • Added source overrides in [tool.uv.sources] to redirect these packages to the Jetson index for aarch64 Linux systems
  • Excluded onnxruntime-gpu and xformers from the cuda extra on aarch64 Linux to avoid conflicts
  • Configured explicit Jetson PyPI index at https://pypi.jetson-ai-lab.io/jp6/cu126

Lock File (uv.lock):

  • Dual package resolution: Jetson-specific wheels for aarch64 Linux, standard PyPI wheels for other platforms
  • Resolution markers reordered to prioritize aarch64 Linux variants

Documentation (README.md):

  • Added installation command: uv sync --extra jetson-jp6-cuda126 --extra dev
  • Changed from pip install to uv sync commands for consistency

Issues Identified

Critical Issues:

  1. Missing xformers version specification - Unlike other Jetson dependencies which are pinned, xformers has no version constraint, risking unpredictable resolution
  2. Overly broad source overrides - The source overrides apply to ALL aarch64 Linux systems (not just Jetson devices), meaning users on AWS Graviton, Apple Silicon Linux VMs, or other ARM servers will unexpectedly get Jetson-optimized wheels that may not work

Minor Issues:

  1. Misleading Python version documentation - README states "Python 3.10 only" but the project supports Python >=3.10
  2. Marker syntax inconsistency - Different negative marker forms used between pyproject.toml and what resolves in uv.lock

The PR successfully enables Jetson support with proper platform-specific dependency resolution, but the source override design could affect non-Jetson ARM Linux users.

Confidence Score: 2/5

  • This PR has significant architectural concerns that could affect non-Jetson ARM Linux users
  • Score reflects critical issues: (1) missing xformers version specification creates unpredictability, (2) overly broad source overrides that apply to ALL aarch64 Linux systems (not just Jetson devices) could cause installation failures on AWS Graviton, Apple Silicon Linux VMs, and other ARM servers. While the Jetson-specific functionality appears correct, the global nature of the source overrides is a design flaw that needs addressing before merge.
  • Pay close attention to pyproject.toml lines 384-392 (source overrides) and line 266 (xformers version)

Important Files Changed

File Analysis

| Filename | Score | Overview |
| --- | --- | --- |
| README.md | 4/5 | Updated installation instructions for Jetson support; minor issue with Python version constraint documentation |
| pyproject.toml | 2/5 | Re-enabled jetson-jp6-cuda126 extra with pinned versions; critical issues with missing xformers version and overly broad source overrides affecting all aarch64 Linux systems |
| uv.lock | 4/5 | Lock file updated with dual resolution for Jetson and standard PyPI packages; resolution markers properly handle platform-specific dependencies |

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant UV as UV Package Manager
    participant PyProject as pyproject.toml
    participant JetsonIndex as Jetson PyPI Index<br/>(pypi.jetson-ai-lab.io)
    participant StandardPyPI as Standard PyPI<br/>(pypi.org)
    participant Lock as uv.lock

    User->>UV: uv sync --extra jetson-jp6-cuda126
    UV->>PyProject: Read package specifications

    alt On aarch64 Linux Platform
        PyProject->>UV: jetson-jp6-cuda126 extra requires:<br/>torch==2.8.0, torchvision==0.23.0<br/>onnxruntime-gpu==1.23.0, xformers
        PyProject->>UV: Source overrides point to Jetson index<br/>for aarch64 Linux
        UV->>JetsonIndex: Resolve torch 2.8.0
        JetsonIndex-->>UV: torch-2.8.0-cp310-linux_aarch64.whl
        UV->>JetsonIndex: Resolve torchvision 0.23.0
        JetsonIndex-->>UV: torchvision-0.23.0-cp310-linux_aarch64.whl
        UV->>JetsonIndex: Resolve onnxruntime-gpu 1.23.0
        JetsonIndex-->>UV: onnxruntime_gpu-1.23.0-cp310-linux_aarch64.whl
        UV->>JetsonIndex: Resolve xformers (no version specified)
        JetsonIndex-->>UV: xformers-0.0.32+8ed0992-cp39-abi3-linux_aarch64.whl
        UV->>Lock: Write dual-resolution entries
    else On Other Platforms (x86_64, macOS, Windows)
        PyProject->>UV: jetson-jp6-cuda126 markers don't match
        Note over UV,StandardPyPI: Jetson extra is skipped<br/>Standard cuda/cpu extras used instead
        UV->>StandardPyPI: Resolve torch 2.9.1
        StandardPyPI-->>UV: torch-2.9.1 wheels
        UV->>StandardPyPI: Resolve other dependencies
        UV->>Lock: Write platform-specific entries
    end

    Lock-->>User: Installation complete
```


@greptile-apps greptile-apps bot left a comment


1 file reviewed, 1 comment

Edit Code Review Agent Settings | Greptile

@Kaweees Kaweees self-assigned this Jan 12, 2026
```bash
# CPU only (recommended to attempt first)
pip install -e '.[cpu,dev]'
uv sync --extra cpu --extra dev
```
Contributor

jeff owning this need to revert @jeff-hykin

Member Author

Just to clarify, I should revert my changes?


```toml
[[tool.uv.index]]
name = "jetson-jp6-cuda126"
url = "https://pypi.jetson-ai-lab.io/jp6/cu126"
```
Contributor

Idk what uv index does @greptile


Looking at your pyproject.toml changes, a UV index is essentially a custom PyPI-like package repository that UV (the Python package manager) can pull packages from.

In your case, you're adding a custom index called "jetson-jp6-cuda126" that points to https://pypi.jetson-ai-lab.io/jp6/cu126. This is NVIDIA's special package repository that hosts pre-compiled wheels specifically for Jetson devices running JetPack 6.2 with CUDA 12.6.

The reason you need this is because:

  1. Architecture mismatch: most PyPI wheels for packages like torch are built for x86_64, while Jetson devices use the ARM64 (aarch64) architecture
  2. CUDA compatibility: The packages need to be compiled against the specific CUDA version (12.6) that comes with JetPack 6.2
  3. Pre-compiled wheels: Building packages like PyTorch from source on a Jetson would take hours/days, so NVIDIA provides pre-built wheels

When UV sees dependencies like:

```toml
torch = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }
```

It will:

  1. Check if the current platform matches the marker (ARM64 Linux)
  2. If yes, fetch torch from the "jetson-jp6-cuda126" index instead of the default PyPI
  3. Install the Jetson-optimized version with proper CUDA 12.6 support

This is a clean way to have platform-specific dependencies without maintaining separate requirement files.

Member Author

@Kaweees Kaweees Jan 12, 2026

Pretty much: it allows us to pull packages from multiple PyPI indices and define which index to use for a given package. These packages all come from the Jetson AI Lab CUDA 12.6 PyPI index.

@Kaweees
Member Author

Kaweees commented Jan 13, 2026

@greptile

@greptile-apps greptile-apps bot left a comment

4 files reviewed, 4 comments


```toml
"torch==2.8.0 ; platform_machine == 'aarch64' and sys_platform == 'linux'",
"torchvision==0.23.0 ; platform_machine == 'aarch64' and sys_platform == 'linux'",
"onnxruntime-gpu==1.23.0 ; platform_machine == 'aarch64' and sys_platform == 'linux'",
"xformers ; platform_machine == 'aarch64' and sys_platform == 'linux'",
```

The xformers dependency is missing a version specifier, unlike the other three packages in this extra which have pinned versions (torch==2.8.0, torchvision==0.23.0, onnxruntime-gpu==1.23.0). This inconsistency could lead to unpredictable version resolution.

The lock file shows it resolves to xformers-0.0.32+8ed0992.d20250724, but without an explicit version constraint in pyproject.toml, future lock file regeneration might pull a different version.

Consider pinning to a specific version for consistency and reproducibility:

Suggested change:

```diff
- "xformers ; platform_machine == 'aarch64' and sys_platform == 'linux'",
+ "xformers==0.0.32 ; platform_machine == 'aarch64' and sys_platform == 'linux'",
```

Or at minimum, add a version constraint like >=0.0.32 to prevent unexpected downgrades.

Comment on lines +107 to +108
```bash
# Jetson Jetpack 6.2 (Python 3.10 only)
uv sync --extra jetson-jp6-cuda126 --extra dev
```

The comment states "(Python 3.10 only)" but the project's pyproject.toml specifies requires-python = ">=3.10", which means Python 3.11, 3.12, and 3.13 are also supported.

This creates confusion about whether the jetson-jp6-cuda126 extra truly requires Python 3.10 specifically, or if it works with higher versions.

Looking at the uv.lock file, the Jetson wheels are resolved for multiple Python versions (3.10, 3.11, 3.12, 3.13), confirming this isn't Python 3.10 only.

Consider updating to reflect the actual constraint:

Suggested change:

```diff
- # Jetson Jetpack 6.2 (Python 3.10 only)
+ # Jetson Jetpack 6.2 with CUDA 12.6 (Python >=3.10)
  uv sync --extra jetson-jp6-cuda126 --extra dev
```

```toml
"nvidia-nvimgcodec-cu12[all]",
"onnxruntime-gpu>=1.17.1", # Only versions supporting both cuda11 and cuda12
# Exclude on aarch64 Linux where jetson-jp6-cuda126 extra provides Jetson-specific wheels
"onnxruntime-gpu>=1.17.1 ; not (platform_machine == 'aarch64' and sys_platform == 'linux')",
```

The marker not (platform_machine == 'aarch64' and sys_platform == 'linux') is used here for excluding aarch64 Linux, but in the uv.lock file, the complementary marker is expressed as platform_machine != 'aarch64' or sys_platform != 'linux'.

While these are logically equivalent by De Morgan's law, this inconsistency between pyproject.toml and what gets resolved in uv.lock could cause confusion during debugging.
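The equivalence is easy to verify mechanically. A quick sketch using plain Python booleans to stand in for the two marker conditions (this models the marker logic only, not uv's actual evaluator):

```python
# Truth-table check that the two environment-marker forms are equivalent:
#   pyproject.toml form:  not (A and B)
#   uv.lock form:         (not A) or (not B)
# where A = platform_machine == 'aarch64', B = sys_platform == 'linux'.
from itertools import product

for is_aarch64, is_linux in product([False, True], repeat=2):
    pyproject_form = not (is_aarch64 and is_linux)
    lock_form = (not is_aarch64) or (not is_linux)
    assert pyproject_form == lock_form  # holds for all four combinations
print("equivalent")
```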

For consistency with the source overrides at lines 384-387 which use the positive marker form, consider rewriting this to match:

Suggested change:

```diff
- "onnxruntime-gpu>=1.17.1 ; not (platform_machine == 'aarch64' and sys_platform == 'linux')",
+ "onnxruntime-gpu>=1.17.1 ; platform_machine != 'aarch64' or sys_platform != 'linux'",
```

This makes the codebase more consistent and easier to understand.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Comment on lines +384 to +392
```toml
torch = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }
torchvision = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }
onnxruntime-gpu = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }
xformers = { index = "jetson-jp6-cuda126", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" }

[[tool.uv.index]]
name = "jetson-jp6-cuda126"
url = "https://pypi.jetson-ai-lab.io/jp6/cu126"
explicit = true
```

The source overrides apply globally to ALL aarch64 Linux installations, not just when the jetson-jp6-cuda126 extra is used. This means:

  1. If a user on aarch64 Linux installs without the jetson extra (e.g., uv sync --extra cpu), they won't get torch/torchvision, but if they manually install torch from PyPI, uv will try to use the Jetson index due to these overrides
  2. These packages will ALWAYS come from the Jetson index on aarch64 Linux, even for non-Jetson ARM devices

This could cause issues for users on non-Jetson aarch64 Linux systems (like Apple Silicon running Linux VMs, AWS Graviton, or other ARM servers) who would unexpectedly get Jetson-optimized wheels that may not work on their hardware.

Consider whether the marker should be more restrictive to only apply when the jetson extra is actually requested, or document this behavior clearly. The current design means the Jetson index becomes the default source for these packages on ALL aarch64 Linux systems.
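To make the scope concrete, here is a hypothetical sketch (not uv's real resolution code, just the boolean condition the override encodes) of how the index is chosen for these packages. The only inputs are architecture and OS, so an AWS Graviton host presents exactly the same environment as a Jetson Orin:

```python
# Hypothetical model of the source override for torch / torchvision /
# onnxruntime-gpu / xformers under the current marker. Illustration only.
def index_for_package(platform_machine: str, sys_platform: str) -> str:
    # marker: platform_machine == 'aarch64' and sys_platform == 'linux'
    if platform_machine == "aarch64" and sys_platform == "linux":
        return "jetson-jp6-cuda126"  # https://pypi.jetson-ai-lab.io/jp6/cu126
    return "default-pypi"

# A Jetson Orin and a Graviton server are indistinguishable to the marker:
print(index_for_package("aarch64", "linux"))  # jetson-jp6-cuda126 (Jetson)
print(index_for_package("aarch64", "linux"))  # jetson-jp6-cuda126 (Graviton too)
print(index_for_package("x86_64", "linux"))   # default-pypi
```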
