
Build fails for torch2trt on JetPack 6.2.1 - ImportError: libnvdla_compiler.so not found #1218

@PPPPatrick0

Description


Search before asking

  • I have searched the jetson-containers issues and found no similar feature requests.

jetson-containers Component

No response

Bug

Hello,
I am encountering a build failure when attempting to run the xtts container on a Jetson AGX Orin with JetPack 6.2.1. The build consistently fails at the step that installs the torch2trt dependency.
The core error is an ImportError: libnvdla_compiler.so cannot be found inside the Docker build environment, even though the file exists on the host system. The issue persists after updating the repository on July 18, 2025 (Tokyo time).

System Details:

  • Device: NVIDIA Jetson AGX Orin
  • JetPack Version: 6.2.1
  • jetson-containers Branch: master (last updated with git pull on July 18, 2025, Tokyo time)

Steps to Reproduce:

  • Clone the repository: git clone https://github.com/dusty-nv/jetson-containers.git
  • Navigate into the directory: cd jetson-containers
  • Attempt to run the xtts container, which triggers the build process: ./run.sh $(./autotag xtts)
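As a quick check before the full build, the following hypothetical diagnostic (it assumes a Jetson host and the intermediate base image tag that appears in the build log below) shows whether the NVDLA compiler library is visible inside the TensorRT base image when it is started with the NVIDIA runtime:

```
# Hypothetical check -- image tag taken from the build log below.
# With --runtime nvidia, the NVIDIA container runtime should CSV-mount
# host libraries such as libnvdla_compiler.so into the container.
docker run --rm --runtime nvidia xtts:r36.4.tegra-aarch64-cu126-22.04-tensorrt \
    sh -c 'ldconfig -p | grep -i nvdla || echo "libnvdla_compiler.so NOT visible"'
```

If the library is visible at `docker run` time but not during `docker build`, that points at the build not using the NVIDIA runtime.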

Expected Behavior:

The container should build all its dependencies, including torch2trt, and launch successfully.

Actual Behavior:

The build fails during the torch2trt installation. The process exits with a non-zero code, and the final error indicates that the xtts image could not be built.

Error Log:

┌────────────────────────────────────────────────────────────┐
│ > BUILDING  xtts:r36.4.tegra-aarch64-cu126-22.04-torch2trt │
└────────────────────────────────────────────────────────────┘

DOCKER_BUILDKIT=0 docker build --network=host \
  --tag xtts:r36.4.tegra-aarch64-cu126-22.04-torch2trt \
  --file /home/hanasaki/jetson-containers/packages/pytorch/torch2trt/Dockerfile \
  --build-arg BASE_IMAGE=xtts:r36.4.tegra-aarch64-cu126-22.04-tensorrt \
   /home/hanasaki/jetson-containers/packages/pytorch/torch2trt

[20:20:11] [16/17] Building torch2trt (xtts:r36.4.tegra-aarch64-cu126-22.04-torch2trt)                                                                                               15 stages completed in 32m56s 
at 20:20:11 
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
            environment-variable.

Sending build context to Docker daemon  17.92kB
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> ced5020c100e
Step 3/5 : ADD https://api.github.com/repos/NVIDIA-AI-IOT/torch2trt/git/refs/heads/master /tmp/torch2trt_version.json

 ---> e1aff9aabcea
Step 4/5 : COPY install.sh patches/ /tmp/torch2trt/
 ---> 2863cacb37b5
Step 5/5 : RUN bash /tmp/torch2trt/install.sh
 ---> Running in 5809bce39a34
+ cd /opt
+ git clone --depth=1 https://github.com/NVIDIA-AI-IOT/torch2trt
Cloning into 'torch2trt'...
+ cd torch2trt
+ ls -R /tmp/torch2trt
/tmp/torch2trt:
flattener.py
install.sh
+ cp /tmp/torch2trt/flattener.py torch2trt
+ python3 setup.py install --plugins
Traceback (most recent call last):
  File "/opt/torch2trt/setup.py", line 2, in <module>
    import tensorrt
  File "/usr/local/lib/python3.10/dist-packages/tensorrt/__init__.py", line 76, in <module>
    from .tensorrt import *
ImportError: libnvdla_compiler.so: cannot open shared object file: No such file or directory
The command '/bin/sh -c bash /tmp/torch2trt/install.sh' returned a non-zero code: 1
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/hanasaki/jetson-containers/jetson_containers/tag.py", line 58, in <module>
    image = find_container(args.packages[0], prefer_sources=args.prefer, disable_sources=args.disable, user=args.user, quiet=args.quiet)
  File "/home/hanasaki/jetson-containers/jetson_containers/container.py", line 638, in find_container
    return build_container('', package) #, simulate=True)
  File "/home/hanasaki/jetson-containers/jetson_containers/container.py", line 225, in build_container
    status = subprocess.run(cmd.replace(_NEWLINE_, ' '), executable='/bin/bash', shell=True, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'DOCKER_BUILDKIT=0 docker build --network=host   --tag xtts:r36.4.tegra-aarch64-cu126-22.04-torch2trt   --file /home/hanasaki/jetson-containers/packages/pytorch/torch2trt/Dockerfile   --build-arg BASE_IMAGE=xtts:r36.4.tegra-aarch64-cu126-22.04-tensorrt    /home/hanasaki/jetson-containers/packages/pytorch/torch2trt 2>&1 | tee /home/hanasaki/jetson-containers/logs/20250718_194710/build/16o17_xtts_r36.4.tegra-aarch64-cu126-22.04-torch2trt.txt; exit ${PIPESTATUS[0]}' returned non-zero exit status 1.
-- Error:  return code 1
V4L2_DEVICES: 
### DISPLAY environmental variable is already set: ":0"
localuser:root being added to access control list
### ARM64 architecture detected
### Jetson Detected
SYSTEM_ARCH=tegra-aarch64
+ docker run --runtime nvidia --env NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/hanasaki/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/bus/usb -e DISPLAY=:0 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth -e XAUTHORITY=/tmp/.docker.xauth --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-3 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-6 --device /dev/i2c-7 --device /dev/i2c-8 --device /dev/i2c-9 --name jetson_container_20250718_202021
docker: 'docker run' requires at least 1 argument

Usage:  docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

See 'docker run --help' for more information

Additional Diagnostic Information:

I have confirmed that the required library, libnvdla_compiler.so, does exist on the host machine. This strongly suggests that the library is not being mounted or otherwise made available to the Docker build environment when the torch2trt Dockerfile is executed.

Host Command Run:

sudo find / -name libnvdla_compiler.so

Host Command Output:

/usr/lib/aarch64-linux-gnu/nvidia/libnvdla_compiler.so

This appears to be a bug in the build scripts for JetPack 6.2.1. Any guidance on how to resolve this would be greatly appreciated.
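One possible workaround, and the setup step the jetson-containers documentation recommends, is to make the NVIDIA container runtime the default in /etc/docker/daemon.json. `docker build` (with the legacy builder) then also runs under the NVIDIA runtime, which CSV-mounts Jetson host libraries such as libnvdla_compiler.so into build containers. This is a sketch of the expected config (back up the existing file first; the exact contents of your daemon.json may differ):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

After editing, restart the Docker daemon (`sudo systemctl restart docker`) and retry the build. I have not verified whether this resolves the failure on JetPack 6.2.1 specifically.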

Thank you!

Translation generated by Gemini.

Environment

System Details:

  • Device: NVIDIA Jetson AGX Orin
  • JetPack Version: 6.2.1
  • jetson-containers Branch: master (last updated with git pull on July 18, 2025)

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
