Hi @dusty-nv, as discussed in #244, I built a container using /scripts/docker_build_ros.sh --distro humble --package desktop --with-pytorch
The container was built and everything works well, except for two aspects:
The launch file is missing the codec command, as highlighted in #244 (Dockerfile for ros humble) - I know the workaround and presume I will have to commit the container to save the changes.
Inferencing using the jetson-inference libraries does not seem to work. I tried the ros_deep_learning command (the same one I had been running successfully without the container) and I get this message: "failed to find model manifest file 'networks/models.json'".
I checked inside the jetson-inference folder (in the container) and tried running the same model (inside the /python/examples folder) with ./imagenet.py /dev/video0 and my USB camera - it gave the same result, and the outcome does not change if I try other models. (Note that this set-up works fine if I use the jetson-inference folder natively.)
Comparing the jetson-inference folder inside the container against the one outside, I find that the "data" folder inside jetson-inference is missing in the container.
An error also comes up if I try to re-download the models inside the container with cd jetson-inference/tools followed by ./download-models.sh.
I then created a /data/networks folder inside the container's jetson-inference folder. This time the models are downloaded, but when I run the inferencing program I get the same error.
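In case it helps with debugging: one quick check (a sketch only; the exact install prefix inside the container may differ) is to compare where a manifest actually exists in the container's filesystem with the path the error message reports:

```shell
# Inside the container: search the filesystem for any model manifest
find / -name models.json 2>/dev/null

# jetson-inference looks for 'networks/models.json' under its data
# directory, so list what is actually there and compare with the
# path in the error message
ls jetson-inference/data/networks/
```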
I am not sure what exactly is going on. Note that aside from inferencing, the rest of the ROS modules, such as RViz, seem to work as expected.
As a workaround, how can I take your existing container on NGC, which, as you mentioned earlier in your response, is based on the ros-base package, and upgrade it to desktop? Please note that I want ROS-humble-desktop with PyTorch and ros_deep_learning.
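For reference, a rough sketch of what I have in mind (the image tag below is illustrative - check NGC for the right tag for your L4T/JetPack version, and this assumes the image has the ROS apt repositories configured; if ROS was built from source in the image, the desktop variant would need to be built the same way):

```shell
# Start from the ROS Humble base container (tag is an assumption)
docker run -it --name humble-desktop dustynv/ros:humble-ros-base-l4t-r35.1.0 bash

# Inside the container: install the desktop metapackage on top of ros-base
apt-get update && apt-get install -y ros-humble-desktop

# Back on the host: commit the modified container as a new image
docker commit humble-desktop my-ros:humble-desktop
```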
Thanks
@Fibo27 can you try cloning jetson-inference to your device (outside of container) and then when you run ros_deep_learning's docker/run.sh, mount the jetson-inference/data dir:
cd ros_deep_learning
docker/run.sh -v /path/to/jetson-inference/data:/jetson-inference/data
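Spelled out end to end, that would look something like the following (the paths are illustrative - adjust them to wherever you cloned the two repos):

```shell
# Clone jetson-inference on the host so its data/ directory (model
# manifests and downloaded networks) lives outside the container
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference/tools
./download-models.sh        # populate jetson-inference/data on the host

# Launch the ros_deep_learning container, bind-mounting the host data
# directory over the path the container expects
cd ~/ros_deep_learning
docker/run.sh -v ~/jetson-inference/data:/jetson-inference/data
```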
@dusty-nv to clarify (I made an error in the reference above):
1. ros_deep_learning with a standalone jetson-inference folder works fine after the changes made per ros_deep_learning#121 (CLI syntax for input-codec=mjpeg). For this installation I used the instructions in your ros_deep_learning repo, so it is not a containerized set-up, and everything works well.
2. Now I am trying to create a container with humble (desktop package), PyTorch and ros_deep_learning, so I am using the script in your jetson-containers repo. As you mentioned in #244 (Dockerfile for ros humble), the loading of the ros_deep_learning module is embedded in your script, so I am not sure how to apply the workaround mentioned above.
3. The installation in 1. above uses ros-foxy, and my intent was to create a container with humble since I wanted to use the latest version of ROS. One workaround could be to install ros-humble as a standard installation and use the ros_deep_learning package that is already working, but I wanted to create a container which I can then port to other hardware set-ups.
Thank you