
Nvblox with realsense or any depth camera on Jetson Xavier AGX #23

Closed
Aki1608 opened this issue Jul 5, 2022 · 18 comments
@Aki1608

Aki1608 commented Jul 5, 2022

This is more of a question than an issue. I just wanted to know if the nvblox algorithm has been tested on an AGX or any Xavier with a RealSense or any other camera to reconstruct a 3D environment in real life, or has it only been tested in simulation? I am asking because when I try to reconstruct a 3D environment, the reconstruction is not very accurate (in comparison to the Isaac Sim result). Also, if you have tested it in real life, then I will try playing with the different parameters to get better output.

@Aki1608 Aki1608 changed the title Nvblox with realsense camera on Jetson Xavier AGX Nvblox with realsense or any depth camera on Jetson Xavier AGX Jul 5, 2022
@helenol
Collaborator

helenol commented Jul 5, 2022

We've tested it in real life on exactly that set-up. :) Unfortunately real sensors aren't as good as Isaac Sim, so data quality is quite a limitation there. Some settings in nvblox that might help:

    tsdf_integrator_max_integration_distance_m: 4.0
    tsdf_integrator_max_weight: 20.0

Setting the max integration distance shorter helps because, for stereo cameras like the RealSense, the depth error grows quickly with distance from the camera; a lower max weight helps the reconstruction deal with non-static scenes.
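For reference, these settings would typically go in a ROS2 parameter file for the nvblox node. A minimal sketch, assuming a node named nvblox_node (the node name and file layout here are illustrative, not the exact repo layout):

```yaml
# Illustrative nvblox parameter-file fragment; the node name
# depends on your launch configuration.
nvblox_node:
  ros__parameters:
    tsdf_integrator_max_integration_distance_m: 4.0  # drop noisy far-range stereo depth
    tsdf_integrator_max_weight: 20.0                 # let stale observations decay faster
```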

Another thing that's quite important is that the projector should be on for the RealSense, which should greatly increase the quality of the depth cam inputs. The quality of your poses is also important. Hope that helps!

@naitiknakrani

@helenol Do you have any comparative study or visuals showing how much the 3D reconstructed scene deviates/degrades using physical cameras compared to the simulated 3D reconstruction?

Also, which manufacturers' cameras have you tested? Which worked reasonably well?

@helenol
Collaborator

helenol commented Jul 5, 2022

@naitiknakrani It depends on the camera used, the quality of the depth, the structure of the scene, and a million other factors. The sim input data is perfect, so it's an upper bound on real-life performance.

We use the RealSense D455, which we quite like due to the wide FoV. The D435 also works well. We've also used the ZED2 camera but found that the low-texture performance (i.e., on flat white walls) wasn't as good as the RealSense's, partly due to the lack of a texture projector.

@naitiknakrani

Thanks for the update.

@Aki1608
Author

Aki1608 commented Jul 6, 2022

@helenol Thanks that was indeed helpful.

I have one question though. You said that the projector should be on.

Another thing that's quite important is that the projector should be on for the RealSense, which should greatly increase the quality of the depth cam inputs.

But when we run nvblox, it seems that it is not on. Can you tell us which parameter we have to use to turn it on, and where we have to add that parameter?

@alexmillane
Collaborator

That depends on how you run the RealSense. Which command do you use to launch the RealSense, and which version of ROS2 are you running?

@naitiknakrani

@alexmillane We are facing a few challenges working with the RealSense. We are running it on ROS2 Foxy, and we are invoking the realsense2_camera node by adding it to the nvblox_nav2 launch files https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/tree/main/nvblox_nav2/launch. We have modified carter_sim.launch.py with the necessary parameters.

We have applied one parameter to turn the projector on, depth_module.emitter_enabled: true; however, that ruins the on-chip calibration done before launching the RealSense camera node. After calibration the 3D reconstruction becomes messy, and odometry degrades heavily.
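For reference, a minimal parameter-file sketch for enabling the projector on the realsense2_camera node; the exact parameter name varies by driver release (older releases used a plain emitter_enabled integer), so treat this as an assumption to check against your realsense2_camera version:

```yaml
# Illustrative realsense2_camera parameter fragment; the node name and
# the exact parameter spelling depend on the driver release.
camera:
  ros__parameters:
    depth_module.emitter_enabled: true  # turn the IR projector on
```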

@Aki1608
Author

Aki1608 commented Jul 21, 2022

Hi @helenol,

Was the new version of the Docker container (ROS2 Humble) also tested with the RealSense camera? Were you able to access the RealSense camera inside the container? We were able to install the librealsense SDK and run realsense-viewer, but it can't find any device. When we open realsense-viewer, it shows these errors:

    (handle-libusb.h:51) failed to open usb interface: 0, error: RS2_USB_STATUS_NO_DEVICE
    (sensor.cpp:572) acquire_power failed: failed to set power state
    (rs.cpp:310) null pointer passed for argument "device"
    (rs.cpp:2691) Couldn't refresh devices - failed to set power state

I also tried copying 99-realsense-libusb.rules to /etc/udev/rules.d and running sudo udevadm control --reload-rules, but it shows running in chroot, ignoring request.

@Aki1608
Author

Aki1608 commented Jul 21, 2022

Hi @helenol, I solved the issue with the RealSense camera inside the Humble container. We just ran sudo udevadm control --reload-rules outside the container, and now the camera works inside the Docker container as well.
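For reference, a sketch of the host-side steps (the rules-file path assumes a librealsense source checkout, which is an assumption; udevadm must run on the host because it is ignored inside a chroot/container):

```shell
# Run on the HOST, not inside the container.
sudo cp librealsense/config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
# Then replug the camera and restart the container with USB device access.
```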

@alexmillane
Collaborator

Hi @Aki1608 and @naitiknakrani. Thank you for the updates. I can confirm that we are using the realsense with quite a bit of success. We have a release coming in about a month, in which we'll include some examples and documentation about how to get it going. I guess waiting a month isn't optimal, but I hope it will be helpful when we're able to release it.

@naitiknakrani

Thanks @alexmillane and @helenol for the response. We will be happy to see your test results with realsense.

@hemalshahNV hemalshahNV added the verify to close Waiting on confirm issue is resolved label Aug 1, 2022
@AndreV84

@helenol @hemalshahNV
so is there any solution to get inputs from the ZED visualized with color information on an Orin devkit? Thanks

@naitiknakrani

@alexmillane Hi, can you please share your results and findings for nvblox with the RealSense? We are doing rigorous testing with the RealSense, so we would like to compare our results against your benchmarks.

Also, there is one important point I want to ask about. For nvblox, a pose estimate (pose) is an input; however, it is not published by anyone, and even without using pose as an input, nvblox works (we have tested this in Isaac Sim). So what is the intent of using the pose as an input to the nvblox node? If it is important, which source should it come from (i.e., odom, vslam, IMU)?

Please share your thoughts on it.

@alexmillane
Collaborator

Please see the example combining the RealSense, vslam, and nvblox, which is now available.

@naitiknakrani

naitiknakrani commented Sep 5, 2022

@alexmillane Thanks for the update. I have one small question. While using the Intel RealSense, was its on-chip calibration performed? We found that every time the RealSense camera is plugged in, on-chip calibration is required, or else the performance is very poor. Was it the same at your end?

@naitiknakrani

What is the purpose of creating a realsense splitter node?

@alexmillane
Collaborator

We did not have the same experience with the calibration. We've never had to re-calibrate it from the factory settings. That seems quite strange.

Regarding the splitter. We configure the realsense to trigger the projector on/off on alternating frames (the projector is on, off, on, off, etc). Frames with the projector off are required for vslam, while the depth frames with the projector on are required for nvblox. The splitter node subscribes to the raw on/off image streams and splits the images appropriately: infra topics have projector off, and the depth topic has the projector on.

I hope that's helpful.
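The routing logic described above can be sketched in plain Python (a hypothetical standalone illustration, not the actual ROS2 splitter node; the real node keys off per-frame emitter metadata from the realsense2_camera driver and republishes to separate topics):

```python
# Sketch of the splitter's routing logic: the realsense alternates the
# projector (emitter) on/off between frames, and each frame is routed
# by its emitter state. Hypothetical illustration, not the real node.

def split_frames(frames):
    """Split (image, emitter_on) pairs into two streams.

    Frames with the emitter OFF feed vslam (clean infrared images);
    frames with the emitter ON feed nvblox (better depth quality).
    """
    for_vslam = [img for img, emitter_on in frames if not emitter_on]
    for_nvblox = [img for img, emitter_on in frames if emitter_on]
    return for_vslam, for_nvblox

# The emitter toggles on alternating frames: on, off, on, off, ...
frames = [("f0", True), ("f1", False), ("f2", True), ("f3", False)]
vslam, nvblox = split_frames(frames)
print(vslam)   # ['f1', 'f3']
print(nvblox)  # ['f0', 'f2']
```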

@naitiknakrani

@alexmillane Thank you very much for all the information. This splitter node functionality is essential, because the librealsense SDK v2.50 or greater shuts off the projector while doing calibration. That is an official response from Intel. Kindly refer to IntelRealSense/librealsense#10638 for the details.

But anyway thanks for help.
