
Document and support installation of software that depends on librealsense #564

Open
traversaro opened this issue Dec 17, 2020 · 13 comments


@traversaro
Member

Intel RealSense devices are widely used in the robotics world, and there are plans to include them as an add-on on the iCub robot.
However, the necessary software dependencies are not available from the standard repos on Debian/Ubuntu, so it could make sense to have a dedicated profile of optional dependencies (see https://github.com/robotology/robotology-superbuild/blob/master/doc/profiles.md) for them.

@traversaro
Member Author

traversaro commented Dec 17, 2020

The situation seems to be easy on Windows and macOS, where librealsense is available both in vcpkg and in Homebrew.

The situation is trickier on Linux.

So, before deciding on the best strategy, I have two questions for realsense users:

  • Q1: Which strategy are you currently using to install librealsense on Ubuntu? From source, from Intel's official non-ROS repo, or from the ROS repo?
  • Q2: Do you know whether the udev and dkms integrations are actually needed, or could we also think of compiling librealsense from source and not using them?

@Nicogene @xEnVrE @prashanthr05 @lnobile @vvasco (or any other realsense user): if you could answer questions Q1 & Q2, it would be quite useful!

@xEnVrE
Collaborator

xEnVrE commented Dec 17, 2020

Hi @traversaro,

I'll try to answer Q1 according to my experience.

First of all, I tend to compile the library from source. The default build options (which I expect are also the ones used for the provided deb packages) require patches to the kernel driver itself in order to support the Linux kernel, and they tend to produce unreliable behavior: for example, you run the yarpdev for the camera and it shuts down after a while, with librealsense reporting that no frames were available for a predefined maximum number of seconds. In that case you end up disconnecting and reconnecting the camera until it works. I want to stress that this is not an issue on the yarpdev side.

Instead, what I found to work more reliably is to compile from source with the CMake option FORCE_RSUSB_BACKEND=ON. If I am not wrong, the so-called RealSense USB backend is Intel's attempt to support Linux kernels without any patch to the driver: it basically avoids relying on the video4linux2 API on the kernel side and uses libusb + libuvc in user space instead. In the past, I used this build option also to work around other annoying bugs, e.g. images freezing every so often while using the camera. With this option I also avoided the aforementioned issue where the driver receives no frames for more than a predefined time and then shuts itself down.
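For reference, a minimal sketch of the from-source build described above; only FORCE_RSUSB_BACKEND comes from this discussion, the remaining steps are a standard CMake workflow and the exact options may vary between releases:

```sh
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
mkdir build && cd build
# RSUSB backend: libusb/libuvc in user space, no kernel patches needed
cmake .. -DFORCE_RSUSB_BACKEND=ON -DCMAKE_BUILD_TYPE=Release
make -j"$(nproc)"
sudo make install
```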

Some warnings:

@xEnVrE
Collaborator

xEnVrE commented Dec 17, 2020

I wanted to add that compiling from source also enables us to decide two other important things. Since we typically use RGB + depth (and we need to align them), and since the alignment process can be CPU-intensive, librealsense allows using a parallelized version of the alignment process (see the sketch after this list):

  • one uses OpenMP CPU-side (and can be enabled with a specific option in CMake)
  • one uses CUDA GPU-side (and that too can be enabled with a specific option in CMake)

Honestly, I don't know which build options are adopted in the provided deb packages. Since not everybody is using CUDA, and since the OpenMP implementation can lead to very good performance (at the expense of really high CPU usage), I think both are OFF by default. Being able to decide which one to use from the superbuild would be awesome!
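For reference, a sketch of how the two variants could be selected at configure time; the option names BUILD_WITH_OPENMP and BUILD_WITH_CUDA are my recollection of the librealsense CMake cache and should be double-checked against the release in use:

```sh
# OpenMP-parallelized alignment (CPU-side)
cmake .. -DBUILD_WITH_OPENMP=ON

# CUDA-accelerated alignment (GPU-side, requires the CUDA toolkit)
cmake .. -DBUILD_WITH_CUDA=ON
```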

@prashanthr05
Contributor

Q1: Which strategy are you currently using to install librealsense on Ubuntu? From source, from Intel's official non-ROS repo, or from the ROS repo?

On my local machine, I installed it from source. In order to set up the udev rules, I was prompted to run the script ./scripts/setup_udev_rules.sh when trying to use the device with the realsense-viewer. I don't know if I did the dkms part. Checking my /etc/apt/sources.list.d, it looks like I have not added the realsense server to the list of repositories.
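For completeness, running that script from a librealsense source checkout looks roughly like this:

```sh
# From the root of the librealsense source tree: installs the udev rules
# so that the camera can be accessed without root privileges.
./scripts/setup_udev_rules.sh
```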

@Nicogene
Member

Q1
In general, I follow these instructions: https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages
Only when there were special configurations or issues did I compile the SDK from source (same as #564 (comment)).
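For reference, a sketch of the package installation from those instructions, assuming Intel's apt repository has already been registered as described in the linked page (package names are taken from distribution_linux.md and may change between releases):

```sh
sudo apt-get install librealsense2-dkms   # patched kernel modules via DKMS
sudo apt-get install librealsense2-utils  # tools such as realsense-viewer
sudo apt-get install librealsense2-dev    # headers and CMake config files
```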

Q2
I think they are mandatory for using the physical devices, but I have never tried to use a device without installing them.

@pattacini
Member

Q1

If I'm not mistaken, we did follow https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages. Is that correct @vvasco?

Q2

Never been concerned with this. Perhaps @vvasco has some insights on this.

@S-Dafarra
Collaborator

I wanted to add that compiling from source also enables us to decide two other important things. Since we typically use RGB + depth (and we need to align them), and since the alignment process can be CPU-intensive, librealsense allows using a parallelized version of the alignment process:

  • one uses OpenMP CPU-side (and can be enabled with a specific option in CMake)
  • one uses CUDA GPU-side (and that too can be enabled with a specific option in CMake)

Honestly, I don't know which build options are adopted in the provided deb packages. Since not everybody is using CUDA, and since the OpenMP implementation can lead to very good performance (at the expense of really high CPU usage), I think both are OFF by default. Being able to decide which one to use from the superbuild would be awesome!

In your experience, would it be possible to run the expensive computation on a machine different from the one to which the sensor is physically attached? In other words, is it possible to stream the raw USB output to another machine on the network?

@xEnVrE
Collaborator

xEnVrE commented Dec 17, 2020

In your experience, would it be possible to run the expensive computation on a machine different from the one to which the sensor is physically attached? In other words, is it possible to stream the raw USB output to another machine on the network?

I don't know if that is possible, but I think it is not (with the standard pipeline). Anyway, please consider that if you only need the depth image, you can disable the alignment process (this can easily be done from the configuration file of the associated yarpdev).

@xEnVrE
Collaborator

xEnVrE commented Dec 17, 2020

@S-Dafarra starting from 2.34.0 there is also support for the so-called RealSense Device over Ethernet (see https://github.com/IntelRealSense/librealsense/wiki/Release-Notes#release-2340). I think it allows you to compress data on the machine where the camera is physically attached (and where an rs-server is running) and to start the rs2 pipeline on another machine on the same network. In this case there is no support from the yarpdev device though; you would need to write your own code. But if you don't need anything special, the realsense yarpdev is of course the right solution for streaming RGB and depth data from the camera over the network via YARP ports.
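As a rough sketch of that setup (the BUILD_NETWORK_DEVICE flag and the exact rs-server invocation are my assumptions based on the release notes of that period; the wiki page above is the authoritative reference):

```sh
# Build librealsense with networking support (assumed flag name)
cmake .. -DBUILD_NETWORK_DEVICE=ON && make -j"$(nproc)"

# On the machine with the camera physically attached, start the server
rs-server

# On the client machine, your own code then opens the remote device via
# the librealsense networking API (rs2::net_device) instead of local USB.
```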

@traversaro
Member Author

traversaro commented Dec 20, 2020

Thanks a lot to everyone!

I will try to summarize what I learned, also w.r.t. Q2.

On Linux librealsense has two backends:

  • The Video4Linux backend: the officially supported one, but it requires kernel patches to the v4l module, which are handled by the DKMS package shipped with the official .deb packages. This is the backend used by the official .deb packages.
  • The USB video device class backend: not "officially" supported, and it does not support the use case of multiple synchronized cameras, but it also works on non-Linux OSes and on Linux distributions for which patching the kernel is not supported. This is the backend enabled by the FORCE_RSUSB_BACKEND option, and the one used in the ROS binary packages.

An in-depth description of these two options can be found in IntelRealSense/librealsense#5212 (comment).

Regarding udev rules, I am still not sure what they are actually needed for and when. However, even when librealsense is installed from source, the udev rules can be installed separately (and we could document that, once we understand how to do it properly; see the sketch below). Related to that, it seems that for now udev rules are not included in the ROS packages of librealsense, see IntelRealSense/realsense-ros#1426.
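For instance, a sketch of installing the rules manually from a librealsense source checkout (the rules file ships in the repository's config/ directory):

```sh
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
```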

Based on this and on the feedback from @xEnVrE, my personal inclination is the following:

Let me know if you think it makes sense or whether we should do something else, thanks!

@traversaro
Member Author

traversaro commented Jun 23, 2021

On Linux librealsense has two backends:

  • The Video4Linux backend: the officially supported one, but it requires kernel patches to the v4l module, which are handled by the DKMS package shipped with the official .deb packages. This is the backend used by the official .deb packages.
  • The USB video device class backend: not "officially" supported, and it does not support the use case of multiple synchronized cameras, but it also works on non-Linux OSes and on Linux distributions for which patching the kernel is not supported. This is the backend enabled by the FORCE_RSUSB_BACKEND option, and the one used in the ROS binary packages.

Note that we recently discussed the possible use of multiple realsense cameras attached to the same machine in the context of the ergoCub project, so in the future we may need to consider supporting the Video4Linux backend.
fyi @randaz81 @DatSpace @DanielePucci @pattacini @xEnVrE

@pattacini
Member

Tagging @triccyx, who has worked on the same backend for the UltraPython.

@traversaro
Member Author

To sum up the issue, I guess the basic point is that users may have many different ways of installing librealsense, so the easiest thing is to document just one way in the docs, and avoid building librealsense in the superbuild itself.
