Increase RCVBUF_SIZE to prevent packet loss #247

Closed · tik0 opened this issue Jun 14, 2023 · 9 comments · Fixed by #249
Labels: bug (Something isn't working), enhancement (New feature or request)

tik0 commented Jun 14, 2023

We experience packet losses under some system load on our vanilla Ubuntu 20.04.
We reproducibly tested this by maxing out the system load with $ yes while running:

roslaunch ouster_lidar sensor.launch sensor_hostname:=$OUSTER_OS1_SENSOR_HOSTNAME udp_dest:=$OUSTER_OS1_UDP_DESTINATION lidar_port:=$OUSTER_OS1_UDP_PORT_LIDAR 

The applications were compiled with -O3 and release flags beforehand.
However, raising the system's rmem_default and rmem_max to e.g. 20 MB did not help, since the ouster_client explicitly sets the receive buffer to 256 kB in client.cpp.

Increasing RCVBUF_SIZE to 20 MB as well (this value was just a random guess, BTW!) solved all our problems, and we have not experienced any frame drops since.
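For context, here is a minimal, self-contained sketch (not the driver's actual code) of how a larger UDP receive buffer is requested with setsockopt and how to check what the kernel actually granted; on Linux the grant is capped by net.core.rmem_max (and reported doubled for bookkeeping), which is why both the sysctl and RCVBUF_SIZE had to be raised:

```cpp
#include <cstdio>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    // Request a 20 MB receive buffer (the value that worked in the report above).
    int requested = 20 * 1024 * 1024;
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    // The kernel doubles the value for bookkeeping and caps it at
    // net.core.rmem_max, so this reports what was actually granted.
    int granted = 0;
    socklen_t len = sizeof(granted);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &granted, &len);
    std::printf("requested %d bytes, kernel granted %d bytes\n", requested, granted);

    close(sock);
    return 0;
}
```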

However, I think that simply hard-coding a larger RCVBUF_SIZE is bad practice.
I think this value should be exposed as an argument so that it can be configured according to the user's needs.

What do you think?

Example with packet loss:
[screenshot]

Example with increased RCVBUF_SIZE and no packet loss:
[screenshot]

Samahu commented Jun 14, 2023

Definitely something to try, to see whether it actually helps in all the scenarios where users complain about packet drops. May I ask which ROS version you are currently using?

Samahu self-assigned this Jun 14, 2023
tik0 commented Jun 14, 2023

ROS Noetic. All the other configurations are pretty much vanilla.

Samahu commented Jun 15, 2023

Thanks. Before trying your suggestion, I want to try out some of the changes I implemented here and see whether they help with the issue. Unfortunately, at the moment these changes target the ROS2 branch, but I will soon port some of these improvements to ROS1. I will keep you updated on that front.

Samahu added the enhancement label Jun 15, 2023
tik0 commented Jun 15, 2023

Sure, but I think an adjustable or increased receive buffer is inevitable. We have a fairly solid hardware setup (Core i7 6000, 64 GB RAM, Intel 550 NIC) and still get packet losses when we record the Ouster's UDP data stream with tools like tshark while introducing load to the system. Only increasing tshark's default 2 MB buffer prevents packet losses. Maybe the Ouster's sending behavior stresses the connection too much: it produces about 30 MB/s and each frame seems to be sent as a burst.

Samahu commented Jun 16, 2023

> Sure, but I think an adjustable or increased receive buffer is inevitable. We have a fairly solid hardware setup (Core i7 6000, 64 GB RAM, Intel 550 NIC) and still get packet losses when we record the Ouster's UDP data stream with tools like tshark while introducing load to the system. Only increasing tshark's default 2 MB buffer prevents packet losses. Maybe the Ouster's sending behavior stresses the connection too much: it produces about 30 MB/s and each frame seems to be sent as a burst.

I agree. I just wanted to check whether the packet losses are due to the driver not polling data from the buffer fast enough. In the current ROS1 branch we poll the data using a ROS timer and publish the packets at the same time, so the pace at which we poll from the client depends on ROS. The PR I put out last week (currently only for ROS2) removes this dependency and ensures we poll from the client continuously. But yes, I would still want to look into increasing the receive buffer size and deciding what size to use as a default.
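As an illustration of that design difference, here is a rough sketch (not the actual driver code; poll_one_packet() and publish() are hypothetical stand-ins for the real client read and ROS publish calls):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical stand-ins for the real ouster client read and the ROS publish call.
bool poll_one_packet() { std::this_thread::sleep_for(std::chrono::milliseconds(1)); return true; }
void publish() {}

// Timer-paced variant: if the timer period is longer than the spacing of the
// sensor's packet bursts, packets accumulate in the socket buffer and get dropped.
void timer_callback() {
    if (poll_one_packet()) publish();
}

// Dedicated polling thread: drains the socket as fast as packets arrive,
// independent of ROS scheduling, so the socket buffer rarely fills up.
std::atomic<bool> running{true};

void poll_loop() {
    while (running) {
        if (poll_one_packet()) publish();
    }
}

int main() {
    std::thread poller(poll_loop);
    std::this_thread::sleep_for(std::chrono::seconds(1));  // demo: run briefly
    running = false;
    poller.join();
    return 0;
}
```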

KenthJohan commented:

I also had problems with packet loss in Ouster Studio and tshark. I got frustrated and just built a custom UDP capture tool using only recv + fwrite for capturing and fread + sendto for replaying.
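For readers curious what such a tool boils down to, here is a hedged sketch of the capture side (the port, the output file name, and the length-prefixed record format are my assumptions, not details from the comment above):

```cpp
#include <cstdint>
#include <cstdio>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7502);                 // assumed lidar data port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    FILE* out = std::fopen("capture.bin", "wb"); // assumed output file
    char buf[65536];                             // max UDP payload size
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0) break;
        uint32_t size = static_cast<uint32_t>(n);
        std::fwrite(&size, sizeof(size), 1, out); // record length, then payload
        std::fwrite(buf, 1, size, out);
    }
    std::fclose(out);
    close(sock);
    return 0;
}
```

The replay side would then be a matching fread of the length and payload followed by sendto.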

Samahu commented Oct 31, 2023

The buffer increase is included in PR ouster-lidar/ouster-sdk#565.

Samahu transferred this issue from ouster-lidar/ouster-sdk Nov 1, 2023
Samahu added the bug label Nov 1, 2023
tik0 commented Nov 6, 2023

> I also had problems with packet loss in Ouster Studio and tshark. I got frustrated and just built a custom UDP capture tool using only recv + fwrite for capturing and fread + sendto for replaying.

Hi @KenthJohan, this sounds nice. What other optimizations have you made to the system? Have you increased the OS's socket buffers or the NIC ring buffers?

KenthJohan commented:

> I also had problems with packet loss in Ouster Studio and tshark. I got frustrated and just built a custom UDP capture tool using only recv + fwrite for capturing and fread + sendto for replaying.

> Hi @KenthJohan, this sounds nice. What other optimizations have you made to the system? Have you increased the OS's socket buffers or the NIC ring buffers?

Hi, sorry for the late answer. I have not touched Ubuntu's default network settings. But there is a difference between UDP recv and tshark: tshark captures at the data link layer (layer 2), while UDP recv captures at the transport layer (layer 4). I believe more things can go wrong with tshark because it captures more layers and headers, whereas UDP recv only captures the size and the data.
