Use for long continuous SLAM #43

Closed
Jaetriel opened this issue Jul 19, 2021 · 12 comments

@Jaetriel

Hey, thank you again for the amazing work you guys have put in! I have a couple of questions about performance and optimization of the algorithm during longer scanning sessions.

I noticed that my RAM usage goes up higher and higher each second while the map is being populated, and after a while I suppose it would overflow and crash the program (I have 32 GB on my machine, so it would take a few minutes, but still). Is there any way to stop this, or at least to clear some of the scans after a certain time so we can free some RAM? I'm guessing this is largely due to RViz using a lot of memory to visualize the whole map with that many points (my lidar has 128 channels, so I get around 2 million points per second).

That is my second question as well: I'm trying to think of a way to visualize a sparser version of the map in RViz so I can save on PC resources, but I would still like the end result in the PCD to have the same quality as with publish_dense set to 1, i.e. with all the points. One solution that comes to mind is a separate topic that is more filtered and less dense, just for visualization in RViz. Let me know if you have suggestions, thanks!
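A minimal sketch of that "separate sparser topic" idea, assuming PCL and pcl::PointXYZI points. The helper name, the leaf size, the camera_init frame id, and the suggested /cloud_registered_sparse topic are illustrative assumptions, not part of FAST-LIO:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl_conversions/pcl_conversions.h>

// Hypothetical helper: voxel-downsample the registered scan before publishing a
// separate, RViz-only topic, while the full-resolution cloud still feeds the PCD.
void publishSparseForRviz(const pcl::PointCloud<pcl::PointXYZI>::Ptr &full_cloud,
                          ros::Publisher &pub_vis, double leaf = 0.2)
{
    pcl::PointCloud<pcl::PointXYZI> sparse;
    pcl::VoxelGrid<pcl::PointXYZI> voxel;
    voxel.setInputCloud(full_cloud);
    voxel.setLeafSize(leaf, leaf, leaf);   // 0.2 m voxels -> far fewer points to render
    voxel.filter(sparse);

    sensor_msgs::PointCloud2 msg;
    pcl::toROSMsg(sparse, msg);
    msg.header.frame_id = "camera_init";   // world frame shown in RViz (assumption)
    msg.header.stamp = ros::Time::now();
    pub_vis.publish(msg);                  // e.g. advertised as /cloud_registered_sparse
}
```

RViz would then subscribe only to the sparse topic, while the full-resolution cloud keeps going into the saved PCD.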

@XW-HKU
Member

XW-HKU commented Jul 19, 2021

You are right. RViz will definitely use too much RAM if it runs for too long. There are two methods that may help:

  1. Set the Decay Time parameter of the cloud_registered topic in RViz to a smaller value, e.g. 100 seconds.

  2. publish_dense only controls what RViz shows; it does not influence the PCD saving, so you can set it to 0 to lighten the RViz load.

May I ask how long the duration of your bags usually is? I am a little afraid that the PCD saving function may run into problems if the duration is too long.

XW-HKU closed this as completed Jul 19, 2021
XW-HKU reopened this Jul 19, 2021
@Jaetriel
Author

Hey @XW-HKU, thanks for the quick answer. I tried changing the decay time; it does help somewhat, so I will use it. My bag duration in most cases is around 15-20 minutes. Yesterday, while testing some more, I noticed that the publish rate of the cloud_registered topic sometimes varies and drops below 10 Hz (which is the lidar frequency). Is that because there are a lot of points to process in very little time, so the publishing slows down due to all the computation, or is there some other reason?

@XW-HKU
Member

XW-HKU commented Jul 20, 2021

> Hey @XW-HKU, thanks for the quick answer. I tried changing the decay time; it does help somewhat, so I will use it. My bag duration in most cases is around 15-20 minutes. Yesterday, while testing some more, I noticed that the publish rate of the cloud_registered topic sometimes varies and drops below 10 Hz (which is the lidar frequency). Is that because there are a lot of points to process in very little time, so the publishing slows down due to all the computation, or is there some other reason?

The runtime performance of FAST-LIO is not influenced by the duration, but RViz eats a lot of computation resources, and so does the PCD saving function. You can close RViz, disable the PCD saving function, and test again to verify whether the runtime performance is affected.

@XW-HKU
Member

XW-HKU commented Jul 20, 2021

And what is your lidar type, and what computer and CPU are you using?

@Jaetriel
Author

So, after turning off both RViz and the PCD saving function, RAM usage no longer goes up and stays pretty much stable. I also tested with just PCD saving and no RViz, and saw that it still consumes RAM, but a lot less than when RViz is running as well. The sensor I am using is an Ouster OS1-128, and the PC has an 8-core ARM v8.2 64-bit CPU with 32 GB of RAM.

@Jaetriel
Author

I also noticed that when switching the publish_dense option for the exact same bag in ROS, one scan ended up with 2.1 million points with the dense scan not published, and with over 20 million points when I set publish_dense back to 1. That was a bit weird to me, because I thought it shouldn't make a difference for the end map.

@XW-HKU
Member

XW-HKU commented Jul 20, 2021

> I also noticed that when switching the publish_dense option for the exact same bag in ROS, one scan ended up with 2.1 million points with the dense scan not published, and with over 20 million points when I set publish_dense back to 1. That was a bit weird to me, because I thought it shouldn't make a difference for the end map.

dense_publish is the switch that controls the point count of the rostopic /cloud_registered. I am a bit confused about the "end map" you mentioned. What is that? If you are talking about the PCD file containing every registered scan, the topic /cloud_registered shouldn't change the PCD point count (if the PCD's point count changed, there must be a bug I need to fix, please tell me). As for the localization, it only needs a sparse map, which is not published and is irrelevant to the publish_dense parameter.

@Jaetriel
Author

@XW-HKU yes, dense_publish changed the number of points in the final PCD file that gets saved. In one case it was a 729 MB file with about 21 million points, and with dense_publish off it was 66 MB with around 2.1 million points, for the exact same bag file.

@XW-HKU
Member

XW-HKU commented Jul 20, 2021

> @XW-HKU yes, dense_publish changed the number of points in the final PCD file that gets saved. In one case it was a 729 MB file with about 21 million points, and with dense_publish off it was 66 MB with around 2.1 million points, for the exact same bag file.

Thank you very much for finding this problem; I will fix it ASAP.

@Jaetriel
Author

@XW-HKU thank you. Perhaps the problem is in the publish_world_frame function, where the size of laserCloudWorld is determined by whether dense_publish is enabled or not.
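If that is indeed the cause, the pattern would look roughly like the sketch below. This is a simplified, self-contained illustration of the idea, not the repository's actual code; the global buffers and the transform are hypothetical stand-ins, and only laserCloudWorld, the dense flag, and the function name mirror what is mentioned in this thread.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Hypothetical, simplified stand-ins for the buffers involved (not the repo's code).
using CloudT = pcl::PointCloud<pcl::PointXYZI>;
bool dense_publish = true;                      // the switch discussed in this thread
CloudT::Ptr full_scan_body(new CloudT);         // full-resolution scan, body frame
CloudT::Ptr downsampled_scan_body(new CloudT);  // downsampled scan, body frame
CloudT::Ptr pcl_wait_save(new CloudT);          // accumulator written to the PCD at exit

// Placeholder for the per-point body-to-world transform (identity here).
void pointBodyToWorld(const pcl::PointXYZI &in, pcl::PointXYZI &out) { out = in; }

void publish_world_frame_sketch()
{
    // Choosing dense or sparse points for the /cloud_registered topic is fine:
    CloudT::Ptr src = dense_publish ? full_scan_body : downsampled_scan_body;

    CloudT::Ptr laserCloudWorld(new CloudT(src->size(), 1));
    for (size_t i = 0; i < src->size(); ++i)
        pointBodyToWorld(src->points[i], laserCloudWorld->points[i]);
    // ... convert laserCloudWorld to a PointCloud2 and publish it ...

    // Suspected bug: reusing the same buffer for the PCD accumulator makes the
    // saved map inherit the density of whatever was published:
    // *pcl_wait_save += *laserCloudWorld;

    // Fix: always accumulate the PCD from the full-resolution scan, independent
    // of the dense_publish flag.
    CloudT::Ptr full_world(new CloudT(full_scan_body->size(), 1));
    for (size_t i = 0; i < full_scan_body->size(); ++i)
        pointBodyToWorld(full_scan_body->points[i], full_world->points[i]);
    *pcl_wait_save += *full_world;
}
```

The point is simply that the buffer feeding the PCD accumulator should be built from the full-resolution scan regardless of the publish switch.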

@XW-HKU
Member

XW-HKU commented Jul 22, 2021

> @XW-HKU yes, dense_publish changed the number of points in the final PCD file that gets saved. In one case it was a 729 MB file with about 21 million points, and with dense_publish off it was 66 MB with around 2.1 million points, for the exact same bag file.

I just tested both conditions, with the scans_dense_enable parameter ON and OFF, but the result on my computer shows that the two PCD files are exactly the same size. So could you please update to the newest code and check again?

[screenshot: dense_check]

@Jaetriel
Author

Hi @XW-HKU, thank you for fixing it. I pulled the latest code and tried it again, and now the PCD is the same size with and without dense_pub_en.
