
Reduce memory footprint #449

Open
molinadavid opened this issue Oct 10, 2019 · 3 comments

@molinadavid
Contributor

Hi Mathieu,

I would like to ask how to reduce the memory footprint when loading databases. We had a problem on an Nvidia Xavier with 16 GB of RAM: the database is 3.5 GB in size, and when we launched the process the RAM would fill up completely and the process would crash.

We worked around this by adding a swap partition, and even then I could see swap usage grow up to 4.0 GB after the RAM had filled up.

I attach a video of the process and will share the database with you if necessary.
(video attached: memory footprint)

So to summarize this issue:

  • How can we reduce the memory footprint for large databases?
  • Would it be possible to use swap memory first (if present) before filling the physical RAM?

As always thank you very much for your help.

Regards,
David

@matlabbe
Member

Hi David,

When reloading a database, RAM is used mainly for the visual dictionary and the local occupancy grids. What is the size of the dictionary? How many features are extracted per frame? From what I can see, there are about 900 locations in the database; with the default 1000 features per frame, that would generate, in the worst case, 900 000 words in the dictionary. If SIFT or SURF (128 floats) is used, it would require ~460 MB to hold the full dictionary in RAM.
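As a back-of-envelope check of that figure (a sketch; it assumes the worst case where every extracted feature becomes a unique word, and 4-byte float descriptor elements):

```python
def dictionary_ram_bytes(locations, features_per_frame, descriptor_dim, bytes_per_elem):
    """Worst-case RAM for the visual dictionary: every feature becomes a word."""
    words = locations * features_per_frame
    return words * descriptor_dim * bytes_per_elem

# ~900 locations x 1000 features/frame = 900 000 words;
# SIFT/SURF: 128 floats x 4 bytes = 512 bytes per descriptor.
ram = dictionary_ram_bytes(900, 1000, 128, 4)
print(f"{ram / 1e6:.1f} MB")  # 460.8 MB, i.e. the ~460 MB above
```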

For the local occupancy grids, I see in the log that the resolution is quite fine, about 1 cm grid cell size (the default is 5 cm). If the local occupancy grids are 3D, ray tracing is enabled and the grid range is high, holding them all in RAM can require a lot of memory. If you cannot share the database, can you take a screenshot of the Info view in DatabaseViewer?
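To illustrate why 1 cm cells get expensive (a sketch; real grids store cells sparsely, so absolute numbers are much lower, but the scaling ratio holds):

```python
def cells_in_cube(range_cm, cell_cm, three_d=True):
    """Number of grid cells covering a cube of the given range, per cell size."""
    per_axis = range_cm // cell_cm
    return per_axis ** (3 if three_d else 2)

# Going from the default 5 cm cells to 1 cm cells in 3D
# multiplies the cell count by (5/1)^3 = 125.
ratio = cells_in_cube(500, 1) / cells_in_cube(500, 5)
print(ratio)  # 125.0
```

The same change in 2D only costs a factor of 25, which is one reason Grid/3D=false helps so much.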

To reduce the dictionary size, use fewer words per frame (Vis/MaxFeatures). To reduce the local grid size, increase Grid/CellSize, set a maximum range with Grid/RangeMax, and use 2D occupancy grids if possible (Grid/3D=false).
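These parameters can be set from the launch file. A minimal sketch, assuming a rtabmap_ros node (the parameter names are the real RTAB-Map ones above; the node setup and the values themselves are illustrative and should be tuned for your robot):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Fewer visual words per frame to shrink the dictionary (illustrative value) -->
  <param name="Vis/MaxFeatures" type="string" value="500"/>
  <!-- Coarser local occupancy grids with a bounded range -->
  <param name="Grid/CellSize"   type="string" value="0.05"/>
  <param name="Grid/RangeMax"   type="string" value="5.0"/>
  <!-- 2D grids instead of 3D, if possible -->
  <param name="Grid/3D"         type="string" value="false"/>
</node>
```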

I also see that the update time of RTAB-Map is 5 sec, which is quite high. If you load the database in localization mode, it should be a lot less (&lt;1 sec).

cheers,
Mathieu

@molinadavid
Contributor Author

Hi Mathieu,

I'm sorry it took this long to confirm this issue. We have been testing a lot, and the memory footprint is much lower than before; we now get a consistent RTAB-Map update time between 0.5 and 1.5 sec.

Now we want to optimize even further to reduce the computation load as much as possible. This is the launcher we are currently using, rtabmap_navigate.launch, and I will share a database sample with you by mail.

Best regards,
David

@matlabbe
Member

matlabbe commented Jan 7, 2020

Hi David,
Sorry for the delay in answering; the end of my last semester was intense.

For mapping quality, remove RGBD/NeighborLinkRefining, as the odometry is already quite good and you are not using a lidar (RGBD/NeighborLinkRefining seems false in the launch file but was not in the database). For better marker optimization, if you use the marker detection embedded in rtabmap (RGBD/MarkerDetection), Marker/VarianceAngular=9999 and Marker/VarianceLinear=0.0001 seem to optimize appropriately (the first cone, seen again near the end, is correctly put back over its original position):
(screenshot attached: Screenshot from 2020-01-07 10-52-28)

To save some computation power, RGBD/ProximityBySpace could be set to false since you are using landmarks. Setting RGBD/NeighborLinkRefining to false, as above, will also save some time. For such large spaces, setting Grid/CellSize to 0.1 instead of 0.02 will save time in grid map generation, unless you really need an occupancy grid with 2 cm cell resolution. Eventually, in a production setting, you could set Mem/BinDataKept to false to save around 44 ms per frame; however, as long as you are debugging, I strongly recommend keeping binary data. Keeping binary data is also useful to recover the database if the robot crashes or loses power suddenly (the database can be corrupted if the rtabmap node does not exit normally).
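Applied to the launch file, the suggestions above might look like this (a sketch; the parameter names and values are the ones from this thread, the node setup is illustrative):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Landmarks are used, so proximity detection by space can be disabled -->
  <param name="RGBD/ProximityBySpace"     type="string" value="false"/>
  <!-- Good odometry and no lidar: skip neighbor link refining -->
  <param name="RGBD/NeighborLinkRefining" type="string" value="false"/>
  <!-- 10 cm cells are enough for large spaces -->
  <param name="Grid/CellSize"             type="string" value="0.1"/>
  <!-- Marker optimization settings from above -->
  <param name="Marker/VarianceAngular"    type="string" value="9999"/>
  <param name="Marker/VarianceLinear"     type="string" value="0.0001"/>
  <!-- Production only (~44 ms/frame saved); keep true while debugging -->
  <param name="Mem/BinDataKept"           type="string" value="false"/>
</node>
```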

cheers,
Mathieu
