
Reference material for the SlamDynamicConfig class / Is implementation of search_map_by_projection same as ORB-SLAM2? #81

Closed
Mechazo11 opened this issue Sep 12, 2022 · 1 comment


@Mechazo11

Hello @luigifreda

Could you provide some reference material for the SlamDynamicConfig class? It seems to be an important part of the pyslam pipeline, but there is not much documentation about it.

Regarding the search_map_by_projection() method in search_points.py: this method alone introduces the largest delay (5 seconds on average). I was wondering whether the implementation is the same as that of ORB-SLAM2 (i.e., ORB-SLAM 1)?

With best,
@Mechazo11

@luigifreda
Owner

luigifreda commented Jan 23, 2023

Hi,
thanks for your feedback.

  1. SlamDynamicConfig helps to robustly estimate the standard deviation of the descriptor distances by using the MAD (Median Absolute Deviation).
    https://github.com/luigifreda/pyslam/blob/master/utils_features.py#L133
    Moreover, see for instance: https://en.wikipedia.org/wiki/Median_absolute_deviation

  2. search_map_by_projection() works in a very standard and well-known way. As for the performance, did you have a chance to read the main README?
    "
    You can use this framework as a baseline to play with local features, VO techniques and create your own (proof of concept) VO/SLAM pipeline in python. When you test it, consider that's a work in progress, a development framework written in Python, without any pretense of having state-of-the-art localization accuracy or real-time performances.
    "
    As you may know, Python is not the best language for writing a real-time SLAM framework. It does not provide true multi-threading parallelism: see, for instance, this discussion
    https://stackoverflow.com/questions/4496680/python-threads-all-executing-on-a-single-core
    All Python threads actually run on a single core due to GIL limitations (assuming the default CPython interpreter is used). Better parallelism could be achieved by using the multiprocessing module. Clearly, there is always room for improvement. Feel free to open a PR with a better implementation.
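For reference, here is a minimal sketch of the MAD-based robust sigma estimate mentioned in point 1 (this is my own illustration, not the actual pyslam code; see utils_features.py for the real implementation):

```python
import numpy as np

def mad_sigma(x):
    """Robust estimate of the standard deviation via the Median Absolute
    Deviation. For Gaussian data, sigma ~= 1.4826 * MAD; unlike np.std,
    the estimate is barely affected by outliers."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    return 1.4826 * np.median(np.abs(x - med))

# A single large outlier inflates np.std but leaves mad_sigma almost unchanged:
distances = [1, 2, 3, 4, 5, 100]
print(mad_sigma(distances))   # ~2.22
print(np.std(distances))      # ~36.2, dominated by the outlier
```

This kind of robust statistic is what makes a dynamically adapted descriptor-distance threshold stable in the presence of bad matches.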
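For readers unfamiliar with the "standard way" mentioned in point 2, here is a simplified, hypothetical sketch of projection-based matching: project each map point into the current frame, discard points behind the camera or outside the image, and match the point's descriptor to nearby keypoint descriptors. All names are mine and the real code includes more checks (scale, viewing angle, rotation consistency):

```python
import numpy as np

def project_points(points_3d, Rcw, tcw, K):
    """Project world points into the image; return pixel coords and depths."""
    pc = points_3d @ Rcw.T + tcw            # world -> camera frame
    uvw = pc @ K.T                           # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3], pc[:, 2]

def search_by_projection(points_3d, point_desc, Rcw, tcw, K,
                         kp_uv, kp_desc, img_size,
                         radius=5.0, max_desc_dist=60.0):
    """For each map point: project it, keep it if it lands in front of the
    camera and inside the image, then match it to the closest-descriptor
    keypoint found within `radius` pixels. Returns (point_idx, kp_idx) pairs."""
    H, W = img_size
    uv, z = project_points(points_3d, Rcw, tcw, K)
    matches = []
    for i, (pu, pv) in enumerate(uv):
        if z[i] <= 0 or not (0 <= pu < W and 0 <= pv < H):
            continue                         # behind camera or out of view
        d2 = (kp_uv[:, 0] - pu) ** 2 + (kp_uv[:, 1] - pv) ** 2
        cand = np.where(d2 <= radius ** 2)[0]
        if cand.size == 0:
            continue                         # no keypoints in search window
        dd = np.linalg.norm(kp_desc[cand] - point_desc[i], axis=1)
        if dd.min() <= max_desc_dist:
            matches.append((i, int(cand[np.argmin(dd)])))
    return matches
```

The brute-force window search above is the main cost; real implementations speed it up with a per-cell keypoint grid, but a pure-Python inner loop like this is exactly where the interpreter overhead the answer mentions shows up.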
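A toy illustration of the suggested multiprocessing approach: each worker process has its own interpreter and its own GIL, so CPU-bound work genuinely runs on multiple cores, unlike Python threads (this is just a demo of the technique, not pyslam code):

```python
import math
import multiprocessing as mp

def count_primes(bounds):
    """CPU-bound worker: count primes in [lo, hi)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, math.isqrt(n) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the range into chunks and farm them out to worker processes.
    # With threads, the GIL would serialize this; with processes it scales.
    chunks = [(2, 25_000), (25_000, 50_000), (50_000, 75_000), (75_000, 100_000)]
    with mp.Pool(processes=4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 9592 primes below 100000
```

The trade-off is that processes do not share memory, so map/frame data must be pickled across process boundaries (or placed in shared memory), which is why retrofitting multiprocessing into a thread-based SLAM pipeline is non-trivial.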
