
Discuss map save & load #246

Closed

AlejandroSilvestri opened this issue Jan 31, 2017 · 20 comments

@AlejandroSilvestri

When I needed to save and load maps, I saw many ORB-SLAM2 forks doing it in many ways, some not working, most saving unnecessary data.

I went on a deep journey and got a decent working minimum binary file size by avoiding saving a lot of members: some ephemeral, some rebuildable. But I did it on a modified version of ORB-SLAM2 that works only in monocular mode.

If someone wants to debate and code a neat serializer class for ORB-SLAM2 to share, this is the place to start.

@emphos1

emphos1 commented Feb 2, 2017

hey @AlejandroSilvestri
As far as I know, there are a few available implementations for that purpose: Shomin's code, Poine's code, and your code. Because you are an expert in this field, I would like to ask you a question: which one really works without ROS?
So far I have tried Shomin's code (mono_tum, mono_kitti), but it seems it cannot track the camera because it fails at the initialization step. In other words, when I run it I cannot see anything in the viewer (it keeps trying to initialize). Does Shomin's code only work for RGB-D?
Thanks.

@AlejandroSilvestri

@emphos1 , I know of three save/load implementations, besides mine:

The last one uses boost::serialization, which I believe is the best choice for long-term maintenance and versioning. All three save in binary, which minimizes file size but makes the files difficult to read outside ORB-SLAM2.

I believe none of them needs ROS; at least the save/load code doesn't use it. I didn't try them, so I don't know how stable they are.

Poine's/Shomin's works only for monocular and loses map information; whether it works for you depends on what you are going to do.

On the contrary, MathewDenny's saves a lot of ephemeral data just in case.

Mathew discovered a clue about missing pointers that can crash the application during save and load. I was investigating this issue myself, but couldn't completely solve it.

@void-robotics

Which would you say is the most stable load/save fork, and can you give a code example of loading and saving?

@AlejandroSilvestri

I didn't try those codes; I read them to inspire mine.

I'm using boost serialization, a nice scalable approach.

As an example you can start with
https://github.com/AlejandroSilvestri/os1/blob/PuntosLejanos/src/Serializer.cpp

With boost serialization you write a serialize method in each class you want to serialize.
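The pattern looks roughly like this. This is a toy sketch, not ORB-SLAM2's real classes: I mimic the boost::serialization archive interface with a hand-rolled text archive so the example stands alone, and the MapPoint members here are hypothetical, reduced to what cannot be rebuilt.

```cpp
#include <sstream>
#include <string>

// Toy text archives mimicking the boost::serialization pattern:
// a single templated serialize() per class drives both save and load.
struct OutArchive {
    std::ostringstream os;
    template <class T>
    OutArchive& operator&(const T& v) { os << v << ' '; return *this; }
};

struct InArchive {
    std::istringstream is;
    explicit InArchive(const std::string& s) : is(s) {}
    template <class T>
    InArchive& operator&(T& v) { is >> v; return *this; }
};

// Hypothetical, stripped-down map point: persist only the id and
// the world position; ephemeral members get rebuilt after loading.
struct MapPoint {
    long id = 0;
    double x = 0, y = 0, z = 0;
    template <class Archive>
    void serialize(Archive& ar) { ar & id & x & y & z; }
};
```

Saving runs `p.serialize(oa)` against an OutArchive and loading runs the very same method against an InArchive, so save and load can never drift apart. With real Boost you would include boost/archive/binary_oarchive.hpp instead and let the library handle versioning and pointer tracking.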

@void-robotics

Thanks, although I'm not great at reading Spanish, I will take a look.

@microcompunics

I'm trying to port Poine's save & load map code to Windows; it works in debug mode but not in release. On Ubuntu it works.

Has anyone tried to save & load a map on Windows?

@AlejandroSilvestri

@nathangeorge1 , yeah, I know, I'm sorry, I commented ORB-SLAM2 for my students. It's a pity you can't get an easy-to-use translator. I believe translators won't work on code.

@microcompunics , wow, I'm sorry to hear that, that's why these projects are made on Linux.

The only guess I can add from my experience with save and load: I found a lot of broken pointers, and they crash the application more often in release mode than in debug mode.

@antithing

@microcompunics I am looking into using the MathewDenny fork save/load functions, with stereo, on windows. Have you managed to get any save/load working?

@microcompunics

@AlejandroSilvestri thanks for your reply.

@antithing still trying on windows no result yet.

@Harry-675

@microcompunics I am trying to save and load an ORB-SLAM2 map on Windows but ran into a problem. Have you solved it?

@void-robotics

I approached it a different way: I sent all the necessary data over a ROS topic, and within ROS you can record topic data with rosbags. ROS doesn't support Windows, but ROS2 will.

@zltrock

zltrock commented Jun 22, 2018

@nathangeorge1 I am using ROS too. What topic do you publish, PointCloud2? Where can I get the source code?

@void-robotics

@zltrock I can't remember too much as I did this years ago. If you know what data you need, you can publish on a new topic of your choice. If you're asking me which topic you should be publishing, I just can't remember.

@Shubhamvithalani

Shubhamvithalani commented Jan 11, 2019

@AlejandroSilvestri I was able to save and load the map using serialization. I tried to relocalize on a pre-loaded map and was able to do it successfully. Now I want the relocalized position with respect to the global map. Can you suggest any methods? My aim is to travel from a point A to a point B inside the map using SLAM, so I need to establish the global positions of A and B.

@AlejandroSilvestri

AlejandroSilvestri commented Jan 11, 2019

@Shubhamvithalani , not an easy task. You can think of many ways to do that; let me start with what I think is the simplest approach.

First, an easy one: get the A and B positions in the ORB-SLAM2 world reference, and express your destination in that reference.

Second, a simple approach: convert from one world to another. Choose a real-world pose A as the real-world reference, and the real position (in meters) of a point B.

With ORB-SLAM2 running with your map, place your camera at the chosen pose A in the real world and get the pose matrix Tcw = currentFrame.mTcw; we can call it Taw. Repeat with B to get Tbw. We are only interested in B's position; B's orientation doesn't matter.

The pose Twc (the inverse of Tcw) expresses the camera pose in the ORB-SLAM2 world reference. To transform it to the real world, apply Taw:

Tac = Taw * Twc

Now you have poses in the A reference, but the scale is still not real. Here is where you use B. Get the AB distance both in the real world (in meters) and in the ORB-SLAM2 world, and divide them to get the scale factor. Multiply this factor by the translation part of Tac to convert it to meters.

With a little math you can pack the factor into Tac, so all you have is a single transformation matrix that goes from the ORB-SLAM2 world to the real world.

A more laborious approach: you can transform all map point positions and keyframe poses to the new reference, so the ORB-SLAM2 world will match the real world.

A more complex approach: to eliminate drift, you can adjust many keyframes' real-world poses and then bundle adjust. This is only the idea; the real work on the map is far more complex, similar to loop closing.

@Shubhamvithalani

@AlejandroSilvestri I am going with the first, easy approach. I managed to figure out my current position and destination, which are in the form of translation vectors and quaternions. Now I want to compute the amount of translation and the angle of rotation needed to go from the (N)th to the (N+1)th keyframe. I am a beginner at navigation and path planning; can you suggest some methods?

@AlejandroSilvestri

@Shubhamvithalani ,

A pose is a 4x4 rototranslation matrix in homogeneous coordinates. The last row is always 0, 0, 0, 1. The last column is the 3D translation with 1 as its fourth element. The upper-left 3x3 submatrix is the rotation matrix.

Tcb is Tbc inverse
Tab = Tac * Tcb

Tab has translation and rotation from A to B (i.e. B in reference to A).
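In code it looks like this; a minimal sketch with plain 4x4 arrays and my own function names. The rotation angle comes from the standard trace identity for rotation matrices, theta = acos((trace(R) - 1) / 2).

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// C = A * B for 4x4 homogeneous transforms.
Mat4 matmul(const Mat4& A, const Mat4& B) {
    Mat4 C{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Inverse of a rigid transform [R t; 0 1] is [R^T  -R^T t; 0 1].
Mat4 invertRigid(const Mat4& T) {
    Mat4 inv{};
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j)
            inv[i][j] = T[j][i];
        for (int j = 0; j < 3; ++j)
            inv[i][3] -= T[j][i] * T[j][3];
    }
    inv[3][3] = 1.0;
    return inv;
}

// Tab = Tac * Tcb, with Tcb = inverse(Tbc) as in the comment above.
Mat4 relativePose(const Mat4& Tac, const Mat4& Tbc) {
    return matmul(Tac, invertRigid(Tbc));
}

// Amount of translation: the norm of the pose's translation column.
double translationNorm(const Mat4& T) {
    return std::sqrt(T[0][3]*T[0][3] + T[1][3]*T[1][3] + T[2][3]*T[2][3]);
}

// Angle of rotation from the 3x3 block: theta = acos((trace(R)-1)/2).
double rotationAngle(const Mat4& T) {
    double c = (T[0][0] + T[1][1] + T[2][2] - 1.0) / 2.0;
    if (c > 1.0) c = 1.0;      // clamp against rounding noise
    if (c < -1.0) c = -1.0;
    return std::acos(c);
}
```

For keyframe N to N+1, feed the two keyframe poses in as Tac and Tbc; translationNorm and rotationAngle of the result give the two quantities asked about.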

@Shubhamvithalani

@AlejandroSilvestri ,
Thanks for your prompt reply and for clearing up a lot of doubts. After getting the Tab matrix, since I am using a ground vehicle, I am ruling out rotation along x and z. So by inverse kinematics I can calculate the angle of rotation (theta) needed to reach point B. Is this approach correct, or do I need to make changes?

@AlejandroSilvestri

@Shubhamvithalani ,

It is correct.

Let's say your vehicle has a camera (point C), we call A the origin of the real world, and you want to go to point B.

Tac is the camera pose (the vehicle pose) in the real world. vab is point B's position vector in the real world.

vcb = Tca * vab

The vector vcb points to destination B in the vehicle's reference system. It's a ground vehicle, so you can ignore the vertical component Y. The modulus of vcb is the distance to B, and you can easily get the angle too.
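A sketch of this last step, assuming the usual camera axes (X right, Y down, Z forward, so Y is the vertical to ignore); the helper names are mine:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec3 = std::array<double, 3>;

// vcb = Tca * vab, where Tca is the inverse of the rigid pose Tac.
// For a rigid transform this is simply R^T * (vab - t).
Vec3 toVehicleFrame(const Mat4& Tac, const Vec3& vab) {
    Vec3 d;
    for (int i = 0; i < 3; ++i) d[i] = vab[i] - Tac[i][3];
    Vec3 vcb{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            vcb[i] += Tac[j][i] * d[j];   // transposed rotation block
    return vcb;
}

// Ground vehicle: ignore the vertical component Y (index 1).
double groundDistance(const Vec3& vcb) {
    return std::sqrt(vcb[0]*vcb[0] + vcb[2]*vcb[2]);
}

// Heading to B around the vertical axis, relative to the forward axis Z.
double headingAngle(const Vec3& vcb) {
    return std::atan2(vcb[0], vcb[2]);
}
```

headingAngle is zero when B lies straight ahead and signed left/right otherwise, which is what a differential-drive controller would steer on.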

@Jiaxin8122

> (quoted @AlejandroSilvestri's earlier comment above on converting between the ORB-SLAM2 world and the real world)

Hi, I want to convert coordinates from ORB-SLAM2 (mono) to a lidar's coordinate frame and read this comment, but I have a few questions. In our case a camera-lidar calibration method is not suitable, so I want to use the trajectories to recover R and t. But with a monocular camera I don't know the scale in ORB-SLAM2, and I am confused about what to do. I now have trajectory files from the lidar and from ORB-SLAM2. My idea is to take sqrt((x - x_prev)(x - x_prev) + (y - y_prev)(y - y_prev) + (z - z_prev)(z - z_prev)) between two consecutive frames of the lidar trajectory and divide it by the same quantity from the ORB-SLAM2 trajectory to get the scale. But I don't know whether this is the scale I want. If it is, I would apply cur_t_ = cur_t_ + scale*t. Is this the absolute scale?
I am totally new to SLAM. Very grateful for your answer!
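What I mean, as a sketch: summing the segment lengths over the whole trajectories instead of a single frame pair, which should be less sensitive to noise. This assumes the two trajectories are already time-aligned (sample i in one corresponds to sample i in the other); the function names are made up for illustration.

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

// Euclidean distance between two 3D positions.
double dist(const Vec3& a, const Vec3& b) {
    double dx = a[0]-b[0], dy = a[1]-b[1], dz = a[2]-b[2];
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Scale = total lidar path length / total mono-SLAM path length,
// accumulated over consecutive frames of the aligned trajectories.
double trajectoryScale(const std::vector<Vec3>& lidar,
                       const std::vector<Vec3>& slam) {
    double lidarLen = 0.0, slamLen = 0.0;
    for (std::size_t i = 1; i < lidar.size() && i < slam.size(); ++i) {
        lidarLen += dist(lidar[i], lidar[i-1]);
        slamLen  += dist(slam[i],  slam[i-1]);
    }
    return lidarLen / slamLen;
}
```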
