Pass in an initial rotation? #249
Hi Antithing,
I found these links:
https://www.youtube.com/watch?v=bokmduLlBqs
References pre-init for IMU here:
https://github.com/tdnet12434/ORB_SLAM2/tree/imu_preint_init_only?files=1
The whole project, based on the ORB-SLAM paper (https://arxiv.org/abs/1610.05949), is here:
https://github.com/jingpang/LearnVIORB
@antithing , at any time you can compare your IMU reading with the current frame pose, determine a correction rotation, and apply it to every map point and keyframe. The sooner, the faster. Initial triangulation is made from two frames, the second one being the current frame. When initial triangulation succeeds, the first of those frames is taken as the origin, the center of the map, with zero rotation. But at that time your actual pose corresponds to the second frame (the current frame), which is neither at the map center nor at zero rotation. There are two places to do it:
The first is easier: you have access to both initial keyframes and to all map points before they go into the map. The latter lets you apply your rotation correction whenever you want, but you'll need access to both protected sets Map::mspKeyFrames and Map::mspMapPoints, and you'll want to stop the local mapper before applying the correction.
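A minimal sketch of the second option (correcting an already-built map), assuming a correction rotation Rcorr (3x3, CV_32F) taking the old world frame to the desired IMU-aligned one, and using the public getters Map::GetAllKeyFrames / Map::GetAllMapPoints instead of the protected sets:

```cpp
// Assumed: pMap is the ORB_SLAM2::Map*, Rcorr is a 3x3 CV_32F rotation
// (old world -> IMU-aligned world). Stop the local mapper before running this.
cv::Mat TcorrInv = cv::Mat::eye(4, 4, CV_32F);
cv::Mat RcorrInv = Rcorr.t();
RcorrInv.copyTo(TcorrInv.rowRange(0, 3).colRange(0, 3));

// Keyframe poses Tcw map world -> camera, so the corrected pose is Tcw * Tcorr^-1.
for (ORB_SLAM2::KeyFrame* pKF : pMap->GetAllKeyFrames())
    if (pKF && !pKF->isBad())
        pKF->SetPose(pKF->GetPose() * TcorrInv);

// Map point positions live in world coordinates, so rotate them directly.
for (ORB_SLAM2::MapPoint* pMP : pMap->GetAllMapPoints())
    if (pMP && !pMP->isBad())
        pMP->SetWorldPos(Rcorr * pMP->GetWorldPos());
```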
@Kirosoft Thank you! I am on Windows, with no ROS, but I will take a look at that project to get my head around it. @AlejandroSilvestri, that sounds fairly simple, have you done this already? So I just take:
and replace the rotation value with my IMU matrix, correct? Does this:
use that keyframe's rotation to create the map points? Will I need to do this for every new keyframe, or will setting the initial rotation mean that any keyframe created afterwards is based on this initial one? Thanks again, I very much appreciate your help.
@antithing , that's right. Almost. I believe this is the perfect point to change the code: you should correct the pKFcur and pKFini poses with KeyFrame::SetPose, and all map point positions with MapPoint::SetWorldPos, or earlier, in their very construction. pKFini should have the IMU rotation, and pKFcur should preserve the rotation between both keyframes. You must do some matrix products here. In KeyFrame::SetPose(T), T is a 4x4 Mat, an isometric transformation in homogeneous coordinates: a 3x3 rotation matrix R, a vertical 3x1 translation vector t, and a last row of [0, 0, 0, 1]:
[ R | t ]
[ 0 | 1 ]
T is the "displacement", "isometry" or "rototranslation" of the world with the camera as reference system. If I'm not mistaken, in (X, Y, Z), X points to the right, Y points down, and Z points forward. About map points: CreateInitialMap receives the map point coordinates and assigns them in a for loop when creating them with new MapPoint(...). You must rotate these coordinates accordingly to match the IMU before creation, or correct them afterwards with MapPoint::SetWorldPos. Very sorry if it sounds confusing.
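A minimal sketch of that change in Tracking::CreateInitialMapMonocular(), assuming Rimu (3x3, CV_32F) is the camera-to-world rotation reported by the IMU at the first frame; before the change, pKFini has the identity pose and pKFcur holds the relative transform between the two initial frames:

```cpp
// Assumed: Rimu is a 3x3 CV_32F camera-to-world rotation from the IMU for the
// first frame. pKFini starts at the identity; pKFcur holds the relative pose.
cv::Mat TcwIni = cv::Mat::eye(4, 4, CV_32F);
cv::Mat Rcw = Rimu.t();                              // world -> camera rotation of pKFini
Rcw.copyTo(TcwIni.rowRange(0, 3).colRange(0, 3));

cv::Mat TcwCur = pKFcur->GetPose() * TcwIni;         // preserves the rotation between keyframes

pKFini->SetPose(TcwIni);
pKFcur->SetPose(TcwCur);
mCurrentFrame.SetPose(TcwCur);                       // keep the tracking frame consistent

// In the map point creation loop, rotate each triangulated point (expressed in
// the old world frame, i.e. the first camera frame) into the new world frame:
cv::Mat worldPos = Rimu * cv::Mat(mvIniP3D[i]);      // instead of cv::Mat(mvIniP3D[i])
MapPoint* pMP = new MapPoint(worldPos, pKFcur, mpMap);
```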
@AlejandroSilvestri , I would like to know more about your naming conventions regarding these matrices (I couldn't find any hints about that in the code). I understand what T, R and t are. Thanks in advance.
@diesbot : I believe the two suffixes "cw" and "wc" correspond to the coordinate system transformations, world-to-camera and camera-to-world respectively (read the suffix as destination from source). Using the classic pinhole camera model and representing the camera as a point, the absolute camera pose can simply be represented by the rotation Rwc and translation twc from camera to world coordinates. The two forms are related by Rcw = Rwc' and tcw = -Rwc' twc, with ' denoting the transpose, so the camera center in world coordinates is twc = -Rcw' tcw.
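For quick reference, a sketch of how the two forms relate in code, assuming a 4x4 CV_32F pose Tcw as stored by Frame/KeyFrame:

```cpp
// Assumed: Tcw is a 4x4 CV_32F pose (world -> camera) as stored by Frame/KeyFrame.
cv::Mat Rcw = Tcw.rowRange(0, 3).colRange(0, 3);   // world -> camera rotation
cv::Mat tcw = Tcw.rowRange(0, 3).col(3);           // world -> camera translation
cv::Mat Rwc = Rcw.t();                             // camera orientation in the world
cv::Mat Ow  = -Rwc * tcw;                          // camera center in world coordinates (twc)
```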
@AlejandroSilvestri , sorry to bother you again. I have an IMU running alongside stereo SLAM, to give an initial rotation value and add stability. BUT sadly, it has lessened stability instead! Now the returned camera position jumps around a lot, suddenly popping a metre to the side and other such things. The initial points and position look good, but as soon as I move the camera, I see jumps, rotation flipping, and weirdness. If you have a moment, could you have a look at what I have done and tell me where I am going wrong? In Tracking.cc:
Added:
Added:
Please enlighten me: are you trying to set an initial rotation, or are you trying to apply many rotations while mapping? To set an initial rotation (the main topic of this issue), you only need to set it once, during map initialization, or, more difficult, later on at the then-current keyframe. Essentially you rotate the whole map (keyframes and map points). Because monocular visual SLAM has no reference (no origin, no scale, no rotation), you can arbitrarily assign one. This is very different from applying the IMU rotation to many (or every) keyframes, because then you need a way to fuse orb-slam2's visual measurement of rotation with the IMU rotation, and you'll need some filter like a Kalman filter, which I believe would be a huge amount of work, like a new paper. The visual measurement of rotation is far more accurate than the IMU's. In the long run, the IMU can compensate drifting (with a filter).
Hi, thank you for your reply. I just want to set an initial rotation, so that the rotation of the point cloud, and the translation vector of the camera pose, is aligned to the IMU rotation (this is with stereo ORB-SLAM2). If I set it at initialization, will all the points created afterwards, and the keyframes, keep this rotation? Is the way I have done it above correct?
I'm not familiar with the stereo code, but I believe it may be the same thing. You must update the mCurrentFrame pose too. And I think it may be better to ensure mInitialFrame's and mLastFrame's poses are consistent. If you do this in initialization, there is no need to do it again in CreateNewKeyFrame; doing both would probably double your rotation.
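A sketch of those consistency updates, assuming TcwIni and TcwCur are the corrected poses computed during initialization:

```cpp
// Assumed: TcwIni and TcwCur are the corrected initial poses from the step above.
mInitialFrame.SetPose(TcwIni);
mLastFrame.SetPose(TcwCur);
mCurrentFrame.SetPose(TcwCur);
// Do this once, during initialization only; repeating it in CreateNewKeyFrame
// would apply the IMU rotation twice.
```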
Hi everyone, I am facing exactly the same issue as @antithing. I have made the modifications, especially concerning the … Any idea?
Hi, I got this working fine. You need to make the adjustments in my code above, ONLY at that point.
Thank you very much antithing 👍. |
@AlejandroSilvestri I am still unclear about how this works.
Referring to your remark, "pKFcur should have the IMU rotation, and pKFini should preserve the rotation between both keyframes. You must do some matrix products here.", should it not be the opposite? That is, pKFini should have the IMU rotation and pKFcur should preserve the rotation between both keyframes? In general, from what I understand, if Twc, i.e. camera-to-world, for the initial pose is known, then Tcw_initial can be calculated and assigned to pKFini. Then in Line 630 in f2e6f51
mCurrentFrame.SetPose(Tcw_new);
where Tcw_new is computed using some matrix multiplications. I may be missing something very fundamental here. Look forward to your remarks.
You are right, my mistake; I already edited my former answer. Thank you.
@AlejandroSilvestri @hemangchawla The following is my approach: Also, after doing this in Tracking::MonocularInitialization(), is the point cloud handling in CreateInitialMap still needed? Thank you!
You want to SetPose(Tcg), where Tcg = Tcw * Twg. So your IMU rotation must be in the form of Twg; if you have Tgw, you must invert it. Also watch the axes orientation:
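A sketch of that composition, assuming 4x4 CV_32F homogeneous transforms and that "g" denotes the IMU/gravity-aligned frame:

```cpp
// Assumed: Tcw and Tgw are 4x4 CV_32F homogeneous transforms; pKF is a KeyFrame*.
// If the IMU provides Tgw, invert the rigid transform to obtain Twg:
cv::Mat Rgw = Tgw.rowRange(0, 3).colRange(0, 3);
cv::Mat tgw = Tgw.rowRange(0, 3).col(3);
cv::Mat Twg = cv::Mat::eye(4, 4, CV_32F);
cv::Mat Rwg = Rgw.t();
Rwg.copyTo(Twg.rowRange(0, 3).colRange(0, 3));
cv::Mat twg = -Rwg * tgw;
twg.copyTo(Twg.rowRange(0, 3).col(3));

// Compose g -> w -> c and set the pose:
cv::Mat Tcg = Tcw * Twg;
pKF->SetPose(Tcg);
```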
@AlejandroSilvestri Thank you for your reply!
"Very sorry if it sounds confusing." Thank you again, and sorry for bothering you.
Hi, I have many questions about ORB-SLAM2. One of them: in "pNewMP->AddObservation(pKFini, i);", the "i" changes with the loop, but "pKFini" does not, so "mObservations[pKF] = idx" only ever has pKFini as a key. Do you think this understanding is right? Also, in the stereo case, the program first enters TrackReferenceKeyFrame/SearchByBoW; where does the program calculate "pKF->mFeatVec"?
I'm so sorry I missed your last question a year ago. I can see you are rotating the initial map points. That's fine if imu_rot is coherent with the orb-slam2 space (the axis order). You know, there are 48 ways to define a 3D rotation, mixing axes and directions of rotation, and you must use the same one. It also appears to me you are missing the keyframes' rotation.
@AlejandroSilvestri Maybe you can look at my questions. Please help me. My English is very poor; I hope you can understand me.
@AlejandroSilvestri This question was meant for you, but I tagged the wrong person, because I am a newcomer.
Hi @mathuse , Each MapPoint has observations, which are the keyframes observing it. The initial map consists of two keyframes and some map points, each of them with two observations. The loop you mentioned adds pKFini as an observation to each map point. During mapping, new keyframes are created and map points receive new observations. mFeatVec is calculated when a keyframe is created, and sometimes on the current frame, when a relocalization takes place.
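For reference, a simplified sketch of how MapPoint stores observations (the real class also keeps a mutex and an observation counter); each new map point created in the initialization loop gets its own mObservations map, with pKFini and pKFcur as keys and the matching keypoint index as value:

```cpp
#include <cstddef>
#include <map>

class KeyFrame;   // ORB_SLAM2 keyframe (forward declaration)

class MapPoint {
public:
    void AddObservation(KeyFrame* pKF, std::size_t idx) {
        mObservations[pKF] = idx;   // observing keyframe -> index of its matching keypoint
    }
private:
    std::map<KeyFrame*, std::size_t> mObservations;
};
```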
@AlejandroSilvestri Thank you very much. I think I need some time to understand it. Maybe I will have more questions sooner or later. Thanks again.
@AlejandroSilvestri Here are four questions I need to consult you about:
It would be better to refer to the code with permalinks to code lines. Guessing:
1 - ComputeBoW is not cheap, it is a heavy process, often delayed until it is needed. Frames usually don't need BoW; that's why it is not computed before initialization.
3 - Why would you delete new map points? You need them, they are part of the map you are building. The map will grow as you explore new areas. Redundant map points and keyframes are eliminated in the culling step.
4 - Visual SLAM, like homography, epipolar methods and any geometric calculation, benefits from more features only if they are evenly distributed over the image. There is no benefit in many features packed together in one place. FAST doesn't guarantee a distribution; its parameter is just a threshold. The grid limits the number of features in each cell, and repeats FAST with a more tolerant threshold when there are too few features in that cell. The other benefit is that you get the features already distributed into cells.
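A rough illustration of that grid idea (not the actual ORBextractor code), assuming a grayscale cv::Mat image and the hypothetical thresholds iniThFAST/minThFAST:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Run FAST per grid cell; retry with a lower threshold when a cell yields no
// corners, so features end up spread over the whole image.
std::vector<cv::KeyPoint> ExtractDistributed(const cv::Mat& image, int cellSize,
                                             int iniThFAST, int minThFAST)
{
    std::vector<cv::KeyPoint> all;
    for (int y = 0; y + cellSize <= image.rows; y += cellSize) {
        for (int x = 0; x + cellSize <= image.cols; x += cellSize) {
            cv::Rect cell(x, y, cellSize, cellSize);
            std::vector<cv::KeyPoint> kps;
            cv::FAST(image(cell), kps, iniThFAST, true);
            if (kps.empty())                          // too few corners: relax the threshold
                cv::FAST(image(cell), kps, minThFAST, true);
            for (cv::KeyPoint& kp : kps) {            // shift back to full-image coordinates
                kp.pt.x += x;
                kp.pt.y += y;
                all.push_back(kp);
            }
        }
    }
    return all;
}
```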
@AlejandroSilvestri thanks for your reply. |
Can you show me how you get your initial pose from IMU data (a_t, w_t, timestamp)?
I am looking at integrating an IMU into the system. I don't need it tightly integrated, I just want to pass a starting rotation, so that the initial rotation matches the IMU when the map/tracking is initialized.
If you have time, could you please point me in the right direction to start this?
Thank you again!