
Support for monocular SLAM #23

Open
alperv opened this issue Jul 26, 2012 · 14 comments

@alperv

alperv commented Jul 26, 2012

Hi,

Is there a roadmap for monocular SLAM (with a normal camera, not a Kinect)?
Or is it relatively easy to use ScaViSLAM for a monocular SLAM scenario, with minimal code changes?

If so, what parts essentially need to change? I can try to work on such a branch.

@mees

mees commented Feb 8, 2013

I would also be interested in this topic.

@NH89

NH89 commented Mar 19, 2013

alperaydemir & mees,
You may be interested in Hauke's earlier SLAM software library http://openslam.org/robotvision.html . It does sparse mono-slam and loop closure.
Nick

@alperv
Author

alperv commented Mar 19, 2013

Thanks, but I've tried that previously. The test cases segfaulted in some
matrix multiplication code right off the bat, which was not trivial to fix.
So I moved on and did my own thing for this. It would still be nice
to have it stable and running in general.

Alper


@mees

mees commented Mar 19, 2013

@AlperAydemir have you implemented support for mono in ScaViSLAM? If yes, could you share it please?

@NH89

NH89 commented Mar 23, 2013

If there's a group of us who want to work on making a GPL'd mono-SLAM, I'm definitely interested in contributing. I'm particularly interested in developing a dense SLAM that can handle movement and deforming objects by adding time as a highly smoothed fourth dimension.

I recommend this workshop that Prof Andy Davison is co-presenting at IEEE ICRA 2013 in Germany this May:
http://www.tu-chemnitz.de/etit/proaut/ICRAWorkshopFactorGraphs/ICRA_Workshop_on_Robust_and_Multimodal_Inference_in_Factor_Graphs/Home.html

For those interested in how Mono-SLAM has developed over the last several years, this is a nice overview. http://videolectures.net/bmvc2012_davison_scene_perception/

The TU Graz variational optic flow algorithm mentioned in the lecture above is based on Thomas Pock's thesis and is available here
http://gpu4vision.icg.tugraz.at/

@bradley-newman

I'd love to use MonoSLAM in my research. Is there any timetable for release? Will it include any DTAM (Dense Tracking and Mapping in Real-Time) functionality?

@NH89

NH89 commented Mar 24, 2013

Q: "Is there any timetable for release?"
A: Not immediately.
Fusion of visual and tactile SLAM will be central to my post-doc (hopefully) but I still have at least 18 months to go on my PhD before that. For now I can share what I've learnt so far.

Why do you want monoSLAM? - that would be like wearing a patch over one eye...

Probably the quickest way to make an online mono-SLAM would be to read into the code of Robotvision and ScaViSLAM, then reimplement the core of the Robotvision algorithm as a module of ScaViSLAM. (I did have Robotvision working on Ubuntu 10.04, but now get compile errors on Ubuntu 12.10, and haven't had time to find the cause.)

To learn how modern SLAM algorithms work, read the g2o.pdf in the documentation of the g2o library, which explains what hypergraphs are and how to use them to implement SLAM and 'bundle adjustment'. Also look at the tutorials at http://www.informatik.uni-bremen.de/agebv/en/Research and the links to conference papers.
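To get a feel for what that tutorial builds up to, here is a minimal sketch of graph-based SLAM in the simplest possible setting: 1-D poses with linear relative measurements (plain NumPy, not g2o's actual API; the edge values below are made up for illustration). Each graph edge becomes one row of a least-squares system, and solving SLAM means solving the resulting sparse normal equations.

```python
import numpy as np

def solve_pose_graph(edges, n_poses):
    """edges: list of (i, j, z) meaning 'pose j is measured to be z ahead
    of pose i'. Returns least-squares pose estimates with pose 0 anchored
    at the origin (this removes the gauge freedom of the graph)."""
    rows, rhs = [], []
    for i, j, z in edges:
        row = np.zeros(n_poses)
        row[i], row[j] = -1.0, 1.0   # residual: (x_j - x_i) - z
        rows.append(row)
        rhs.append(z)
    anchor = np.zeros(n_poses)
    anchor[0] = 1.0                   # prior factor: x_0 = 0
    rows.append(anchor)
    rhs.append(0.0)
    A = np.vstack(rows)
    # normal equations H x = b; H is the sparse 'Hessian' the g2o tutorial
    # assembles from the hypergraph's edges
    H = A.T @ A
    b = A.T @ np.asarray(rhs)
    return np.linalg.solve(H, b)

# odometry says each step moves +1, but a loop-closure edge says pose 3
# is only 2.4 ahead of pose 0, so the optimizer spreads the error out
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.4)]
print(solve_pose_graph(edges, 4))  # → [0.  0.85 1.7  2.55]
```

In the real (2-D/3-D, nonlinear) case the same structure appears inside each Gauss-Newton iteration, with the ±1 entries replaced by Jacobian blocks.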

You could also look at VSLAM (not the same as vslam on ROS) http://www.informatik.uni-bremen.de/agebv/en/pub/hertzbergicra11 and the various SLAM codes at http://openslam.org/

Re DTAM-like functionality
That would be the first step of what I need to do. (Note that ScaViSLAM already has a dense tracking module.)

If all you need is a real-time depth map with good resolution, then block-matching stereo vision may suffice; see http://scholar.lib.vt.edu/theses/available/etd-12232009-222118/unrestricted/Short_NJ_T_2009.pdf or https://code.google.com/p/tjpstereovision/ .
You might improve the object segmentation if you use a total variation optic flow algorithm and/or macro-pixel segmentation such as the TU Graz library in my previous post.
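To show the block-matching idea behind those links, here is a toy sum-of-absolute-differences matcher on synthetic data (a hypothetical sketch, not the tjpstereovision code; real implementations add prefiltering, subpixel refinement and speckle removal):

```python
import numpy as np

def block_match(left, right, block=3, max_disp=6):
    """For each pixel of the left image, slide a block-sized window along
    the same row of the right image and pick the disparity with the lowest
    sum-of-absolute-differences (SAD) cost. Borders are left at 0."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            costs = []
            for d in range(max_disp + 1):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1]
                costs.append(np.abs(patch - cand).sum())
            disp[y, x] = int(np.argmin(costs))
    return disp

# synthetic stereo pair: random texture shifted 3 px between the views
rng = np.random.default_rng(0)
right = rng.integers(0, 256, size=(12, 24), dtype=np.uint8)
left = np.roll(right, 3, axis=1)
disp = block_match(left, right)
print(disp[6, 12])  # recovers the synthetic 3 px shift
```

With calibrated cameras, depth is then baseline × focal length / disparity per pixel.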

@bradley-newman

Thanks for the help. Are you actively working on ScaViSLAM or just on related research?

"Why do you want monoSLAM? - that would be like wearing a patch over one eye..."

I'm working with an existing monoscopic data set. I don't need real time mapping but I would want to be able to save out a dense point cloud with camera positions and ideally a RGB texture map.

@NH89

NH89 commented Mar 24, 2013

I'm just learning myself. I have visited Prof Andy Davison's lab, but that was after Hauke had finished there. My current work is making a robotic hand with good dynamics and tactile sensitivity. Later I need to do tactile SLAM with the hand, fused with visual SLAM.

@mees

mees commented Mar 24, 2013

I understand that Hauke just used the PTAM tracker as a frontend to implement monocular SLAM in ScaViSLAM.

@bpwiselybabu

Thanks NH89, the links and your comments are very valuable. I have also been working on mono-SLAM and stereo SLAM. Mono-SLAM because one of the robots I work on has a weight restriction, so a single camera comes straight from the requirements; I am using a webcam for it, which poses an additional interesting problem related to the rolling shutter.

@NH89

NH89 commented Jun 19, 2013

I would strongly recommend the two tutorials given at the "Robust and Multimodal Inference in Factor Graphs" workshop at the IEEE ICRA 2013 conference. The papers are currently online at

http://www.tu-chemnitz.de/etit/proaut/ICRAWorkshopFactorGraphs/ICRA_Workshop_on_Robust_and_Multimodal_Inference_in_Factor_Graphs/Program.html

http://www.cc.gatech.edu/~dellaert/pubs/2013-05-10-ICRA-Tutorial.pdf

http://www.tu-chemnitz.de/etit/proaut/ICRAWorkshopFactorGraphs/ICRA_Workshop_on_Robust_and_Multimodal_Inference_in_Factor_Graphs/Program_files/1%20-%20RRR.pdf

Why factor graphs are so useful:
Factor graphs and Bayes graphs provide a clear visualization of the relationships between all the pieces of information being weighed in your SLAM algorithm. They also provide a rigorous and programmable way to rearrange and reduce the elements of the Hessian and Hamiltonian matrices, the aim being to produce a sparse upper triangular matrix that is quick to solve.
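To make that "sparse upper triangular matrix that is quick to solve" concrete, here is a toy sketch with a made-up 4x4 information matrix (plain NumPy, not any of the workshop code): eliminating the variables gives an upper-triangular factor R with H = RᵀR, and the solve then reduces to two cheap substitution passes.

```python
import numpy as np

# made-up tridiagonal information matrix and vector, as might come from a
# small chain of factors; real SLAM problems are much bigger but similarly sparse
H = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
b = np.array([1., 0., 0., 1.])

# Cholesky factorization: NumPy returns lower-triangular L with H = L L^T,
# so R = L^T is the upper-triangular factor the workshop papers talk about
L = np.linalg.cholesky(H)
R = L.T

# forward substitution L y = b, then back substitution R x = y:
# each pass touches every nonzero once, hence 'quick to solve'
n = len(b)
y = np.zeros(n)
for i in range(n):
    y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
x = np.zeros(n)
for i in reversed(range(n)):
    x[i] = (y[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]

print(x)  # agrees with the direct solve np.linalg.solve(H, b)
```

The elimination ordering the tutorials discuss matters because a bad ordering fills R with extra nonzeros and destroys exactly this sparsity.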

Here are pages with links to two of the papers from the workshop, (and other related work)

http://www.cc.gatech.edu/~dellaert/FrankDellaert/Frank_Dellaert/Frank_Dellaert.html

http://bnp.mit.edu/?page_id=12

Once you've worked through the tutorial papers, I reckon you'd have a good idea how to tackle the rolling shutter and other "Robust and Multimodal Inference" problems.

(disclaimer - I'm not the author of these papers, those guys are much smarter than me :-)

@bpwiselybabu

Thanks again NH89, will go through it.

I know there has been a shift towards the graph-based approach, but for the
monocular problem we are still using a filter-based approach. I have an idea
about how to incorporate the rolling-shutter correction and estimation into
my filter. It is my MS thesis.

For the stereo system we are working on the graph-based approach. It does
not suffer from rolling shutter. In it we are trying to integrate an IMU.
Do you have any comments or advice on how to add IMU observations as a
nonlinear constraint and optimize them?

I saw some work by Prof. Siegwart and Kummel.

