[Experimental] Add a new SfM pipeline based on stellar reconstruction #2070
…Engine
- src/openMVG/sfm/pipelines/stellar/sfm_stellar_engine.cpp
- src/openMVG/sfm/pipelines/stellar/sfm_stellar_engine.hpp
- Each 2-uplet of the triplets is triangulated with IDW 2-view triangulation instead of the N-view algebraic solver:
  - the 2-view solver checks cheirality, so it should return fewer gross outliers;
  - the triangulation should be faster;
  - according to the paper ("Triangulation: Why Optimize?"), IDW has better 3D estimation accuracy than other triangulation solvers. In particular, for low-parallax configurations IDW estimates better than DLT solvers, so depth estimation should be better.
- Additionally, use only inliers from rotation averaging (RA) for translation estimation.
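To illustrate why a 2-view solver with a cheirality check rejects gross outliers, here is a minimal sketch of closest-point (midpoint) two-view triangulation. This is an illustrative stand-in, not OpenMVG's actual IDW implementation; the `bearing` helper and all names are hypothetical:

```python
import numpy as np

def bearing(c, p):
    """Pinhole bearing of point p as observed by a camera at c looking along +Z."""
    z = p[2] - c[2]
    return np.array([(p[0] - c[0]) / z, (p[1] - c[1]) / z, 1.0])

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of the rays p = c + lam * d.

    Cheirality holds when both ray parameters are positive, i.e. the
    triangulated point lies in front of both cameras. Sketch only,
    not OpenMVG's IDW solver.
    """
    b = c2 - c1
    d11, d12, d22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = d11 * d22 - d12 * d12          # ~0 for near-parallel rays
    lam1 = (d22 * (d1 @ b) - d12 * (d2 @ b)) / denom
    lam2 = (d12 * (d1 @ b) - d11 * (d2 @ b)) / denom
    point = 0.5 * ((c1 + lam1 * d1) + (c2 + lam2 * d2))
    return point, bool(lam1 > 0 and lam2 > 0)

# Two cameras one unit apart, both looking along +Z.
c1, c2 = np.zeros(3), np.array([1.0, 0.0, 0.0])

p = np.array([0.5, 0.2, 5.0])              # point in front of both cameras
pt, ok = triangulate_midpoint(c1, bearing(c1, p), c2, bearing(c2, p))
print(np.allclose(pt, p), ok)              # True True

q = np.array([0.0, 0.0, -5.0])             # point behind both cameras
_, ok_behind = triangulate_midpoint(c1, bearing(c1, q), c2, bearing(c2, q))
print(ok_behind)                           # False: fails the cheirality check
```

With noise-free observations the midpoint recovers the point exactly; a point behind the cameras yields negative ray depths and is flagged, which is the mechanism that filters gross outliers before bundle adjustment.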
Improve (a bit) stellar SfM
Conflicts: src/software/SfM/CMakeLists.txt
- Since we are using tracks over 3 views, remove unnecessary checks
- Use less restrictive parameters for BA
Alright, I've had a chance to run comparison recons on the same dataset. As backstory, my scanning setup is essentially the one I described way back in 2020: five cameras which are fixed relative to each other, but which travel a grid path in the XY plane above a mostly planar object. The dataset is 780 images (5MP each). My motion is motor controlled, so I have estimates for the extrinsics for each image, which haven't ever been good enough to triangulate on their own. Instead, I initialize…

**Reconstruction commands**
**Subjective Results**

Visually, the reconstructed camera positions are fairly similar. Since I don't have ground truth for this dataset, it's hard to know how close they are to expected, but they seem reasonable relative to the detected scene. A few notes:
**Empirical Results**

To try to put some numbers to this, I've aggregated the results from
I think the takeaways from this table are:
Thank you @csparker247 for this detailed test and report. Appreciate the level of detail! Just a quick question:
Nit: I would just set 1 intrinsic per camera (since each camera can be a little different)

Bowing:
Floating points:
Scale:
I'm glad that you were able to test the new pipeline without issue and that you noticed the scene seems to have more 3D points.
Thanks @pmoulon!
All five cameras share the same body and four have the same type of fixed-focal-length lens. My assumption was that the intrinsic variations between lenses of the same make and model would not make a significant difference in practice. I have admittedly been very lazy about precalibrating my cameras recently, though, so I suppose I should measure this and make sure I'm still initializing
This tradeoff makes sense. So I should stop being lazy and calibrate my lenses ahead of time 😄. I'll try that and re-run stellar. I will say that I ran a stellar + robust job yesterday and it doesn't appear to improve the bowing. However...
Adding a robust step did remove the worst of the outlier features, but quite a few still fall behind the imaging plane. Obviously not super important to the discussion here, but I thought I should at least mention it:
Good to know! I'll keep testing this feature as it develops and let you know how it goes!
Command line:
```shell
openMVG_main_SfM -i $matchesDir\sfm_data.json -m $matchesDir -o $reconstructionDir -s STELLAR -g <#>

openMVG_main_SfM -i $matchesDir\sfm_data.json -m $matchesDir -o $reconstructionDir -s GLOBAL -M $matchesDir\matches.e.bin

openMVG_main_SfM -i $matchesDir\sfm_data.json -m $matchesDir -o $reconstructionDir -s INCREMENTALV2

openMVG_main_SfM -i $matchesDir\sfm_data.json -m $matchesDir -o $reconstructionDir -s GLOBAL -M $matchesDir\matches.e.bin
openMVG_main_SfM -i $reconstructionDir\sfm_data.bin -m $matchesDir -o $reconstructionDir -s INCREMENTALV2 -S EXISTING_POSE

openMVG_main_SfM -i $matchesDir\sfm_data.json -m $matchesDir -o $reconstructionDir -s STELLAR -g 2
openMVG_main_SfM -i $reconstructionDir\sfm_data.bin -m $matchesDir -o $reconstructionDir -s INCREMENTALV2 -S EXISTING_POSE
```
@4CJ7T Thank you for testing and comparing the different pipelines, it is interesting to see that… Cannot wait to see if you will test on more scenes ;-)
History and background:

Having used the stellar / `n-uplets` pod (one central image + n satellites) as a convenient way to bootstrap an Incremental SfM pipeline from N images instead of 2, we are introducing here an SfM pipeline that uses `n-uplets` as the input of a Global SfM pipeline.

What is this Stellar SfM pipeline?

We see this pipeline as a natural evolution of OpenMVG's research progress on SfM due to the following facts:

Pros:

- `n-uplets` or `stellar pods` allow us to refine local motions and make relative motions more consistent, resulting in more robust Rotation and Translation averaging results.
- It keeps the idea of the `skeletal graph`, but adapted to the `n-uplets / stellar` configuration by selecting X Maximum Spanning trees (forcing local star configurations).

Cons:
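To make the spanning-tree selection concrete, here is a hypothetical sketch (not OpenMVG code) of extracting one maximum spanning tree from a view graph whose edges are weighted by, say, putative match counts. A Kruskal-style pass over edges sorted by descending weight keeps the strongest relative motions while still spanning all views:

```python
# Illustrative sketch: maximum spanning tree over a weighted view graph.
# All names are hypothetical; OpenMVG's actual selection differs in detail.

def max_spanning_tree(num_views, edges):
    """edges: list of (weight, i, j) view pairs. Returns the kept (i, j) pairs."""
    parent = list(range(num_views))  # union-find forest over views

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    kept = []
    for w, i, j in sorted(edges, reverse=True):  # strongest edges first
        ri, rj = find(i), find(j)
        if ri != rj:                 # keep edge only if it links two components
            parent[ri] = rj
            kept.append((i, j))
    return kept

# Toy graph: 4 views, weights standing in for pairwise inlier counts.
edges = [(120, 0, 1), (90, 1, 2), (30, 0, 2), (80, 2, 3), (10, 1, 3)]
print(max_spanning_tree(4, edges))  # -> [(0, 1), (1, 2), (2, 3)]
```

Running X such passes (removing kept edges between passes) would yield the redundant but sparse graph the description alludes to, forcing star-shaped local configurations around well-connected views.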
You can find below a short summary comparison of the various reconstruction algorithms:
How to test it?
```shell
$ openMVG_main_SfM -i <sfm_data.json> -m <match_path> -o <output_reconstruction_folder> --sfm_engine STELLAR
```
How can you provide us with feedback?
Thank you to @rjanvier for his contribution to this development and for helping stabilize the stellar reconstruction algorithms.