Issues with Pose Priors #1756
Hello @csparker247. Your pose workflow is correct; it is the pipeline we follow for importing external datasets (see here). Here is what I would test.

To check whether your pose prior error is larger than what you expect, I would do the following:

Let me know if this guidance is useful.
Thanks very much! This gives me a good number of tests to work on. I just got more accurate camera mount measurements from the engineer, and my old ones were off by a number of units, so I know that was at least contributing to my poor reconstructions. I'll be back in touch once I've had a chance to test these things and see what I've found.
Thank you for your detailed feedback. It seems that the scene is not easy to match (a lot of redundant information in the checkerboard). Did you try with a textured object (like a picture or a newspaper)? It seems that we are not very robust to this logo repetition in the incremental case, since it must produce a wrong camera initialization from the start. Regarding the priors, it would be nice if you could share your data and let us take a look.
Alright, another update. Things are slowly starting to make more sense. I made a test dataset of the simplest case: the five cameras pointed at the same spot on the ground plane. The chosen spot has a prominent orientable feature and no repetitive background features (like the grid pattern). You can download it from here. This includes the images and two SfM files. Using this dataset, I've been able to run some tests. All tests were run with SIFT features on HIGH. If I run:
then the resulting reconstruction looks correct, and I get camera poses that are roughly what I expected relative to each other. What is not expected is that my camera orientations are all rotated 180 deg around the optical axis. The top face for all of these frustums should actually be the face directly opposite the one that's colored red. So that's odd. I've been having trouble getting CloudCompare or Meshlab to do registration on my laptop, but I'll hopefully get that ironed out soon, and then we can see just how far off the SfM poses are from my pose priors.

I also tested running the following commands with a pose-prior SfM file:

```shell
# SfM has priors, but don't init with them
openMVG_main_IncrementalSfM2 -i sfm_priors.json -m matches/ -o recon/ -M matches/matches.f.bin -S MAX_PAIR

# SfM has priors and do init with them
openMVG_main_IncrementalSfM2 -i sfm_priors.json -m matches/ -o recon/ -M matches/matches.f.bin -P -S EXISTING_POSE

# Direct triangulation
openMVG_main_ComputeStructureFromKnownPoses -i sfm_priors.json -m matches/ -o recon/sfm_data_structured.bin -f matches/matches.f.bin -d
```

The first produces a decent reconstruction, and the cameras are in the correct cross pattern, but the center camera is closer to the ground plane than it should be. The second puts the cameras in incorrect positions and fails to give a decent reconstruction. For the final command, the feature points are more coherent than they were with my previous dataset, but they are still more of a clump than a surface, and any further reconstruction with MVS fails. As before, adding bundle adjustment to this command results in an empty scene. So, all of that reported, here are some follow-up points:
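If the 180 deg flip around the optical axis turns out to be real, it can be compensated before import by pre-multiplying each prior rotation by a half-turn about the camera's z axis. A minimal sketch, assuming openMVG's world-to-camera rotation convention (the helper name is mine, not an openMVG API):

```python
import numpy as np

# 180-degree rotation about the camera's optical (z) axis.
R_FLIP = np.diag([-1.0, -1.0, 1.0])

def flip_about_optical_axis(R_world_to_cam: np.ndarray) -> np.ndarray:
    """Pre-multiply so the half-turn is applied in the camera frame."""
    return R_FLIP @ R_world_to_cam

# Example: a camera with identity orientation.
R = np.eye(3)
R_fixed = flip_about_optical_axis(R)
# The result is still a proper rotation (determinant +1).
assert np.isclose(np.linalg.det(R_fixed), 1.0)
```

Applying this to every prior before writing sfm_data.json would flip all frustums consistently, so relative poses are unchanged.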
Yes, it's become very clear that my repetitive backdrop is causing problems. I'm going to implement some sort of random dot pattern that I can put on my backdrop to try to reduce bad matches. I don't have a good answer for the color checker or scale marker (i.e. ruler). I will likely need to include one or both. I suppose I can always make a "ruler" from two coded dots of known distance.
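A random dot backdrop like the one described can be generated with a few lines of NumPy. This is a sketch; the dot size and density would need tuning for the camera resolution and working distance:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dot_pattern(width=1024, height=768, n_dots=2000, radius=4):
    """White canvas (uint8 grayscale) stamped with random black dots."""
    img = np.full((height, width), 255, dtype=np.uint8)
    ys = rng.integers(radius, height - radius, n_dots)
    xs = rng.integers(radius, width - radius, n_dots)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy ** 2 + xx ** 2) <= radius ** 2
    for y, x in zip(ys, xs):
        patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
        patch[disk] = 0  # stamp one dot
    return img

pattern = random_dot_pattern()
# Save as a binary PGM for printing (header is width height maxval):
with open("dots.pgm", "wb") as f:
    f.write(b"P5\n1024 768\n255\n" + pattern.tobytes())
```

Because the dots are random rather than periodic, feature matches on the backdrop are far less likely to be ambiguous than on a grid.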
There's the small test dataset I mentioned above. Additionally, the full image dataset can be downloaded from here, and the corresponding SfM files can be downloaded here. There's a lot happening in this dataset, so feel free to ask me questions about any of it. The most important thing to note is that the cameras are not pointed at the same spot on the ground plane. For example, the south camera frustum crosses the center camera's frustum and captures the area just to the north of the center camera's FOV. This has been very confusing to work with and makes frustum filtering pretty much useless. I'm going to change it on the hardware soon.
Oh, I also wanted to ask about rigidly coupled cameras (#580). This is almost exactly the use case I have. I'm betting that since the branch is multiple years old, it would probably need some significant work to get running on top of a newer develop. Any thoughts on whether this feature would be likely to benefit my reconstructions?
Hello @csparker247. Happy to see that you were able to make some progress. If you want to compare the camera pose coordinates of two SfM scenes, you can do it with this function: https://github.com/openMVG/openMVG/blob/develop/src/software/SfM/main_evalQuality.cpp#L160 (just load your two sfm_data scenes and add the parameters).

You are right, the branch #580 was a prototype feature branch, and I think it would take quite a lot of work to merge develop into it.
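If loading the two scenes in C++ is inconvenient, the core of that comparison (rotation angle and camera-center distance per pose pair) can be sketched in a few lines of NumPy. This is a hypothetical helper, not openMVG code:

```python
import numpy as np

def pose_errors(R_a, C_a, R_b, C_b):
    """Angular error (degrees) between two rotations and Euclidean
    distance between the corresponding camera centers."""
    R_rel = R_a @ R_b.T
    # Recover the rotation angle from the trace of the relative rotation.
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    center_dist = np.linalg.norm(np.asarray(C_a) - np.asarray(C_b))
    return angle_deg, center_dist

# Identical poses give zero error on both measures:
angle, dist = pose_errors(np.eye(3), [0, 0, 0], np.eye(3), [0, 0, 0])
```

Running this over all pose pairs (after aligning the two scenes with a similarity transform) gives the residuals to compare against the expected rig accuracy.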
Thanks @pmoulon. I think my pose import is incorrect. I've rotated all of my cameras to match the "upside down" frustum, and Incremental2 with imported poses is now working on a small test case. I think I might have been confused because I was coloring the wrong face in the export frustums code. Can you test the frustum color export with whatever known poses you might have, even the identity pose? I think I might be coloring the incorrect face as the top of the frustum...

EDIT: Well, that was disappointing. False alarm on the reconstructions working with priors. I was setting my script's flag to use priors, but then overriding the SfM initializer with MAX_PAIR, so the poses weren't actually getting used. Regardless, I'm still not confident in the frustum coloring, so that question remains.
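One way to reason about which frustum face should be "up" is to back-project the image corners of an identity-pose pinhole camera: with image y increasing downward, the rays through the top image edge have negative camera-frame y. A sketch with made-up intrinsics:

```python
import numpy as np

# Hypothetical pinhole intrinsics.
w, h, f = 640, 480, 500.0
K = np.array([[f, 0.0, w / 2],
              [0.0, f, h / 2],
              [0.0, 0.0, 1.0]])
K_inv = np.linalg.inv(K)

# Image corners, ordered: top-left, top-right, bottom-right, bottom-left.
corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], float)
rays = (K_inv @ corners.T).T  # bearing directions in the camera frame

# For an identity pose, the frustum's top face is spanned by the rays
# through the top image edge (image y == 0), which point toward
# negative camera-frame y, since camera y points down in the image.
assert (rays[:2, 1] < 0).all()
```

Comparing this against which face the export code actually colors would confirm or rule out the coloring bug.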
Hello @csparker247. I don't yet understand why Incremental2 and the bundle adjustment reject your points; we would have to take a look at the residual values and check what is happening. In the past, we were able to import multiple datasets successfully, but yes, we could add a wiki page about the process:
Tested, and it's definitely a bug. I've submitted a patch here: #1773
I now have a dataset that I'm confident is a good test set. I've run it through a full OpenMVG + OpenMVS pipeline with various settings (global, SfM2, SfM2 + priors), and I've kept all of the intermediate files, so I can provide all sorts of things for you to look at. I also have a new, small test dataset that you can run yourself. The zip has the images and an SfM file with the poses already imported. There's also a text file with the line you'd need to add to the sensor width database if you want to import the images yourself. This dataset looks excellent with GlobalSfM, has problems with Incremental2 (with and without priors), and does not reconstruct at all with direct triangulation + bundle adjustment.
I can definitely work on that. Thanks again for all of your help!
Just to keep things updated here for posterity and to help others: I've used the Charuco board implementation in OpenCV to improve my pose estimates relative to my motorized scanning head. Depending on the camera, my previous pose estimates had minor to major deviations from what the Charuco boards gave me. The good news is that the improved pose estimates further improved the results returned by Incremental SfM2 and direct triangulation. However, neither yet matches the (admittedly subjective) reconstruction quality of running Global SfM on my full dataset.

I had three goals for importing my pose priors: consistent reconstruction quality across samples, exporting scenes with the appropriate real-world scale information, and improved reconstruction time. My performance bottleneck is now no longer in OpenMVG but in the later MVS reconstruction steps, so the timing goal is achieved. With the success of the Charuco boards, I can import scene scale via another method, so that goal is acceptably accomplished as well. The only thing that really remains is consistent reconstruction results. While it's probably not ideal to have to run SfM for every single dataset (increasing the chances of inconsistent results), it's certainly an acceptable fallback if my global results remain as good as they have been. Also, and I haven't tested this, maybe my mechanical error is minimal, and I can use a "calibration" global reconstruction to preinitialize the scene for every scanned object. Then I could skip the recomputation of SfM for every dataset and just run BA. Just some random thoughts.

Anyway, I think I'm winding down this inquiry for now. I still want to write some sort of guide to help others import their own datasets. To that end, I've released my SfM utilities package on PyPI and GitHub. I will definitely write a guide for using that package, and will try to do the same for the OpenMVG interface as well. Thank you for all of your help, @pmoulon!
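For the scale-import part, once a few points with known real-world separations (Charuco corners, or two coded dots a known distance apart) are identified in the reconstruction, the scene scale falls out as a ratio of distances. A minimal sketch; a production version would use many pairs and robust statistics:

```python
import numpy as np

def scene_scale(recon_points, real_distances, pairs):
    """Median ratio of known real-world distances to the
    corresponding point-to-point distances in the reconstruction."""
    ratios = []
    for (i, j), d_real in zip(pairs, real_distances):
        d_recon = np.linalg.norm(recon_points[i] - recon_points[j])
        ratios.append(d_real / d_recon)
    return float(np.median(ratios))

# Toy example: a reconstruction that is half real-world size.
pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.5, 0.5, 0.0]])
s = scene_scale(pts, real_distances=[1.0, 1.0], pairs=[(0, 1), (1, 2)])
# Multiplying the scene by s restores metric scale.
```

The median makes the estimate tolerant of the occasional badly triangulated marker point.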
I've been trying to import pose priors from an automatic acquisition rig in order to make my results more reliable across different objects, but I've been having issues with the reconstruction workflow. The scanner is a custom rig with five cameras mounted to a motorized XYZ stage. The system is intended to capture flat-ish surfaces (like topographical models) so all cameras point down at the ground plane. The cameras are arranged in a cross pattern where the central camera faces straight down, and the wing cameras point down at a 45 deg angle:
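A prior pose for one wing camera can be composed from the stage position, the camera's mount offset, and its tilt. This is a minimal sketch with made-up numbers, assuming openMVG's convention of storing a world-to-camera rotation and a camera center:

```python
import numpy as np

def rot_x(deg):
    """Rotation matrix about the x axis."""
    a = np.radians(deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a), np.cos(a)]])

# Hypothetical rig geometry (units arbitrary).
stage_position = np.array([100.0, 200.0, 300.0])  # XYZ stage readout
camera_offset = np.array([0.0, -50.0, 0.0])       # wing camera on the mount

# Camera center in world coordinates: stage position plus mount offset.
C = stage_position + camera_offset

# A wing camera tilted 45 degrees from straight-down; the measured
# orientation is camera-to-world, so the stored rotation is its transpose.
R_cam_to_world = rot_x(45.0)      # placeholder; measure per camera
R_world_to_cam = R_cam_to_world.T
```

Getting the transpose backwards here is an easy way to produce exactly the kind of coherent-but-wrong prior poses discussed below, so it is worth verifying on one camera with a known orientation.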
The system runs step-and-shoot acquisition along a customizable path (generally a grid pattern), producing 5 images at discrete acquisition points along the path. The acquisition points are set so that every point in the interior scan area is seen by all five cameras. The final dataset includes the images from all five cameras, the radial (k3) distortion parameters for the cameras, the stage position at each acquisition point, and the relative offset of the cameras from the stage position. I have custom code which converts this metadata into an sfm_data.json file. The work I did on #1702 was so that I could verify that my camera poses were being converted correctly.

I know there's likely to be some error in my camera poses, due to both measurement error and mechanical error in the XYZ stage, so I need to do some sort of automatic adjustment on the scene to minimize reprojection error. Following advice that I've seen in other discussions, my workflow has been as follows:
-f NONE -S EXISTING_POSE -P
When I run 4a, the filtering step prior to BA removes all of my poses and I get an empty scene. If I disable BA, I get a scene where all of the feature points are dispersed incoherently below the cameras. Not surprising since I assume my camera positions are off:
When I run 4b, my cameras are adjusted way out of position and my feature points do not seem any more coherent than they did with 4a:
So now I'm left not really sure where my problem lies. I see a number of questions that are open:
I've been looking at this long enough that I would appreciate any help or guidance that anyone has to give. I would also like to contribute back to the project in the form of documentation surrounding initialization with pose priors since that seems to be somewhat lacking. What would be the best way to do that?
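For reference, a single view carrying a pose-center prior in sfm_data.json looks roughly like the sketch below. This is based on my understanding of openMVG's ViewPriors serialization; the field names, the polymorphic identifiers, and the example filename and dimensions are assumptions that should be verified against a file actually written by your openMVG build:

```json
{
  "key": 0,
  "value": {
    "polymorphic_id": 1073741824,
    "polymorphic_name": "view_priors",
    "ptr_wrapper": {
      "id": 2147483649,
      "data": {
        "local_path": "",
        "filename": "center_000.jpg",
        "width": 4000,
        "height": 3000,
        "id_view": 0,
        "id_intrinsic": 0,
        "id_pose": 0,
        "use_pose_center_prior": true,
        "center_weight": [1.0, 1.0, 1.0],
        "center": [100.0, 150.0, 300.0]
      }
    }
  }
}
```

The per-axis `center_weight` values control how strongly the bundle adjustment is pulled toward the prior center, so they are a natural knob when the priors are trusted more in XY than in Z (or vice versa).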