
fastApprox match always gives same feedback as bruteForce() match #5

Closed

dgtlmoon opened this issue Dec 2, 2016 · 2 comments
dgtlmoon commented Dec 2, 2016

Not quite sure if I'm misunderstanding the threshold settings. At https://github.com/dgtlmoon/ORB_LATCH_K2NN I have an ORB / LATCH / K2NN pipeline, and when I --search using fast-approx I was expecting to get more (or at least some) matching descriptors that are not in the same set I searched for.

My app scans images in a directory and appends all of their descriptors to a training.dat file for a K2NN search.

I'm seeing that ONLY the same collection of descriptors in my query exists in the training and nothing else, even though a lot of the images should have at least a couple of similar descriptors.

Also suspicious: with fastApprox(), all of the .t values (training vector indices) are contiguous. I would have expected that at least some might be partial (threshold) matches.

So, hmm, I'm not sure how I would rank images by "similarity" (or by their K2NN ranking) here.
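
For reference, one way to turn the per-descriptor matches into a per-image ranking is to record which slice of training.dat each image contributed and then count votes per image. This is only a sketch: the Match struct's .t field mirrors the K2NN training index mentioned above, while the offsets bookkeeping and function names are hypothetical, not part of the linked repo.

```cpp
// Hedged sketch: rank images by how many of their training descriptors
// were matched. Assumes training.dat was built by appending each image's
// descriptors in order, so image i owns training indices
// [offsets[i], offsets[i+1]); offsets carries a final end sentinel.
#include <algorithm>
#include <vector>

struct Match { int q, t; };  // query index, training index (the .t above)

std::vector<int> rankImagesByVotes(const std::vector<Match>& matches,
                                   const std::vector<int>& offsets) {
    const int numImages = static_cast<int>(offsets.size()) - 1;
    std::vector<int> votes(numImages, 0);
    for (const Match& m : matches) {
        // Find which image owns training index m.t.
        auto it = std::upper_bound(offsets.begin(), offsets.end(), m.t);
        const int img = static_cast<int>(it - offsets.begin()) - 1;
        if (img >= 0 && img < numImages) ++votes[img];
    }
    std::vector<int> order(numImages);
    for (int i = 0; i < numImages; ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return votes[a] > votes[b]; });
    return order;  // image indices, most-matched first
}
```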


dgtlmoon commented Dec 4, 2016

Hmm, though in another case fast-approx gives no results while brute-force gives the kind of results I'm expecting, so there's something I'm not understanding here.


komrad36 commented Dec 5, 2016

Hi,

I don't currently have time to examine external codebases, but I can say that fastApproxMatch is an experimental attempt to do just a single twiddle pass of multi-index hashing. It will only return guaranteed matches, and it will only find such matches if the query and training descriptors differ by fewer than (32 - threshold) bits. So it could work well in a high-frame-rate VO application where each frame differs very little, but it is not intended for, and not likely to work in, an SfM application like yours, where thresholds must be high because true matches still differ a lot.
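
To make that bound concrete, here is a rough check of whether a given query/training pair is even findable by the single-twiddle pass. The (32 - threshold) figure is taken directly from the paragraph above; the eight-uint64_t layout for a 512-bit descriptor is an assumption for illustration, not K2NN's internal representation.

```cpp
// Rough check: fastApproxMatch only finds pairs whose Hamming distance
// is below (32 - threshold). Assumes a 512-bit binary descriptor stored
// as eight 64-bit words (illustrative layout, not K2NN's internals).
#include <cstdint>

int hammingDist512(const uint64_t* a, const uint64_t* b) {
    int dist = 0;
    for (int i = 0; i < 8; ++i)
        dist += __builtin_popcountll(a[i] ^ b[i]);  // GCC/Clang builtin
    return dist;
}

bool fastApproxCanFind(const uint64_t* q, const uint64_t* t, int threshold) {
    // With a high SfM-style threshold, (32 - threshold) goes to zero or
    // negative, so no pair qualifies -- matching the behavior reported above.
    return hammingDist512(q, t) < 32 - threshold;
}
```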

I suggest sticking to bruteMatch for your application.

If even bruteMatch causes problems, check things like degrees and radians. Do a simple test and confirm the pipeline is working on two images, like I did with this gentleman for CLATCH:
komrad36/CLATCH#1

That way you can test rotation invariance, scale invariance, etc.
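
For instance, a minimal two-image rotation test might look like the following. This sketch uses stock OpenCV ORB plus brute-force Hamming matching as a stand-in for the ORB/LATCH/K2NN pipeline under test; the filename and keypoint count are arbitrary.

```cpp
// Minimal two-image sanity test: rotate one image by a known angle and
// check that a healthy number of descriptors still cross-check match.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat img = cv::imread("test.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Rotate 30 degrees about the center to probe rotation invariance.
    const cv::Point2f center(img.cols / 2.0f, img.rows / 2.0f);
    const cv::Mat rot = cv::getRotationMatrix2D(center, 30.0, 1.0);
    cv::Mat rotated;
    cv::warpAffine(img, rotated, rot, img.size());

    auto orb = cv::ORB::create(500);
    std::vector<cv::KeyPoint> k1, k2;
    cv::Mat d1, d2;
    orb->detectAndCompute(img, cv::noArray(), k1, d1);
    orb->detectAndCompute(rotated, cv::noArray(), k2, d2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);

    std::cout << matches.size() << " cross-checked matches from "
              << k1.size() << " keypoints\n";
    return 0;
}
```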

Also note that the CPU version necessarily handles scale somewhat imperfectly. As I think I mentioned previously somewhere, it's great at handling various scales individually, i.e. at matching small-scale features to small-scale ones, or large-scale features to large-scale ones, but not good at matching small scales to large ones.

The CUDA version is a little better, but not great.

A really good solution is planned: the upcoming KORB will be tightly coupled to CLATCH so that they generate and share pre-computed interpolated scale spaces. That'll work great. It will probably take a couple more weeks.

Thanks,
Kareem

komrad36 closed this as completed Dec 5, 2016