Not quite sure if I'm misunderstanding the threshold settings. At https://github.com/dgtlmoon/ORB_LATCH_K2NN I have an ORB / LATCH / K2NN pipeline, and when I --search using fast-approx I was expecting to get more (or at least some) matching descriptors that are not in the same set I searched for.
My app scans images in a directory and appends all of their descriptors to a training.dat file for a K2NN search.
I'm seeing that ONLY the same collection of descriptors from my query exists in the training results, and nothing else... even though a lot of the images should have at least a couple of similar descriptors.
Also suspicious: with fastApprox(), all of the .t values (training vector indices) are contiguous. I would have expected at least some of them to be partial (threshold) matches.
So hmm... not sure how I would rank images by "similarity" (or by their K2NN ranking) here.
Hmm. Though in another case fast-approx gives no results while brute-force gives the kind of results I'm expecting... so there's something I'm not understanding here.
I don't currently have time to examine external code bases, but I can say that fastApproxMatch is an experimental attempt to do just a single twiddle pass of multi-index hashing. It will only return guaranteed matches, and it will only find such matches if the query and training descriptors differ by less than (32 - threshold) bits. It could work well in a high-frame-rate VO application where each frame differs very little, but it is not intended for, and not likely to work in, an SfM application like yours, where thresholds must be high because matches will still differ a lot.
I suggest sticking to bruteMatch for your application.
If even bruteMatch causes problems, check things like degrees versus radians. Do a simple test to confirm the pipeline works on two images, like I did with this gentleman for CLATCH: komrad36/CLATCH#1
That way you can test rotation invariance, scale invariance, etc.
Also note that the CPU version necessarily handles scale somewhat imperfectly. As I think I mentioned somewhere previously, it's good at handling various scales, i.e. at matching small-scale features to small-scale features, or large-scale features to large-scale features, but not at matching small scales to large ones.
The CUDA version is a little better, but not great.
A really good solution is planned: the upcoming KORB will be tightly coupled to CLATCH so that they generate and share precomputed interpolated scale spaces. That'll work great. It will probably take a couple more weeks.