I've been mulling over a problem for a few days and would appreciate your help: is the vocabulary too big to be practical?
As you can see, the vocabulary has 1 million words, yet every frame has, on average, only about 1000 feature points. I can't see any statistical sense in projecting from 1000 points onto 1 million words. It seems that many similar features will fail to match simply because of this huge sparsity (I haven't run an experiment; this is just a feeling).
Besides, comparing two vectors of 1 million dimensions each is not that fast.
So my question is: why not keep the vocabulary small (about 1000 words)? Then even a brute-force search over 10,000 keyframes takes only 1000 * 10,000 operations (just 10 times the cost of comparing two vectors of 1 million dimensions).
Any ideas are welcome. Thanks!
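On the cost point above, it may help to note that the comparison cost depends on the representation, not the nominal dimension: a frame's BoW vector over a 1-million-word vocabulary has only ~1000 non-zero entries, so a sparse comparison touches at most ~2000 entries, not 1 million. A minimal sketch of the L1 score DBoW2 uses, with sparse vectors as plain dicts (the helper names here are my own, not DBoW2's API):

```python
def normalize(v):
    """L1-normalize a sparse BoW vector (dict of word_id -> weight)."""
    total = sum(v.values())
    return {word: w / total for word, w in v.items()}

def l1_score(v1, v2):
    """Similarity s = 1 - 0.5 * ||v1 - v2||_1 between L1-normalized sparse
    vectors. Cost is proportional to the number of NON-ZERO words (~1000
    per frame), not to the vocabulary size (1 million)."""
    v1, v2 = normalize(v1), normalize(v2)
    diff = 0.0
    for word in v1.keys() | v2.keys():  # union of non-zero entries only
        diff += abs(v1.get(word, 0.0) - v2.get(word, 0.0))
    return 1.0 - 0.5 * diff

# Toy frames: identical frames score 1.0, disjoint frames score ~0.0.
a = {10: 2.0, 500000: 1.0}
b = {10: 2.0, 500000: 1.0}
c = {7: 1.0}
```

Word ids like 500000 are possible under a 1-million-word vocabulary, yet the loop above only ever iterates over the few words actually present in the two frames.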
DBoW2 is not an exhaustive search method; it works somewhat like a k-d tree search, and its search efficiency is quite good. In the author's experiments, the search step takes only about 5 ms on average and less than 40 ms at worst. The author provides a large pre-trained vocabulary so that everyone can use the same one instead of training their own for each scene. And of course, if you want to speed things up, or your use case is special or simple, you can train your own vocabulary.
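To illustrate why the tree search stays fast even with a huge vocabulary: a hierarchical vocabulary with branching factor k and depth L holds k^L words, but assigning a descriptor to a word only compares it against k candidate clusters per level, i.e. about k*L distance computations instead of k^L. A toy sketch with scalar "descriptors" and an implicit tree over word-id intervals (this layout is invented for illustration; the real vocabulary clusters binary descriptors with k-means):

```python
K, DEPTH = 10, 3  # 10^3 = 1000 words here; a real vocabulary might use 10^6

def lookup(query, lo=0, hi=K**DEPTH, comparisons=0):
    """Descend the implicit K-ary tree over word ids in [lo, hi): at each
    level, compare the query against the K child centres and recurse into
    the nearest child. Returns (word_id, number_of_distance_computations)."""
    while hi - lo > 1:
        step = (hi - lo) // K
        centres = [lo + i * step + step / 2 for i in range(K)]
        comparisons += K  # only K candidates examined at this level
        best = min(range(K), key=lambda i: abs(query - centres[i]))
        lo, hi = lo + best * step, lo + (best + 1) * step
    return lo, comparisons

word, comps = lookup(123.4)
# Finds word 123 with 30 comparisons (K * DEPTH), vs 1000 for brute force.
```

With K = 10 and DEPTH = 6 (1 million words), the same descent costs only 60 comparisons per descriptor, which is why a 1-million-word vocabulary does not imply a 1-million-way search.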