Hello again!
I'm trying to train RRT on descriptors extracted with NetVLAD + SuperPoint on the GLDv2 dataset, and I can't get anywhere close to your results (I know these are different extraction networks, but the gap still looks strange). My best metrics on ROx-M/ROx-H are 46/22 when reranking the top-100 samples. I'm not using scales, since SuperPoint doesn't extract them, and that seems to be the only change to the model. If you have any ideas why this might happen, I'd be very glad to hear them!
Sorry for the late reply. If you're comparing against the experiments with DELG descriptors, the gap makes sense: the comparison isn't fair, because the feature representation matters a lot. It works best when the descriptors are pre-trained on in-domain data (i.e., landmarks) with human labels. As far as I remember, SuperPoint uses a smaller visual backbone and was pre-trained on a different dataset without human annotations. This may explain the performance gap.