The different RR in test.py and eval.py #45
The definition of whether a pair of point clouds was successfully registered differs between the test and eval stages. Check the code.
RR in
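The thread does not spell out the two RR (registration recall) definitions, but a common pair of conventions in point-cloud registration is (a) thresholding the rotation/translation error of the estimated pose and (b) thresholding the RMSE of ground-truth correspondences after applying the estimated transform (3DMatch-style). A minimal sketch of both criteria, with hypothetical threshold values, assuming NumPy:

```python
import numpy as np

def registered_by_pose_error(R_est, t_est, R_gt, t_gt,
                             rre_thresh_deg=15.0, rte_thresh=0.3):
    """Criterion (a): a pair counts as registered if the rotation error (RRE)
    and translation error (RTE) are both below thresholds (values here are
    illustrative, not the repo's)."""
    # Rotation error in degrees from the relative rotation R_est^T R_gt.
    cos_angle = np.clip((np.trace(R_est.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
    rre = np.degrees(np.arccos(cos_angle))
    rte = np.linalg.norm(t_est - t_gt)
    return bool(rre < rre_thresh_deg and rte < rte_thresh)

def registered_by_corr_rmse(src_pts, tgt_pts, R_est, t_est, rmse_thresh=0.2):
    """Criterion (b): RMSE of ground-truth correspondences (src_pts[i] matches
    tgt_pts[i]) after applying the estimated transform, 3DMatch-style."""
    src_aligned = src_pts @ R_est.T + t_est
    rmse = np.sqrt(np.mean(np.sum((src_aligned - tgt_pts) ** 2, axis=1)))
    return bool(rmse < rmse_thresh)
```

Because the two criteria measure different things (pose error vs. alignment residual), the same set of estimated transforms can yield different RR values under each, which is one way a test/eval discrepancy can arise.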
Thank you for your reply. I agree with that. Apart from the definition of RR, I also found that you remove consecutive frames in eval.py (possibly following PREDATOR), which makes the dataset harder and contributes to the drop in RR from "test" to "eval". Is my understanding correct?
Yes. The convention of skipping consecutive frames is from 3DMatch.
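The frame-skipping convention discussed above can be sketched as a simple filter on fragment-index pairs: a pair (i, j) is dropped when the two fragments are consecutive, since overlapping neighbors are trivially easy to register. A minimal sketch (function and parameter names are hypothetical, not the repo's):

```python
def filter_consecutive_pairs(pairs, min_gap=1):
    """Keep only fragment pairs whose indices differ by more than `min_gap`,
    i.e. drop consecutive (and thus easy, high-overlap) pairs, as in the
    3DMatch evaluation convention."""
    return [(i, j) for (i, j) in pairs if abs(i - j) > min_gap]

pairs = [(0, 1), (0, 2), (3, 4), (2, 7)]
filter_consecutive_pairs(pairs)  # keeps (0, 2) and (2, 7)
```

Removing the easy consecutive pairs shrinks the evaluation set to harder cases, which by itself lowers RR relative to a run over all pairs.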
Hi @qinzheng93, thanks for your great work. As a beginner in this area, I have one follow-up question: what is the main difference between test.py and eval.py, and why do we need both? They seem to be doing similar things.
One more question: from my reading of the code, eval.py and test.py both appear to evaluate the same dataset. Is my observation correct?
Hello, thank you again for your amazing work.
When I evaluated on 3DMatch, I found that the RR reported by test.py and the RR reported by eval.py are very different. What causes this difference?