
Question about Evaluation #48

Closed
AreteQin opened this issue May 10, 2020 · 4 comments
@AreteQin

Hello there,

I ran into a problem while evaluating my results.

I ran the test datasets from this website "https://www.eth3d.net/slam_datasets" and obtained my results.

Then I used "evaluate_ate.py" to evaluate them, passing the paths of my result file and the imu file on the command line. However, the reported ATE RMSE is about 0.98 m, which is obviously incorrect.
(Screenshot of the evaluation output, 2020-05-10)

I have two questions:

  1. Each dataset contains four files named imu.txt. What is the difference between them?

  2. I found an option named offset in "evaluate_ate.py", but I do not understand how to use it. I think this is probably what I need to correct my evaluation.
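For context, here is a rough sketch of what I understand the offset to do (the function and parameter names below are my own guesses, not taken from the script): each estimated timestamp is shifted by the offset before being matched against the closest ground-truth timestamp.

```python
# Hypothetical sketch of timestamp association with a time offset
# (names are illustrative, not from evaluate_ate.py): each estimate
# timestamp is shifted by `offset`, then matched to the nearest
# ground-truth timestamp within `max_difference` seconds.
def associate(gt_stamps, est_stamps, offset=0.0, max_difference=0.02):
    matches = []
    for t in est_stamps:
        shifted = t + offset
        # Nearest ground-truth stamp to the shifted estimate stamp
        best = min(gt_stamps, key=lambda g: abs(g - shifted))
        if abs(best - shifted) <= max_difference:
            matches.append((best, t))
    return matches
```

With clocks 10 s apart, no matches are found until the offset compensates for the difference, e.g. `associate(gt, est, offset=-10.0)`.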

Apologies if I have missed some relevant information.

Thanks for your time; I look forward to your response.

Best regards,
Arete

@puzzlepaint
Collaborator

  1. See https://www.eth3d.net/slam_documentation#format-of-imu-data . There are different cameras on the camera rig, and each camera contains an IMU. Each file contains the data from one IMU.
  2. Which evaluation script are you using, the one from the TUM RGB-D benchmark? The 'official' evaluation program would be this one: https://github.com/ETH3D/slam-evaluation . Also, I don't think the imu.txt file is supposed to be used here; the inputs should (probably) be files containing trajectories, not raw IMU data.
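For reference, the core of an ATE evaluation can be sketched as follows (a minimal illustration, not the actual ETH3D or TUM code): align the matched estimated positions to the ground truth with a closed-form least-squares rigid alignment (Horn/Kabsch), then take the RMSE of the remaining translational residuals.

```python
# Minimal ATE RMSE sketch (illustrative only): rigidly align the
# estimated trajectory onto the ground truth, then compute the RMSE
# of the per-pose position errors.
import numpy as np

def align(gt, est):
    """Closed-form least-squares rigid alignment (Kabsch) of est onto gt.
    gt, est: (N, 3) arrays of timestamp-matched positions.
    Returns R, t such that R @ est_i + t approximates gt_i."""
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (est - mu_est).T @ (gt - mu_gt)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction to avoid a reflection
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_gt - R @ mu_est
    return R, t

def ate_rmse(gt, est):
    R, t = align(gt, est)
    residuals = gt - (est @ R.T + t)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```

Because the alignment absorbs any global rigid transform, a trajectory that differs from the ground truth only by a rotation and translation evaluates to an ATE of (numerically) zero.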

@AreteQin
Author

Thanks a lot for your response.

I can find the ground-truth files for the training and synthetic datasets, but I cannot find any ground_truth.txt in the test datasets ("https://www.eth3d.net/slam_datasets").
(Screenshot of the dataset download page, 2020-05-16)

Apologies if I have missed some relevant information.

Thanks for your time; I look forward to your response.

@puzzlepaint
Collaborator

That is actually the whole point of the test datasets: their ground truth is private and is only used to evaluate results submitted to eth3d.net. This way, a finished SLAM system can be tested on these datasets while ensuring that no dataset-specific parameter tuning (or similar) has been done that would artificially improve its results.

@AreteQin
Author

Thank you so much for your response; I understand now.

I will close this issue.
