TotalCapture Distortion #3

Open
yohanshin opened this issue Oct 6, 2021 · 5 comments
Comments

@yohanshin

Hi, I wonder how you dealt with the distortion of cameras.

The given distortion parameter is very small; applying 1st-order radial distortion with it produces results that look as if the images had no distortion at all.

I wonder whether you ran into the same issue and, if so, how you dealt with it. Doesn't it noticeably affect reconstructing the 3D pose from the given 2D points?
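
For reference, here is a minimal sketch of the 1st-order radial distortion model I am applying; the `k1` and focal-length values below are placeholders (not taken from TotalCapture), only meant to show the scale of the resulting pixel displacement:

```python
import numpy as np

def distort_radial_k1(x_n, y_n, k1):
    """Apply 1st-order radial distortion to normalized camera coordinates."""
    r2 = x_n ** 2 + y_n ** 2
    scale = 1.0 + k1 * r2
    return x_n * scale, y_n * scale

# Normalized point fairly far from the principal point, with a tiny k1
# (both values are hypothetical).
x_n, y_n, k1 = 0.5, 0.5, 1e-3
xd, yd = distort_radial_k1(x_n, y_n, k1)
f = 1000.0  # hypothetical focal length in pixels
print("pixel shift:", f * (xd - x_n), f * (yd - y_n))  # ~0.25 px, i.e. negligible
```

With a `k1` of this magnitude, the displacement is a fraction of a pixel, which is why the images look undistorted.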

Thank you very much in advance!

@zhezh
Owner

zhezh commented Oct 8, 2021

Hi @yohanshin, we have observed the same thing. The distortion is small, so you can either use it or not.

For our work, we did not use the distortion coefficients for the fundamental matrix calculation.
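
For illustration, here is a minimal sketch (not the actual code of this repo) of computing a fundamental matrix directly from the pinhole calibration while ignoring the distortion coefficients; a world-to-camera convention `x_cam = R @ X + t` is assumed, and all camera parameters are placeholders:

```python
import numpy as np

def skew(t):
    """Return the 3x3 skew-symmetric matrix [t]_x for a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_calib(K1, R1, t1, K2, R2, t2):
    """F such that x2^T F x1 = 0 for corresponding pixels x1, x2 (pinhole only)."""
    # Relative pose mapping camera-1 coordinates to camera-2 coordinates.
    R = R2 @ R1.T
    t = t2 - R @ t1
    E = skew(t) @ R                       # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)
```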

@yohanshin
Author

Dear @zhezh, thank you very much for your explanation. When I project the given 3D ground truth onto each camera, the 3D->2D projection does not align well with the original image. There might be some miscalibration in the dataset, or maybe I am just doing the projection incorrectly?

If you also found this issue, I believe works that reconstruct 3D keypoints from multiple views may suffer a lot from it, since the detected 2D keypoints may not recover an accurate 3D triangulation. Or was it okay because you used PSM?
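
For clarity, this is roughly the projection I am doing; the extrinsic convention `x_cam = R @ X + t` is an assumption on my side (if the dataset stores camera-to-world extrinsics instead, that alone would explain a misalignment):

```python
import numpy as np

def project_points(X_world, K, R, t, k1=0.0):
    """Project (N, 3) world points to (N, 2) pixels, with optional 1st-order radial distortion."""
    X_cam = X_world @ R.T + t             # world -> camera coordinates
    x = X_cam[:, 0] / X_cam[:, 2]         # perspective division
    y = X_cam[:, 1] / X_cam[:, 2]
    r2 = x ** 2 + y ** 2
    x, y = x * (1.0 + k1 * r2), y * (1.0 + k1 * r2)
    u = K[0, 0] * x + K[0, 2]             # apply intrinsics
    v = K[1, 1] * y + K[1, 2]
    return np.stack([u, v], axis=1)
```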

@zhezh
Owner

zhezh commented Oct 14, 2021

@yohanshin Yes, in some sequences there is misalignment. We contacted the TotalCapture authors; they acknowledged it but have no way to correct it.

However, if your method is improved and fixes cases with large MPJPE, you will still see a gain in the MPJPE metric despite the misalignments. That is to say, an MPJPE gain is still consistent with a better model.

A similar misalignment also occurs in Human3.6M.

In the Learnable Triangulation paper, they compute relative MPJPE, which is also applicable here.
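
As one possible reading of "relative MPJPE" (root-relative error, which is an assumption here), a minimal sketch: both prediction and ground truth are re-centered on a root joint before averaging per-joint errors, so a constant global offset caused by miscalibration cancels out.

```python
import numpy as np

def relative_mpjpe(pred, gt, root_idx=0):
    """Root-relative MPJPE for arrays of shape (n_frames, n_joints, 3), e.g. in millimeters."""
    pred_rel = pred - pred[:, root_idx:root_idx + 1]   # subtract root joint per frame
    gt_rel = gt - gt[:, root_idx:root_idx + 1]
    return np.linalg.norm(pred_rel - gt_rel, axis=-1).mean()
```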

@yohanshin
Author

@zhezh Thanks for your detailed perspective on this problem. I also use Human3.6M, but I think the misalignment issue is not as severe in that dataset, is it? I agree that relative MPJPE helps mitigate the misalignment issue for evaluation, but if the given calibration is incorrect, I am not sure whether it is still reasonable to use volumetric aggregation from Learnable Triangulation or cross-view fusion from your work.

I will try to figure this out. Thank you so much for providing this preprocessing toolbox; it is super helpful!

@zhezh
Owner

zhezh commented Oct 21, 2021

@yohanshin

  • misalignment is smaller in Human3.6M than in TotalCapture
  • volumetric and cross-view fusion are both reasonable even when misalignments exist in the training data. The supervision signals for the volumetric method and for cross-view fusion are 3D and 2D coordinates, respectively. The misalignment is averaged out across the whole dataset during training; it could introduce some offset into the final pose predictions, but it will not lead to corrupted results.

I assume it mainly affects the bias rather than the weights of the conv or linear layers (just my hypothesis).
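
As a toy illustration of this hypothesis (my own sketch, not code from the repo): adding a constant offset to the regression targets of a least-squares linear model changes only the fitted intercept (bias), not the slope (weights).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 1))
y = 2.0 * x[:, 0] + 0.5 + 0.01 * rng.normal(size=1000)

A = np.hstack([x, np.ones((1000, 1))])                 # design matrix [x, 1]
w0, b0 = np.linalg.lstsq(A, y, rcond=None)[0]          # clean targets
w1, b1 = np.linalg.lstsq(A, y + 10.0, rcond=None)[0]   # targets with a constant "misalignment"
print(w0, b0)   # ~2.0, ~0.5
print(w1, b1)   # ~2.0, ~10.5 -> only the bias absorbs the offset
```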
