How to render results with weak perspective camera #344
@kaufManu
Thanks for your reply. I agree that if I were to render this with a perspective camera, we should expect some misalignment. However, in the image above I'm using the weak-perspective camera model, not a perspective camera, so in theory it should overlay exactly with what your code produces (the dark blue model in the image above). Like you explained in this post, I'm using
The code I'm using to render this image is now online, in case that helps: https://github.com/eth-ait/aitviewer/blob/main/examples/load_ROMP.py
To @kaufManu,
To @Arthur151: As stated above, it seems that when you evaluate ROMP on AGORA you use a "fake" focal length (derived from an FOV) and estimate the camera pose with that focal length. Why didn't you just use the same weak-perspective camera and cam_trans as in the training phase? The "fake" focal length and translation should work fine, but they can still differ from the ones used in training. Is there a specific consideration behind this, or is it simply for convenience?
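For readers following along, the "fake focal length" conversion discussed here can be sketched as follows. This is a minimal illustration of the commonly used VIBE/ROMP-style recovery of a 3D camera translation from weak-perspective parameters; the function name, the FOV default, and the assumption of a square input of side `img_size` are mine, not ROMP's API:

```python
import numpy as np

def weak_perspective_to_trans(cam, img_size, fov_deg=60.0):
    """Convert weak-perspective parameters [s, tx, ty] to a 3D camera
    translation [tx, ty, tz] for a perspective camera whose "fake"
    focal length is derived from an assumed field of view.

    Illustrative sketch only; parameter conventions may differ from
    the actual ROMP implementation.
    """
    s, tx, ty = cam
    # Fake focal length (in pixels) from the chosen vertical FOV.
    focal = img_size / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    # Choose the depth tz so that the perspective projection shrinks
    # the body by the same factor as the weak-perspective scale s.
    tz = 2.0 * focal / (img_size * s)
    return np.array([tx, ty, tz])
```

The key idea is that under weak perspective the scale `s` plays the role of `2 * focal / (img_size * tz)`, so fixing a focal length pins down the depth.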
@tijiang13 Good question! Glad that you noticed this point.
Thanks for all your replies. With the help of @tijiang13 we were able to figure this out. The code here now overlays perfectly with what ROMP renders, as can be seen from the following image (the yellow outline is what is rendered on top of the ROMP output). For future reference, here are some comments and clarifications:
@kaufManu Yes, you are right. Besides, it would be great if we could also visualize the camera motion, since some works consider camera motion as well, like SPEC and GLAMR. Best,
@Arthur151 Thanks for your kind words! Yes, I agree; we are actually working on integrating camera motion right now. I'll see if I can add an example from SPEC or GLAMR to our repo with the next release.
FYI, visualizing camera motion is now possible, cf. the GLAMR example here.
Thanks! @kaufManu
@Arthur151 Amazing! Please let me know (probably best via GitHub issues) if you encounter any problems or have feature requests that would make things easier for you!
Thanks a lot for this great and easy-to-use repo!
I'm trying to render the results using the weak perspective camera model. My question relates to these issues:
However, none of these issues gave me the answer I was looking for. I am using the weak-perspective camera parameters stored in `cam` and, as suggested in this issue, I multiply them by 2. I also pad the image to be square as mentioned here. I then convert the weak-perspective camera model to a projection matrix the same way I used to for VIBE, where it worked well. However, for the ROMP output I'm still getting a slight misalignment, as you can see in the following screenshot. The light model is what I am rendering and the blue model in the background is the visualization output from ROMP. I think it's because I should somehow account for `cam_trans`, but I don't know how exactly. Can you help me with this?
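For context, the "weak-perspective camera to projection matrix" step mentioned above can be sketched as building an orthographic matrix that applies the scale and 2D offsets directly in NDC. The function name, the row-major 4x4 layout, and the depth range are my assumptions for illustration, not the exact code used for VIBE or ROMP:

```python
import numpy as np

def weak_perspective_projection(cam, znear=-100.0, zfar=100.0):
    """Build a 4x4 projection matrix implementing the weak-perspective
    model [s, tx, ty]:

        x_ndc = s * (X + tx),   y_ndc = s * (Y + ty)

    Depth is mapped linearly into [-1, 1] only so the rasterizer's
    z-buffer works; it does not affect the 2D overlay. Sketch only;
    conventions (NDC handedness, depth range) vary per renderer.
    """
    s, tx, ty = cam
    return np.array([
        [s,   0.0, 0.0,                     s * tx],
        [0.0, s,   0.0,                     s * ty],
        [0.0, 0.0, -2.0 / (zfar - znear), -(zfar + znear) / (zfar - znear)],
        [0.0, 0.0, 0.0,                     1.0],
    ])
```

Since this matrix assumes a square NDC, it only lines up with the image after the padding-to-square step described above.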