Hi. I've been exploring cameratransform to understand how to perform backprojections on a single image. I'm not obtaining the correct backprojected penguin points or horizon line shown in the documentation. I can tell the backprojections are wrong because the backprojected horizon line sits at the bottom of the image and the backprojected landmarks fall outside the field of view:
My information fit image:
My camera trace from metropolis algorithm:
The only changes I made to the original script are adding a line to read the CameraImage2.jpg image (via cv2.imread; I also tried plt.imread) and switching the matplotlib backend to one that works for me (via matplotlib.use('TkAgg')). One difference I noticed between the script and the documentation is that metropolis runs for 1,000 iterations by default rather than the 10,000 shown in the docs, but even after changing 1e3 to 1e4 to match the documentation snippet I get the same wrong backprojections in the image.
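One detail worth double-checking when swapping cv2.imread for plt.imread: OpenCV loads images with channels in BGR order, while matplotlib expects RGB, so the same array means different colors to each library. A minimal NumPy-only sketch of the conversion (the `bgr_to_rgb` helper name is mine, not from the script):

```python
import numpy as np

# cv2.imread returns pixels with channels in BGR order, while plt.imread
# (and matplotlib display functions) assume RGB. Reversing the last axis
# swaps the blue and red channels; in NumPy this is a cheap strided view,
# not a copy of the pixel data.
def bgr_to_rgb(img_bgr):
    return img_bgr[:, :, ::-1]

# Tiny 1x1 "image": a pure-blue pixel in BGR is [255, 0, 0];
# after conversion it becomes the RGB encoding of blue, [0, 0, 255].
img = np.array([[[255, 0, 0]]], dtype=np.uint8)
rgb = bgr_to_rgb(img)
```

This only affects how the image looks when plotted, not the fitted geometry, but it is a common source of confusion when mixing the two readers.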
I'm also confused about why this test first uses camera.metropolis(...) to estimate the camera's extrinsic parameters, and then camera.fit(...) starts from the same initial values but arrives at completely different estimates. The highest-probability values from metropolis are supposed to be the estimates for [elevation_m, rot_deg, tilt_deg, heading_deg], which then fill in the extrinsic parameter matrices and get used for backprojection, right?
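To make the question concrete, here is a generic illustration (not cameratransform's actual API) of pulling a maximum-a-posteriori estimate from a Metropolis trace. A sampler explores the posterior stochastically, while an optimizer like camera.fit(...) climbs the same objective deterministically, so with a short or poorly mixed chain the two can land on noticeably different values even from identical starting points:

```python
import numpy as np

# Toy 1-D Metropolis chain over a Gaussian log-probability with its
# optimum at x = 3.0. We record every accepted state and its log-prob,
# then take the highest-probability sample as the MAP estimate --
# the "values with higher probability" idea from the question above.
rng = np.random.default_rng(0)

def log_prob(x):
    return -0.5 * (x - 3.0) ** 2  # true optimum at x = 3.0

x = 0.0  # deliberately bad starting point, far from the optimum
trace, logps = [], []
for _ in range(5000):
    proposal = x + rng.normal(scale=0.5)
    # standard Metropolis acceptance: accept with prob min(1, p(prop)/p(x))
    if np.log(rng.uniform()) < log_prob(proposal) - log_prob(x):
        x = proposal
    trace.append(x)
    logps.append(log_prob(x))

map_estimate = trace[int(np.argmax(logps))]
```

With only a few hundred iterations the best visited sample can still sit well off the optimum, which may be part of why the default 1e3 iterations and the documentation's 1e4 give different-feeling results.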