I'm currently performing inference on BEVFusion using a different dataset, but there seems to be an issue with the lidar2image matrix I've computed. I'd like to understand how the lidar2image matrix is calculated in the provided nuScenes example. Could you please guide me through the process?
I was recently able to solve this issue. lidar2image is the lidar-to-image reprojection matrix: it projects lidar points into image pixel coordinates. You can follow this link and read more about it on Matlab's website. Now comes the most important part: how to calculate this matrix. First, you need the camera intrinsic matrix K, which maps points in the camera frame to pixel coordinates. Second, you need the lidar2camera extrinsic matrix T, which maps points from the lidar frame to the camera frame. The projection matrix is then computed as P = K @ T, where K is the intrinsic matrix and T is the lidar2camera extrinsic matrix. Hope this gives you enough information!
P.S. I'm not entirely sure about the direction of the extrinsic matrix, i.e. lidar2camera vs. camera2lidar. Please experiment with the nuScenes dataset and check which direction makes the projections line up!