I'm wondering how to convert a COLMAP result into a `camera_path.json` for rendering.

I tried the script in `colmap_to_json.py` to transform the qvec and tvec into a c2w matrix:

```python
R = qvec2rotmat(qvec)
t = tvec.reshape([3, 1])
w2c = np.concatenate([R, t], 1)
w2c = np.concatenate([w2c, np.array([[0, 0, 0, 1]])], 0)
c2w = np.linalg.inv(w2c)
c2w[0:3, 1:3] *= -1
c2w = c2w[np.array([1, 0, 2, 3]), :]
c2w[2, :] *= -1
```

I then write the result into `camera_path.json`, like this:

[screenshot: camera_path.json contents]

However, the rendered result differs from what I expect, and the camera ends up in the wrong position. Does anyone know the correct conversion between a COLMAP trajectory and a NeRF render path?
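For reference, the snippet above can be made self-contained like this. `qvec2rotmat` is reimplemented here (COLMAP stores quaternions as `(w, x, y, z)`), and the axis flips mirror the quoted `colmap_to_json.py` code; this is a sketch of that conversion, so verify it against your nerfstudio version before relying on it:

```python
import numpy as np

def qvec2rotmat(qvec):
    """COLMAP-convention quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = qvec
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

def colmap_to_c2w(qvec, tvec):
    """World-to-camera (COLMAP) -> camera-to-world (NeRF/OpenGL convention)."""
    R = qvec2rotmat(np.asarray(qvec, dtype=np.float64))
    t = np.asarray(tvec, dtype=np.float64).reshape(3, 1)
    w2c = np.concatenate([np.concatenate([R, t], axis=1),
                          np.array([[0.0, 0.0, 0.0, 1.0]])], axis=0)
    c2w = np.linalg.inv(w2c)
    # COLMAP/OpenCV cameras look down +z with +y down; NeRF tooling expects
    # -z forward with +y up, so flip the camera y and z axes ...
    c2w[0:3, 1:3] *= -1
    # ... and apply the same world-axis permutation/flip as the quoted script.
    c2w = c2w[np.array([1, 0, 2, 3]), :]
    c2w[2, :] *= -1
    return c2w
```

Note this only handles the coordinate-convention change; if the trained model was additionally re-centered or re-scaled, the poses still need that transform on top.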
*Ting-Wei-Chang626 changed the title from "conversion between the colmap qvec and tvec and NeRF render json" to "conversion between the colmap qvec, tvec and NeRF render json" on Jan 31, 2024.*
In my case, I use nerfstudio to visualize the result from Gaussian Splatting. The scale and origin of the imported point cloud are the same as in the original COLMAP reconstruction.
Does nerfstudio change the origin and scale while rendering the trained result?
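It may. Nerfstudio's data parsers typically re-orient, re-center, and re-scale the input poses before training, and (in recent versions, as far as I know) save the applied transform next to the checkpoints as `dataparser_transforms.json` with a 3x4 `"transform"` and a scalar `"scale"`. A minimal sketch of applying that file to a COLMAP-frame c2w pose, assuming that JSON layout (check the file your install actually writes, since this format is not a stable public API):

```python
import json
import numpy as np

def apply_dataparser_transform(c2w, transforms_path="dataparser_transforms.json"):
    """Map a COLMAP-frame c2w pose into the frame the model was trained in.

    Assumes transforms_path holds {"transform": 3x4 list, "scale": float},
    the layout nerfstudio writes alongside its checkpoints (verify locally).
    """
    with open(transforms_path) as f:
        meta = json.load(f)
    # Promote the 3x4 transform to a homogeneous 4x4 matrix.
    T = np.vstack([np.asarray(meta["transform"], dtype=np.float64),
                   [0.0, 0.0, 0.0, 1.0]])
    out = T @ np.asarray(c2w, dtype=np.float64)  # re-orient / re-center
    out[:3, 3] *= float(meta["scale"])           # rescale translation only
    return out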