I have camera positions and rotations from a camera alignment (4x4 transformation matrices). Visualizing them with Open3D works fine. The following code produces the scene below, with the object in the center of the cameras and the RGB axes marking the origin of the scene.
Now I want to import those cameras into MeshLab for further processing. For that purpose I've written a script that creates a MeshLab project file (.mlp). You can find the code in the repository linked in this question, but it's not important for the issue.
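(The original visualization snippet is not reproduced here; the following is a minimal sketch of such a scene, assuming a list of 4x4 camera-to-world matrices. `camera_axes_world` and `show_cameras` are hypothetical names, and `open3d` is imported lazily inside the drawing function.)

```python
import numpy as np

def camera_axes_world(cam_to_world, size=0.1):
    """World-space endpoints of a camera's local x/y/z axes (for drawing)."""
    origin = cam_to_world[:3, 3]
    return origin, [origin + size * cam_to_world[:3, i] for i in range(3)]

def show_cameras(cam_to_world_matrices):
    import open3d as o3d  # assumed to be installed, as in the question
    # large RGB triad at the scene origin
    geoms = [o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.5)]
    for M in cam_to_world_matrices:
        # small RGB triad placed at each camera pose
        frame = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.1)
        frame.transform(M)
        geoms.append(frame)
    o3d.visualization.draw_geometries(geoms)
```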
Opening this generated project.mlp file misplaces the cameras, as you can see in the image below:
It seems as if the cameras are mirrored on the Z-axis and rotated by 180 degrees. Why does that happen?
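For intuition: if "mirrored on the Z-axis" means the z coordinate is negated and the 180-degree rotation is about the Z axis (both assumptions read off the screenshot), the two operations compose to a point reflection through the origin, i.e. every coordinate negated. A sign pattern like this usually points to a camera-axis convention mismatch (often a diag(1, -1, -1) flip between computer-vision-style and OpenGL-style cameras) or to the matrix being interpreted as world-to-camera rather than camera-to-world. A quick numpy check of the composition:

```python
import numpy as np

mirror_z = np.diag([1.0, 1.0, -1.0])    # negate z
rot_180_z = np.diag([-1.0, -1.0, 1.0])  # 180 degrees about z negates x and y
combined = rot_180_z @ mirror_z

print(combined)  # -> diagonal matrix with -1 everywhere: -I
assert np.allclose(combined, -np.eye(3))
```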
Minimal reproduction
You can try it yourself by cloning this repository:

    git clone https://github.com/flolu/meshlab-camera-transformation
    conda env create -n meshlab-camera-transformation -f conda.yml
    conda activate meshlab-camera-transformation
    python visualize_cameras.py   (visualize the correct Open3D scene)
    python main.py                (generate the MeshLab project file)

Then open the generated project.mlp file in MeshLab and look at the cameras by following these instructions:

- Scale the cameras by opening the "Show Camera" toggle at the bottom right of the screen, setting "Camera Scale Method" to "Fixed Factor" and entering 0.005 as "Scale Factor"
- Zoom out
- Render ➝ Show Axis
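For debugging the generated project file it helps to look at what is written per camera. A hedged sketch of serializing one raster entry follows; the attribute names are recalled from .mlp files saved by MeshLab itself and should be verified by diffing against a small project that MeshLab saved, and the intrinsics here are placeholders:

```python
import numpy as np

# NOTE: element and attribute names are an assumption recalled from .mlp
# files written by MeshLab; confirm against a file MeshLab itself saved.
MLP_RASTER = """\
<MLRaster label="{label}">
 <VCGCamera TranslationVector="{tx} {ty} {tz} 1"
            RotationMatrix="{rot}"
            FocalMm="35"
            ViewportPx="1920 1080"
            PixelSizeMm="0.0369 0.0369"
            CenterPx="960 540"
            LensDistortion="0 0"/>
 <Plane semantic="1" fileName="{label}"/>
</MLRaster>"""

def raster_xml(label, cam_to_world):
    """Serialize one camera. Whether MeshLab wants this matrix or its
    inverse (world-to-camera) is exactly the open question of this issue."""
    t = cam_to_world[:3, 3]
    R4 = np.eye(4)
    R4[:3, :3] = cam_to_world[:3, :3]
    rot = " ".join(str(float(v)) for v in R4.flatten())
    return MLP_RASTER.format(label=label, tx=float(t[0]), ty=float(t[1]),
                             tz=float(t[2]), rot=rot)

print(raster_xml("image_0.png", np.eye(4)))
```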
What I've tried
I've tried to transform the cameras back to their initial positions and rotations like this:
import math
import numpy as np

for transformation_matrix in camera_transformation_matrices:
    # flip z value
    transformation_matrix[2, 3] *= -1

    # swap y and z rotation
    swap_y_and_z = np.array([[0, 0, 1],
                             [0, 1, 0],
                             [1, 0, 0]])
    transformation_matrix[:3, :3] = np.matmul(
        swap_y_and_z, transformation_matrix[:3, :3])

    # rotate transformation_matrix 90 degrees about the y axis of the camera
    rotate_90_around_y_axis = np.array(
        [[math.cos(-math.pi / 2), 0, math.sin(-math.pi / 2)],
         [0, 1, 0],
         [-math.sin(-math.pi / 2), 0, math.cos(-math.pi / 2)]])
    T = np.eye(4)
    T[:3, :3] = rotate_90_around_y_axis
    T[:3, 3] = transformation_matrix[:3, 3] - \
        np.matmul(rotate_90_around_y_axis, transformation_matrix[:3, 3])
    transformation_matrix[:4, :4] = np.matmul(T, transformation_matrix)
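The T built in the last step of the loop is the standard "rotate about a point" construction: translate the camera position p to the origin, rotate, translate back, so the camera stays in place while its orientation changes. A pure-numpy check of that identity (the helper names and the example p are hypothetical; the rotation is the same -90-degree y-rotation used in the loop above):

```python
import numpy as np

def translate(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

def rotate_about_point(R, p):
    """4x4 matrix applying the 3x3 rotation R about the point p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p - R @ p  # same formula as T[:3, 3] in the loop above
    return T

# -90 degrees about y (cos(-pi/2)=0, sin(-pi/2)=-1) and an arbitrary point
R = np.array([[0.0, 0.0, -1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
p = np.array([1.0, 2.0, 3.0])

R4 = np.eye(4)
R4[:3, :3] = R
# identical to: translate p to origin, rotate, translate back
assert np.allclose(rotate_about_point(R, p), translate(p) @ R4 @ translate(-p))
# and the point p itself is left fixed
assert np.allclose(rotate_about_point(R, p) @ np.append(p, 1.0), np.append(p, 1.0))
```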
The result in MeshLab looks really promising:
But when I look through the cameras by clicking the "Show Current Raster Mode" button and switching through the images on the right, the pictures are not aligned with the mesh. In fact, you cannot see the mesh at all in most of the pictures. That doesn't make sense, since the cameras all point towards the mesh.
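One way to sanity-check the transformed matrices before blaming MeshLab is to test, per camera, whether the viewing direction still points at the object. A hedged sketch (`looks_at` is a hypothetical helper), assuming camera-to-world matrices and an OpenGL-style camera looking down its local -Z axis; if the convention is +Z, drop the minus sign:

```python
import numpy as np

def looks_at(cam_to_world, target, max_angle_deg=45.0):
    """True if the camera's viewing direction is within max_angle_deg of target."""
    forward = -cam_to_world[:3, 2]  # -Z column: OpenGL-style camera (assumption)
    to_target = target - cam_to_world[:3, 3]
    cos_angle = np.dot(forward, to_target) / (
        np.linalg.norm(forward) * np.linalg.norm(to_target))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= max_angle_deg

# camera at (0, 0, 2) looking down -z toward the origin
cam = np.eye(4)
cam[2, 3] = 2.0
print(looks_at(cam, np.array([0.0, 0.0, 0.0])))  # -> True
```

Running this over the original and the transformed matrices would show directly whether the correction flipped the cameras away from the mesh.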
You can try it yourself by running python failed_try.py and opening the generated project_failed_try.mlp file in MeshLab.
Hi, I just don't know how to help you right now. I guess that this is a bug in the code, but I am not sure.
I did not write the code that manages cameras in vcg/meshlab, and it probably needs a proper review/refactoring, since we received several other reports about inconsistencies on this subject.