
get_lidar_all #106

Closed
elhamAm opened this issue Jul 24, 2021 · 22 comments

@elhamAm commented Jul 24, 2021

Hello,

`def get_lidar_all(self, offset_with_camera=np.array([0, 0, 0])):`

The function get_lidar_all is not working: the camera does not turn during the 4 iterations, so the result is the same chair scene rotated 90 degrees four times and patched together.
I am trying to reconstruct a 360-degree scene by transforming the 3D streams into the global coordinate system and patching them together, but nothing is working. Please help.

@fxia22 (Collaborator) commented Jul 25, 2021

Hi @elhamAm, I cannot seem to reproduce your issue. I used the following script, a modified igibson/example/demo/lidar_velodyne_example.py, to generate 360-degree lidar scans. Can you share a script that reproduces your issue, or double-check whether your example differs from this one?

Script:

```python
from igibson.robots.turtlebot_robot import Turtlebot
from igibson.simulator import Simulator
from igibson.scenes.gibson_indoor_scene import StaticIndoorScene
from igibson.objects.ycb_object import YCBObject
from igibson.utils.utils import parse_config
from igibson.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
import numpy as np
from igibson.render.profiler import Profiler
import igibson
import os
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt


def main():
    config = parse_config(os.path.join(igibson.example_config_path, 'turtlebot_demo.yaml'))
    settings = MeshRendererSettings()
    s = Simulator(mode='gui',
                  image_width=256,
                  image_height=256,
                  rendering_settings=settings)

    scene = StaticIndoorScene('Rs',
                              build_graph=True,
                              pybullet_load_texture=True)
    s.import_scene(scene)
    turtlebot = Turtlebot(config)
    s.import_robot(turtlebot)

    turtlebot.apply_action([0.1, -0.1])
    s.step()
    lidar = s.renderer.get_lidar_all()
    print(lidar.shape)
    fig = plt.figure()
    ax = Axes3D(fig)
    ax.scatter(lidar[:, 0], lidar[:, 2], lidar[:, 1], s=3)
    plt.show()

    s.disconnect()


if __name__ == '__main__':
    main()
```

output:

[image: example_lidar_output]

@elhamAm (Author) commented Jul 27, 2021

Yes, it's working for me too. Sorry, and thank you very much! I must have changed a value in the rotation matrices by accident.
Can you explain what r2 and r3 are in the get_lidar_all function (iGibson/igibson/render/mesh_renderer/mesh_renderer_cpu.py, line 1390 at 5f8d253)? One is a rotation matrix about the z axis by pi/2; what about the other one? What is the relationship between them? They don't seem to be inverses of each other.

@fxia22 (Collaborator) commented Jul 27, 2021

You are correct that r2 rotates the camera by pi/2 about the z axis each time. The values returned by get_lidar_from_depth are in the camera frame, which is x-right, y-up, z-behind. So r3 rotates each reading about the y axis before they are concatenated together.
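
To make this concrete, here is a minimal sketch of the two matrices for the pi/2 step (the exact signs in mesh_renderer_cpu.py may differ; this only illustrates the axes involved):

```python
import numpy as np

angle = np.pi / 2  # four 90-degree steps cover the full 360 degrees

# r2 turns the camera's view direction about the world z axis (world up)
r2 = np.array([
    [np.cos(angle), -np.sin(angle), 0],
    [np.sin(angle),  np.cos(angle), 0],
    [0,              0,             1],
])

# r3 rotates the returned points about the camera y axis (camera up),
# because get_lidar_from_depth returns points in the camera frame
# (x-right, y-up, z-behind)
r3 = np.array([
    [np.cos(angle),  0, np.sin(angle)],
    [0,              1, 0],
    [-np.sin(angle), 0, np.cos(angle)],
])
```

They are not inverses of each other because they act about different axes: r2 operates in the world frame (z up), while r3 operates in the camera frame (y up).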

@elhamAm (Author) commented Jul 27, 2021

Thank you very much! What if we apply both translation and rotation to the camera? How would the transformation matrix look then, given the additional translation of the camera?

@elhamAm (Author) commented Jul 28, 2021

Would get_lidar_all() with translation look something like this? I don't know why the version below is not working:

```python
def get_lidar_all(self, offset_with_camera=np.array([0, 0, 0])):
    """
    Get complete LiDAR readings by patching together partial ones
    :return: complete 360 degree LiDAR readings
    """
    for instance in self.instances:
        if isinstance(instance, Robot):
            camera_pos = instance.robot.eyes.get_position()
            orn = instance.robot.eyes.get_orientation()
            print("orn: ", orn)
            mat = quat2rotmat(xyzw2wxyz(orn))[:3, :3]
            view_direction = mat.dot(np.array([1, 0, 0]))
            self.set_camera(camera_pos, camera_pos + view_direction, [0, 0, 1])

    original_fov = self.vertical_fov
    self.set_fov(90)
    lidar_readings = []
    view_direction = np.array([1, 0, 0])
    div = 128
    # step rotation about z (world up), applied to the view direction
    r2 = np.array(
        [[np.cos(-np.pi / div), -np.sin(-np.pi / div), 0],
         [np.sin(-np.pi / div), np.cos(-np.pi / div), 0],
         [0, 0, 1]])
    # step rotation about y (camera up), applied to the readings
    r3 = np.array(
        [[np.cos(-np.pi / div), 0, -np.sin(-np.pi / div)],
         [0, 1, 0],
         [np.sin(-np.pi / div), 0, np.cos(-np.pi / div)]])
    print("the multiplication: ", r3.dot(r2))
    transformation_matrix = np.eye(3)
    T3 = np.array([0.0, 0.0, 0.0])

    for i in range(256):
        print("camera position: ", self.camera)
        print("view direction: ", view_direction)
        self.set_camera(np.array(self.camera) + offset_with_camera,
                        np.array(self.camera) + offset_with_camera + view_direction, [0, 0, 1])
        lidar_one_view = self.get_lidar_from_depth()
        lidar_readings.append(lidar_one_view.dot(transformation_matrix) + T3)
        geomTrans = trimesh.PointCloud(lidar_one_view.dot(transformation_matrix))
        print("i: ", i)
        geomTrans.export("./read33/reading_" + str(i) + ".ply")
        view_direction = r2.dot(view_direction)
        T2 = np.array([+0.01, 0, 0])
        self.camera -= T2
        T3 -= np.array([-0.1, 0.0, 0.0])
        transformation_matrix = r3.dot(transformation_matrix)
```

@fxia22 (Collaborator) commented Aug 3, 2021

@elhamAm, you are right that the previous code doesn't handle rotation correctly: it only renders the lidar as if the scanner were parallel to the floor. I have handled it in an upcoming update but still need to test it more before merging it upstream. I also got rid of the need for multiple transformation matrices, so it should look cleaner and be easier to understand.

You can use the code here:
fxia22@c84eb73

@fxia22 (Collaborator) commented Aug 3, 2021

Same as before, you can use igibson/examples/demo/lidar_velodyne_example.py to test and visualize.

@elhamAm (Author) commented Aug 5, 2021

Hello @fxia22,
Thanks for your reply. I am trying to run the panorama2 branch from your repo; however, I don't have this file: data/ig_dataset/metadata/non_sampleable_categories.txt. What should I do?

File "/home/elham/fxia/gibson_demos/igibson/object_states/init.py", line 1, in
from igibson.object_states.aabb import AABB
File "/home/elham/fxia/gibson_demos/igibson/object_states/aabb.py", line 3, in
from igibson.external.pybullet_tools.utils import aabb_union, get_aabb, get_all_links
File "/home/elham/fxia/gibson_demos/igibson/external/pybullet_tools/utils.py", line 25, in
from igibson.utils.constants import OccupancyGridState
File "/home/elham/fxia/gibson_demos/igibson/utils/constants.py", line 37, in
with open(os.path.join(igibson.ig_dataset_path, "metadata/non_sampleable_categories.txt")) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/elham/fxia/gibson_demos/igibson/data/ig_dataset/metadata/non_sampleable_categories.txt'

@elhamAm (Author) commented Aug 5, 2021

And then, if I replace non_sampleable_categories.txt with categories.txt in utils/constants.py, I get the errors below:

```
exec(code, run_globals)
File "/home/elham/fxia/gibson_demos/igibson/examples/demo/lidar_velodyne_example.py", line 43, in <module>
    main()
File "/home/elham/fxia/gibson_demos/igibson/examples/demo/lidar_velodyne_example.py", line 20, in main
    s = Simulator(mode="headless", image_width=256, image_height=256, rendering_settings=settings)
File "/home/elham/fxia/gibson_demos/igibson/simulator.py", line 161, in __init__
    self.load()
File "/home/elham/fxia/gibson_demos/igibson/simulator.py", line 237, in load
    simulator=self,
File "/home/elham/fxia/gibson_demos/igibson/render/mesh_renderer/mesh_renderer_cpu.py", line 285, in __init__
    self.text_manager.gen_text_fbo()
File "/home/elham/fxia/gibson_demos/igibson/render/mesh_renderer/text.py", line 64, in gen_text_fbo
    self.FBO, self.render_tex = self.renderer.r.genTextFramebuffer()
AttributeError: 'igibson.render.mesh_renderer.EGLRendererContext.EG' object has no attribute 'genTextFramebuffer'
```

@fxia22 (Collaborator) commented Aug 5, 2021

Hi @elhamAm, you don't have to use that branch; you can just copy get_lidar_all and get_lidar_from_depth into your working branch.

@fxia22 (Collaborator) commented Aug 5, 2021

Actually, @elhamAm, you can just pull master, since those changes have already been applied to the master branch.

@elhamAm (Author) commented Aug 5, 2021

Thank you. Why is the rotation matrix 4x4 instead of 3x3? Would a translation vector then also be 1x4?

@fxia22 (Collaborator) commented Aug 5, 2021

The transformation matrix is usually 4x4 so that it can hold both rotation and translation; if the translation is zero, it reduces to a rotation matrix. I used it because it is convenient to multiply with the renderer's view matrix.

I guess you can read about these things further: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
http://www.songho.ca/opengl/gl_transform.html
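
As a minimal illustration of why the 4x4 form is convenient (the variable names below are made up, not from the iGibson code):

```python
import numpy as np

R = np.eye(3)                  # 3x3 rotation part
t = np.array([1.0, 0.0, 0.5])  # translation part

# pack both into one homogeneous 4x4 transform
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# points become homogeneous 4-vectors [x, y, z, 1], so a single matrix
# multiply applies rotation and translation at once, and T composes
# directly with the renderer's 4x4 view matrix
p = np.array([2.0, 0.0, 0.0, 1.0])
print(T.dot(p))                # -> [3.0, 0.0, 0.5, 1.0]
```

So the translation is not a separate 1x4 vector; it rides along in the last column of the 4x4 matrix.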

@elhamAm (Author) commented Aug 5, 2021

Thank you again.
Below is your code with a small change, so there is no need to take just the rotation part of the transformation matrix. I also changed r so that it applies translation as well, and the resulting 3D mesh does not look good: there are more chairs than the three there should be. You can find an image below.

```python
def get_lidar_all(self, offset_with_camera=np.array([0, 0, 0])):
    for instance in self.instances:
        if isinstance(instance, Robot):
            camera_pos = instance.robot.eyes.get_position() + offset_with_camera
            orn = instance.robot.eyes.get_orientation()
            mat = quat2rotmat(xyzw2wxyz(orn))[:3, :3]
            view_direction = mat.dot(np.array([1, 0, 0]))
            self.set_camera(camera_pos, camera_pos + view_direction, [0, 0, 1])

    original_fov = self.vertical_fov
    self.set_fov(90)
    lidar_readings = []
    rotAngle = -np.pi / 4
    n = 8
    r = np.array(
        [
            [np.cos(rotAngle), 0, -np.sin(rotAngle), 0.5],
            [0, 1, 0, 0],
            [np.sin(rotAngle), 0, np.cos(rotAngle), 0],
            [0, 0, 0, 1],
        ]
    )

    transformation_matrix = np.eye(4)
    for i in range(n):
        lidar_one_view = self.get_lidar_from_depth()
        print("shape: ", lidar_one_view.shape)
        col = np.ones(lidar_one_view.shape[0])
        lidar_one_viewNew = np.c_[lidar_one_view, col]
        lidar_readings.append(lidar_one_viewNew.dot(transformation_matrix)[:, :-1])

        # geomTrans = trimesh.PointCloud(lidar_one_viewNew.dot(transformation_matrix)[:, :-1])
        # print("i: ", i)
        # geomTrans.export("./read33/reading_" + str(i) + ".ply")

        self.V = r.dot(self.V)
        transformation_matrix = np.linalg.inv(r).dot(transformation_matrix)

    lidar_readings = np.concatenate(lidar_readings, axis=0)
    # currently, the lidar scan is in camera frame (z forward, x right, y up)
    # it seems more intuitive to change it to (z up, x right, y forward)
    lidar_readings = lidar_readings.dot(np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]]))

    self.set_fov(original_fov)
    return lidar_readings
```

[image: Screenshot from 2021-08-05 22-47-28]

@elhamAm (Author) commented Aug 5, 2021

[image: Screenshot from 2021-08-05 22-55-45]

@fxia22 (Collaborator) commented Aug 5, 2021

I think that when you overlap multiple scans, they don't end up aligned. You would need to convert the lidar scans from the camera frame to the world frame and then put them together.
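
For what it's worth, a minimal sketch of that conversion, assuming you can get the camera pose as a camera-to-world rotation R_wc and position t_wc (the helper below is hypothetical, not part of the iGibson API):

```python
import numpy as np

def camera_to_world(points_cam, R_wc, t_wc):
    """Map an (N, 3) scan from the camera frame into the world frame,
    given the camera-to-world rotation R_wc (3x3) and camera position
    t_wc (3,). Hypothetical helper, not part of the iGibson renderer."""
    return points_cam.dot(R_wc.T) + t_wc

# scans taken from several poses then overlap correctly:
# world = np.concatenate(
#     [camera_to_world(s, R, t) for s, (R, t) in zip(scans, poses)], axis=0)
```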

@elhamAm (Author) commented Aug 5, 2021

Then how come this was not an issue for rotation alone?

@elhamAm (Author) commented Aug 6, 2021

Also, what is self.V?

@elhamAm (Author) commented Aug 6, 2021

It's working now, thank you very much for your help :) There was no need for a frame change.

@fxia22 (Collaborator) commented Aug 6, 2021

Can you post your solution here? Thanks.

@elhamAm (Author) commented Aug 6, 2021

```python
def get_lidar_from_depth(self):
    """
    Get partial LiDAR readings from depth sensors with limited FOV
    :return: partial LiDAR readings with limited FOV
    """
    lidar_readings = self.render(modes=("3d"))[0]
    lidar_readings = lidar_readings[self.x_samples, self.y_samples, :3]
    dist = np.linalg.norm(lidar_readings, axis=1)
    lidar_readings = lidar_readings[dist > 0]
    lidar_readings[:, 2] = -lidar_readings[:, 2]  # make z pointing out
    return lidar_readings

def get_lidar_all(self, offset_with_camera=np.array([0, 0, 0])):
    """
    Get complete LiDAR readings by patching together partial ones
    :param offset_with_camera: optionally place the lidar scanner
        with an offset to the camera
    :return: complete 360 degree LiDAR readings
    """
    for instance in self.instances:
        if isinstance(instance, Robot):
            camera_pos = instance.robot.eyes.get_position() + offset_with_camera
            orn = instance.robot.eyes.get_orientation()
            mat = quat2rotmat(xyzw2wxyz(orn))[:3, :3]
            view_direction = mat.dot(np.array([1, 0, 0]))
            self.set_camera(camera_pos, camera_pos + view_direction, [0, 0, 1])

    original_fov = self.vertical_fov
    self.set_fov(90)
    lidar_readings = []
    rotAngle = -np.pi / 4
    n = 8
    # homogeneous 4x4 rotation about the camera y axis (up)
    r = np.array(
        [
            [np.cos(rotAngle), 0, -np.sin(rotAngle), 0],
            [0, 1, 0, 0],
            [np.sin(rotAngle), 0, np.cos(rotAngle), 0],
            [0, 0, 0, 1],
        ]
    )
    t = np.array([0, 0, 0.2])  # the middle one is up
    trlate = np.array([0.0, 0.0, 0.0])
    transformation_matrix = np.eye(4)
    s = n  # file-name offset for the exports of the second pass

    # first pass: rotate the view matrix n times and map each scan
    # back with the accumulated inverse transform
    for i in range(n):
        lidar_one_view = self.get_lidar_from_depth()
        col = np.ones(lidar_one_view.shape[0])
        lidar_one_viewNew = np.c_[lidar_one_view, col]
        lidar_readings.append(lidar_one_viewNew.dot(transformation_matrix)[:, :-1])

        geomTrans = trimesh.PointCloud(lidar_one_viewNew.dot(transformation_matrix)[:, :-1])
        geomTrans.export("./read33/reading_" + str(i) + ".ply")

        self.V = r.dot(self.V)
        transformation_matrix = np.linalg.inv(r).dot(transformation_matrix)

    # additive translation applied to the view matrix on each step
    # of the second pass
    t1 = np.array(
        [
            [0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, -0.2],
            [0, 0, 0, 0],
        ]
    )

    # second pass: translate the view matrix each step and shift the
    # returned points back by the accumulated offset
    for i in range(n):
        lidar_one_view = self.get_lidar_from_depth()
        lidar_one_view = lidar_one_view + trlate

        col = np.ones(lidar_one_view.shape[0])
        lidar_one_viewNew = np.c_[lidar_one_view, col]
        lidar_readings.append(lidar_one_viewNew.dot(transformation_matrix)[:, :-1])

        geomTrans = trimesh.PointCloud(lidar_one_viewNew.dot(transformation_matrix)[:, :-1])
        geomTrans.export("./read33/reading_" + str(s + i) + ".ply")

        self.V += t1
        trlate -= t

    lidar_readings = np.concatenate(lidar_readings, axis=0)
    # currently, the lidar scan is in camera frame (z forward, x right, y up)
    # it seems more intuitive to change it to (z up, x right, y forward)
    lidar_readings = lidar_readings.dot(np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]]))

    self.set_fov(original_fov)
    return lidar_readings
```

@fxia22 (Collaborator) commented Aug 6, 2021

Thanks. Closing this issue since the problem is solved.

@fxia22 closed this as completed Aug 6, 2021