
What is the camera rotation for panorama images? #9

Closed
kimren227 opened this issue Mar 11, 2020 · 6 comments
Labels
question Further information is requested

Comments

@kimren227

Hi,

First of all, I want to thank you for this great dataset.
I'm trying to generate point clouds from the panorama images, but only the camera's xyz location is provided. Where can I find the camera rotation parameters?
If I assume every camera has the same rotation, the generated point clouds are mis-stitched.

Thanks,

Daxuan

[screenshot: mis-stitched point cloud]

@bertjiazheng added the "question" label Mar 11, 2020
@bertjiazheng
Owner

Hi Daxuan,

Thanks for your interest in our dataset. The direction of the camera in the panorama is always along the negative y-axis. Please make sure your alignment process is correct.
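
As a quick sanity check of that convention, here is a minimal sketch (my own illustration, not part of the dataset toolbox; the function name is made up) that maps an equirectangular pixel to a unit viewing direction, assuming the image center looks along the negative y-axis and z points up:

```python
import numpy as np

def pano_pixel_to_direction(u, v, width, height):
    """Map pixel (u, v) of an equirectangular panorama to a unit ray.

    Assumes the image center looks along the negative y-axis and the
    z-axis points up (the convention stated above).
    """
    lon = (u / width - 0.5) * 2.0 * np.pi   # longitude: 0 at the center column
    lat = (0.5 - v / height) * np.pi        # latitude: +pi/2 at the top row
    return np.array([np.cos(lat) * np.sin(lon),
                     -np.cos(lat) * np.cos(lon),
                     np.sin(lat)])
```

With this mapping, the center pixel of a 1024x512 panorama gives (0, -1, 0), i.e. the negative y-axis.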

Best,
Jia

@kimren227
Author

Hi Jia,

Thanks for the reply and suggestion.
I've found the issue. I made a mistake when converting an individual image to a point cloud.

Daxuan

@micaeltchapmi

Hi Daxuan,

Hope you are doing well. Can you help me understand how you generated this point cloud from the panorama? Specifically, how did you use the camera parameters to get the point cloud, given that only the camera location is provided for the panoramas?

Also, I am trying to generate a point cloud from the perspective images, but my point cloud is distorted as shown below. The point cloud below is from scene 00000, room_id 485142, position_id 0, and I used the function rbgd_to_world together with the camera intrinsics and extrinsics. Any help would be greatly appreciated.
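
For reference, this is roughly the unprojection I am using (a sketch of the standard pinhole model; here K is the 3x3 intrinsic matrix and [R | t] the world-to-camera extrinsics, which are my assumptions and may not match the conventions rbgd_to_world expects):

```python
import numpy as np

def depth_to_world(depth, K, R, t):
    """Unproject a perspective depth map to world coordinates.

    Assumes depth is the distance along the camera z-axis, K is the
    3x3 intrinsic matrix, and a world point p_w maps to the camera
    frame as p_c = R @ p_w + t.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # camera-frame rays with z = 1
    cam_pts = rays * depth.reshape(-1, 1)    # scale each ray by its depth
    world_pts = (cam_pts - t) @ R            # invert world->camera: R^T (p_c - t)
    return world_pts
```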

[screenshot: distorted point cloud]

Thanks!

Micael

@kimren227
Author

Hi Micael,

I'm using the panorama depth images to generate the point cloud. It's quite simple: take the image center as the origin of spherical coordinates, map the image width to 360 degrees and the height to 180 degrees, then apply some trigonometry to convert each pixel into Cartesian coordinates.

I pulled the code below from my old backups and don't know whether it contains bugs, since I don't have access to my school computer due to COVID-19. Feel free to try it and let me know if you run into any problems.

```python
import cv2
import numpy as np
import open3d
import os
from sklearn.preprocessing import normalize

# Number of camera positions to load per scene; None means all of them.
NUM_SECTIONS = None


class PointCloudReader():

    def __init__(self, path, resolution="full", random_level=0,
                 generate_color=False, generate_normal=False):
        self.path = path
        self.random_level = random_level
        self.resolution = resolution
        self.generate_color = generate_color
        self.generate_normal = generate_normal
        sections = os.listdir(os.path.join(path, "2D_rendering"))[:NUM_SECTIONS]
        self.depth_paths = [os.path.join(path, "2D_rendering", p, "panorama", self.resolution, "depth.png") for p in sections]
        self.rgb_paths = [os.path.join(path, "2D_rendering", p, "panorama", self.resolution, "rgb_coldlight.png") for p in sections]
        self.normal_paths = [os.path.join(path, "2D_rendering", p, "panorama", self.resolution, "normal.png") for p in sections]
        self.camera_paths = [os.path.join(path, "2D_rendering", p, "panorama", "camera_xyz.txt") for p in sections]
        self.camera_centers = self.read_camera_center()
        self.point_cloud = self.generate_point_cloud(
            self.random_level, color=self.generate_color, normal=self.generate_normal)

    def read_camera_center(self):
        # Each camera_xyz.txt holds a single line: "x y z" in world coordinates.
        camera_centers = []
        for camera_path in self.camera_paths:
            with open(camera_path, 'r') as f:
                line = f.readline()
            center = list(map(float, line.strip().split(" ")))
            camera_centers.append(np.asarray(center[:3]))
        return camera_centers

    def generate_point_cloud(self, random_level=0, color=False, normal=False):
        coords = []
        colors = []
        normals = []
        points = {}
        # Convert each depth panorama to world-space coordinates.
        for i in range(len(self.depth_paths)):
            depth_img = cv2.imread(self.depth_paths[i], cv2.IMREAD_ANYDEPTH)
            x_tick = 180.0 / depth_img.shape[0]
            y_tick = 360.0 / depth_img.shape[1]
            for x in range(depth_img.shape[0]):
                for y in range(depth_img.shape[1]):
                    # alpha (elevation) runs from 90 at the top row to -90 at the bottom
                    alpha = 90 - (x * x_tick)
                    # beta (azimuth) runs from -180 to 180 across the width
                    beta = y * y_tick - 180
                    depth = depth_img[x, y] + np.random.random() * random_level
                    z_offset = depth * np.sin(np.deg2rad(alpha))
                    xy_offset = depth * np.cos(np.deg2rad(alpha))
                    x_offset = xy_offset * np.sin(np.deg2rad(beta))
                    y_offset = xy_offset * np.cos(np.deg2rad(beta))
                    point = np.asarray([x_offset, y_offset, z_offset])
                    coords.append(point + self.camera_centers[i])
        points['coords'] = np.asarray(coords)
        if color:
            # Attach the RGB color of the matching panorama pixel.
            for rgb_path in self.rgb_paths:
                rgb_img = cv2.imread(rgb_path)
                for x in range(rgb_img.shape[0]):
                    for y in range(rgb_img.shape[1]):
                        colors.append(rgb_img[x, y])
            points['colors'] = np.asarray(colors) / 255.0
        if normal:
            # Attach unit normals decoded from the normal map.
            for normal_path in self.normal_paths:
                normal_img = cv2.imread(normal_path)
                for x in range(normal_img.shape[0]):
                    for y in range(normal_img.shape[1]):
                        # reshape(1, -1) so sklearn normalizes the pixel as one 3-vector
                        normals.append(normalize(normal_img[x, y].reshape(1, -1)).ravel())
            points['normals'] = np.asarray(normals)
        return points

    def get_point_cloud(self):
        return self.point_cloud

    def visualize(self):
        pcd = open3d.geometry.PointCloud()
        pcd.points = open3d.utility.Vector3dVector(self.point_cloud['coords'])
        if self.generate_normal:
            pcd.normals = open3d.utility.Vector3dVector(self.point_cloud['normals'])
        if self.generate_color:
            pcd.colors = open3d.utility.Vector3dVector(self.point_cloud['colors'])
        open3d.visualization.draw_geometries([pcd])

    def export_ply(self, path):
        # Writes an ASCII PLY file: a header such as
        #   ply
        #   format ascii 1.0
        #   element vertex <N>
        #   property float x / y / z [uchar red/green/blue] [float nx/ny/nz]
        #   end_header
        # followed by one "x y z [r g b] [nx ny nz]" line per point.
        with open(path, "w") as f:
            f.write("ply\n")
            f.write("format ascii 1.0\n")
            f.write("element vertex %d\n" % self.point_cloud['coords'].shape[0])
            f.write("property float x\n")
            f.write("property float y\n")
            f.write("property float z\n")
            if self.generate_color:
                f.write("property uchar red\n")
                f.write("property uchar green\n")
                f.write("property uchar blue\n")
            if self.generate_normal:
                f.write("property float nx\n")
                f.write("property float ny\n")
                f.write("property float nz\n")
            f.write("end_header\n")
            for i in range(self.point_cloud['coords'].shape[0]):
                normal = []
                color = []
                coord = self.point_cloud['coords'][i].tolist()
                if self.generate_color:
                    color = list(map(int, (self.point_cloud['colors'][i] * 255).tolist()))
                if self.generate_normal:
                    normal = self.point_cloud['normals'][i].tolist()
                data = coord + color + normal
                f.write(" ".join(map(str, data)) + '\n')


if __name__ == "__main__":
    scene_path = r"D:\Structured3D"
    scenes = [os.path.join(scene_path, i) for i in os.listdir(scene_path)]
    for scene in scenes:
        filename = os.path.basename(scene) + ".ply"
        print(filename)
        reader = PointCloudReader(scene, random_level=10,
                                  generate_color=True, generate_normal=False)
        # reader.visualize()
        reader.export_ply(os.path.join(r"D:\PointClouds", filename))
```

Hope it helps,

Daxuan

@micaeltchapmi

Hi Daxuan,

I was able to generate the point cloud using your code. Thanks a lot for the explanation and the code. I really appreciate your help.

Best,

Micael

@RauchLukas

Hi,

I just replicated @kimren227's code and was wondering why there are these strong artifacts along the line of sight to the windows. I wasn't expecting that in a synthetic dataset.

Is this due to some smoothing applied to the depth image?
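
In case it's useful, my current workaround is to drop pixels that sit on a sharp depth discontinuity before unprojecting. This is only a guess that the streaks are mixed foreground/background values at the window edges, not a confirmed cause, and the threshold below is arbitrary:

```python
import numpy as np

def depth_edge_mask(depth, max_jump=200):
    """Return a boolean mask that is False on sharp depth discontinuities.

    A pixel is rejected if its depth differs from any 4-neighbor by more
    than max_jump (same units as the depth map; the value is a guess).
    """
    mask = np.ones_like(depth, dtype=bool)
    dx = np.abs(np.diff(depth.astype(np.int64), axis=0))  # vertical jumps
    dy = np.abs(np.diff(depth.astype(np.int64), axis=1))  # horizontal jumps
    mask[:-1, :] &= dx < max_jump
    mask[1:, :] &= dx < max_jump
    mask[:, :-1] &= dy < max_jump
    mask[:, 1:] &= dy < max_jump
    return mask
```

Applying the mask to the coordinate, color, and normal arrays before building the point cloud removes the flying points along those sight lines for me.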
