
In the ASE dataset, how do I get the labels corresponding to the instances, and how do I convert depth and RGB images to point clouds? #1

Closed
zhangzscn opened this issue Jun 21, 2023 · 6 comments

Comments

@zhangzscn

No description provided.

@skanti

skanti commented Jun 22, 2023

Hi @zhangzscn,
In order to back-project points from the depth map, there are three steps:

  1. Unproject each pixel (u, v) of the raster into a ray direction using the unproject function
  2. Replace the z value of each unprojected ray with the depth value d of the corresponding pixel in the depth map (i.e. scale the ray so that z = d)
  3. Apply the rig and world transformations to move the point cloud into world space

In a later update, we are considering providing example code that performs this process.
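
In the meantime, here is a minimal numpy sketch of those three steps, assuming a simple linear (pinhole) model with hypothetical intrinsics fx, fy, cx, cy and a hypothetical 4x4 camera-to-world matrix T_world_camera standing in for the composed rig and world transforms; the actual ASE calibration objects and unproject function may differ:

import numpy as np

def backproject_depth(depth, fx, fy, cx, cy, T_world_camera):
    """Back-project a depth map of shape (H, W) into a world-space point cloud (N, 3).

    fx, fy, cx, cy: hypothetical pinhole intrinsics.
    T_world_camera: hypothetical 4x4 camera-to-world transform
                    (rig and world transforms composed).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Step 1: unproject pixels (u, v) into ray directions with z = 1.
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(depth)], axis=-1)

    # Step 2: scale each ray by the depth value d, so its z component equals d.
    points_camera = (rays * depth[..., None]).reshape(-1, 3)

    # Step 3: apply the rig/world transform to reach world space.
    points_h = np.concatenate(
        [points_camera, np.ones((points_camera.shape[0], 1))], axis=1
    )
    points_world = (T_world_camera @ points_h.T).T[:, :3]
    return points_world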

@zhangzscn
Author

How do I parse the instance images to obtain the corresponding object class? @skanti

@SamirAroudj

SamirAroudj commented Jul 10, 2023

Thank you for your interest in the dataset, @zhangzscn!
We will provide per-scene mappings from the object instance image IDs to object classes in the next update.
Given an instance image pixel value/object ID you will then be able to look up the class.
The mappings will likely be provided as JSON files (one per scene, or in a very similar format), as this example demonstrates:

instance_id_to_object_class = {
    "0": "empty_space",
    "1": "background",
    "2": "wall",
    "3": "wall",
    "4": "wall",
    "5": "ceiling",
    "6": "floor",
    "7": "wall",
    "8": "wall",
    "9": "wall",
    "10": "wall",
    "11": "ceiling",
    "12": "floor",
    "13": "wall",
    "14": "window",
    "15": "window",
    "16": "window",
    "17": "window",
    "18": "bed",
    "19": "bed",
    "20": "bed",
    "21": "cabinet",
    "22": "cabinet",
    "23": "chair",
    "24": "chair",
    "25": "chair",
    "26": "chair",
    "27": "chair",
    "28": "chair",
    "29": "clothes_rack",
    "30": "container_or_basket",
    "31": "dresser",
    "32": "fan",
    "33": "lamp",
    "34": "lamp",
    "35": "mirror",
    "36": "mirror",
    "37": "picture_frame_or_painting",
    "38": "picture_frame_or_painting",
    "39": "picture_frame_or_painting",
    "40": "picture_frame_or_painting",
    "41": "picture_frame_or_painting",
    "42": "picture_frame_or_painting",
    "43": "picture_frame_or_painting",
    "44": "pillow",
    "45": "plant_or_flower_pot",
    "46": "plant_or_flower_pot",
    "47": "plant_or_flower_pot",
    "48": "plant_or_flower_pot",
    "49": "rug",
    "50": "shelf",
    "51": "sofa",
    "52": "table",
    "53": "table",
    "54": "table",
    "55": "table",
    "56": "table",
    "57": "utilities",
    "58": "bed",
    "59": "bed",
    "60": "bed",
    "61": "cabinet",
    "62": "cabinet",
    "63": "chair",
    "64": "chair",
    "65": "chair",
    "66": "chair",
    "67": "chair",
    "68": "chair",
    "69": "clothes_rack",
    "70": "container_or_basket",
    "71": "door",
    "72": "door",
    "73": "dresser",
    "74": "fan",
    "75": "lamp",
    "76": "lamp",
    "77": "mirror",
    "78": "mirror",
    "79": "picture_frame_or_painting",
    "80": "picture_frame_or_painting",
    "81": "picture_frame_or_painting",
    "82": "picture_frame_or_painting",
    "83": "picture_frame_or_painting",
    "84": "picture_frame_or_painting",
    "85": "picture_frame_or_painting",
    "86": "pillow",
    "87": "plant_or_flower_pot",
    "88": "plant_or_flower_pot",
    "89": "plant_or_flower_pot",
    "90": "plant_or_flower_pot",
    "91": "rug",
    "92": "shelf",
    "93": "sofa",
    "94": "table",
    "95": "table",
    "96": "table",
    "97": "table",
    "98": "table",
    "99": "utilities",
    "100": "container_or_basket",
    "101": "door",
    "102": "door",
    "103": "table",
    "104": "container_or_basket",
    "105": "door",
    "106": "door",
    "107": "table",
    "108": "door",
    "109": "door",
    "110": "door",
    "111": "door",
    "112": "door",
    "113": "door",
    "114": "door",
    "115": "door",
    "116": "door",
    "117": "door",
    "118": "door"
}
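
Until then, a quick illustration of the lookup described above, assuming a hypothetical per-scene file instance_id_to_object_class.json and an instance image whose pixel values are the object IDs:

import json

import numpy as np
from PIL import Image

# Load the per-scene ID-to-class mapping (hypothetical file name).
with open("instance_id_to_object_class.json") as f:
    instance_id_to_object_class = json.load(f)

# Load an instance image; each pixel stores an object ID (hypothetical path).
instance_image = np.array(Image.open("instance_0000.png"))

# Look up the object class at a pixel of interest.
object_id = instance_image[240, 320]
object_class = instance_id_to_object_class[str(object_id)]
print(object_class)  # e.g. "chair"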

@zhangzscn
Author

Looking forward to it.

@SeaOtocinclus
Contributor

@zhangzscn If your questions have all been answered, please consider closing this issue. Otherwise, please continue to ask questions and we will do our best to reply. Thank you.

@anassmu
Copy link

anassmu commented Dec 13, 2023

If you still need help creating a point cloud from the RGB and depth information in ASE, the general approach is: load the images, undistort them (by defining a calibration instance that matches the camera model, fisheye or linear), then use the unproject function to get 3D points (scaled by depth), and finally add color information from the RGB or instance images.
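
A rough sketch of that pipeline, assuming hypothetical file names, depth stored in millimetres, and images already undistorted to a linear model with hypothetical pinhole intrinsics (in practice the dataset's own calibration objects and unproject function would replace the hand-rolled math here):

import numpy as np
from PIL import Image

# Hypothetical file names; both images are assumed already undistorted
# to a linear model via the dataset's calibration utilities.
depth = np.array(Image.open("depth_0000.png")).astype(np.float32) / 1000.0  # metres
rgb = np.array(Image.open("rgb_0000.png")).astype(np.float32) / 255.0

# Hypothetical pinhole intrinsics for the undistorted images.
fx = fy = 300.0
cx, cy = depth.shape[1] / 2.0, depth.shape[0] / 2.0

# Unproject every pixel into a unit-z ray, then scale by depth.
u, v = np.meshgrid(np.arange(depth.shape[1]), np.arange(depth.shape[0]))
rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(depth)], axis=-1)
points = (rays * depth[..., None]).reshape(-1, 3)

# Attach per-point color from the RGB image
# (swap in the instance image here to color points by object ID instead).
colors = rgb.reshape(-1, 3)
point_cloud = np.concatenate([points, colors], axis=1)  # (N, 6): xyz + rgb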
