Semantic annotation for bounding boxes? #13
Hi @neyrinck, Currently, the semantic annotations are only saved in semantic.png. You can infer the semantic label for each 3D bounding box from semantic.png and instance.png in the perspective images. Best,
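A minimal sketch of that lookup, assuming semantic.png and instance.png decode to integer label maps of the same shape (the function name and the array-based interface are hypothetical, not part of the dataset's tooling):

```python
import numpy as np

def instance_semantic_labels(semantic, instance):
    """Map each instance id to the most frequent semantic label among
    that instance's pixels. `semantic` and `instance` are integer
    arrays of identical shape; the exact pixel encodings of
    semantic.png / instance.png are assumptions here."""
    labels = {}
    for inst_id in np.unique(instance):
        mask = instance == inst_id
        values, counts = np.unique(semantic[mask], return_counts=True)
        labels[int(inst_id)] = int(values[np.argmax(counts)])
    return labels
```

Matching the resulting instance ids back to the box ids in bbox_3d.json is a further step whose exact correspondence would need to be checked against the dataset documentation.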
Thanks! And (it seems) from the panoramic images, too, which is easier!
The current version of the dataset does not include instance.png for the panoramic images, but we are currently working on it. Hope this helps.
Ah right, thank you ... but sorry if I am missing something: is instance.png necessary for inferring the semantic labels? I imagined I could just ignore wall/floor/ceiling etc. categories from semantic.png.
You can get an accurate label for each 3D bounding box using the instance labels. Without them, the semantic label can be determined by majority voting within each bounding box.
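The majority-voting fallback could be sketched as follows, assuming the 3D box has already been projected to a 2D pixel rectangle in the image and that `semantic` is an integer label map. The function name and the `ignore` parameter (for skipping structural classes such as wall/floor/ceiling, as suggested above) are hypothetical:

```python
import numpy as np

def bbox_majority_label(semantic, box, ignore=()):
    """Return the most frequent semantic label inside a 2D box.

    semantic : 2D integer array of per-pixel labels.
    box      : (x0, y0, x1, y1) pixel rectangle, exclusive upper bounds.
    ignore   : labels to exclude from the vote (e.g. structural classes).
    """
    x0, y0, x1, y1 = box
    crop = semantic[y0:y1, x0:x1].ravel()
    crop = crop[~np.isin(crop, list(ignore))]
    values, counts = np.unique(crop, return_counts=True)
    return int(values[np.argmax(counts)])
```

Note that, as discussed in this thread, voting over a box is only an approximation: occluders inside the box can outvote the actual object, which is why the instance masks give more reliable labels.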
Hi @bertjiazheng! Great dataset indeed! I ran into the same issue when exploring the data: having semantic labels directly associated with the 3D bounding boxes would be much more helpful. It would also likely be more accurate to do the association from the source data you used to produce the dataset than to infer it from panoramic pictures, which may hide much of the information because of object occlusion. As far as you know, has any user tried to do this completely? @neyrinck, @micaeltchapmi, did either of you succeed in computing this inference? Could you please consider providing these annotations? This would really help in working with the dataset, and would not require any new information in itself. Thank you very much for your time and consideration.
Hi, great dataset! I was wondering if the object semantic annotations, which currently seem to live only in the semantic.png files, could somehow be inferred from the 3D bounding box IDs in bbox_3d.json, or from any other 3D data? If not, could this easily be added? Thank you!!!