
StreetHazards color encoding for visualization #15

Closed
ShravanthiPatil opened this issue Jul 30, 2021 · 6 comments

@ShravanthiPatil

Hello,
The color encoding used for visualizing the semantic maps of the StreetHazards dataset is unclear. Could you please help me identify the encoding being followed?
Lines 17-33 of create_dataset.py provide a list of colors used for StreetHazards, but they don't really match the colors on the generated semantic maps or the ground truth labels.
I believe the color encoding from ADE is not being used here.
Where can I find the exact color-coding that is being used?
Regards,
Shravanthi

@xksteven
Collaborator

I will double check on it later tonight. I believe the colors should be the same as what's on the semantic maps.

Also, you're correct that we do not follow the ADE color scheme, as you mentioned. For the ground truth labels or maps, the colors should correspond to uint values in the semantic segmentation maps. That is, the colors are converted from RGB to greyscale values 0, 1, 2, etc.
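
For illustration, the forward conversion looks roughly like the following. This is only a sketch, not the actual create_dataset.py code; the palette and filenames here are placeholders, and as noted in my next reply the stored labels actually start at 1, with 0 reserved as the ignore index.

import numpy as np
from PIL import Image

# Illustrative palette only; see create_dataset.py lines 17-33 for the real StreetHazards list.
palette = np.array([
    [ 70,  70,  70],  # building
    [128,  64, 128],  # road
    [244,  35, 232],  # sidewalk
])

# Hypothetical input/output filenames, just to show the mapping.
rgb = np.array(Image.open("example_annotation.png").convert("RGB"))
label = np.zeros(rgb.shape[:2], dtype=np.uint8)
for idx, color in enumerate(palette, start=1):  # start at 1 so 0 stays the ignore index
    label[np.all(rgb == color, axis=-1)] = idx
Image.fromarray(label).save("example_label.png")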

@xksteven
Collaborator

xksteven commented Jul 31, 2021

Hey, so you can use the script below to undo the conversion from uint values back to colors. The main difference I had forgotten is that the counter starts at 1. The semantic segmentation PyTorch code ignores index 0, and this was our hack to get the code to learn the background class. I attached one of the examples below to demonstrate the conversion back to the original colors.

import os

import numpy as np
from PIL import Image as image

root = "annotations/test/t5/"
test_images = os.listdir(root)

# StreetHazards colors
colors = np.array([
    [  0,   0,   0],  # unlabeled     =  0
    [ 70,  70,  70],  # building      =  1
    [190, 153, 153],  # fence         =  2
    [250, 170, 160],  # other         =  3
    [220,  20,  60],  # pedestrian    =  4
    [153, 153, 153],  # pole          =  5
    [157, 234,  50],  # road line     =  6
    [128,  64, 128],  # road          =  7
    [244,  35, 232],  # sidewalk      =  8
    [107, 142,  35],  # vegetation    =  9
    [  0,   0, 142],  # car           = 10
    [102, 102, 156],  # wall          = 11
    [220, 220,   0],  # traffic sign  = 12
    [ 60, 250, 240],  # anomaly       = 13
])

for im_path in test_images:
    im = image.open(root + im_path)
    pic = np.array(im)
    new_img = np.zeros((720, 1280, 3))
    # Stored labels are offset by one (0 is ignored during training),
    # so label value (index + 1) maps back to colors[index].
    for index, color in enumerate(colors):
        new_img[pic == (index + 1)] = color
    # Convert and save once per image, after all classes are painted.
    out = image.fromarray(new_img.astype('uint8'), 'RGB')
    out.save(root + "rgb_" + str(im_path))

Example 300 converted back into standard RGB colors (attached image: rgb_300).

Hope you find the dataset helpful :)

@ShravanthiPatil
Author

Hi Steven,
Thank you for the quick response. Yes, the dataset is extremely helpful, thank you! :)
I still have some questions. On running eval_ood.py, the semantic maps are generated and concatenated with the corresponding input and ground truth. I see the colors don't match the image you attached (image 300 in the t5 folder); I believe the color encoding is from ADE instead, hence the confusion. (Attached image: 300.)

The colors used for road, sidewalk, fence, building, and novel objects vary (attached image: 300). Also, the color encoding list you provided does not include sky. Is there a specific reason? Please clarify.

@xksteven
Collaborator

I can update the eval_ood.py script. As mentioned in the earlier post, the colors had an off-by-one issue, so there were no zeros output in the labels.

The reason we couldn't include sky was due to time and resource constraints. Adding a semantic label in Carla for the rays that hit infinity ("sky") was possible but hard to finish by the time the dataset was released. We were helping in its development, but we didn't want to delay the paper and dataset by months or possibly over a year to add one extra label. Hope that helps clarify why we didn't include sky.

@ShravanthiPatil
Author

Hi Steven,
Thank you for clarifying! I will look forward to the changes in the eval_ood.py script.
Regards,
Shravanthi

@xksteven
Collaborator

xksteven commented Aug 1, 2021

Could you add [0,0,0] here and then rerun it? I think that should fix the issue.

You'd only need to rerun the overwriting of color150.mat here.

I won't be able to test it until tomorrow. I think that's the quickest way to recreate our color scheme.
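
Roughly, the idea is something like this. It is only a sketch of the suggestion, not the actual code behind those links, and it assumes color150.mat stores the palette under the key "colors" (as in the upstream semantic-segmentation-pytorch setup); the output path is hypothetical.

import numpy as np
from scipy.io import savemat

# Prepend [0, 0, 0] so the palette lines up with the 1-indexed StreetHazards labels
# (index 0, the ignore/unlabeled slot, then renders as black).
streethazards_colors = np.array([
    [  0,   0,   0],  # unlabeled / ignore
    [ 70,  70,  70],  # building
    [190, 153, 153],  # fence
    [250, 170, 160],  # other
    [220,  20,  60],  # pedestrian
    [153, 153, 153],  # pole
    [157, 234,  50],  # road line
    [128,  64, 128],  # road
    [244,  35, 232],  # sidewalk
    [107, 142,  35],  # vegetation
    [  0,   0, 142],  # car
    [102, 102, 156],  # wall
    [220, 220,   0],  # traffic sign
    [ 60, 250, 240],  # anomaly
], dtype=np.uint8)

# Hypothetical output path; point this at wherever eval_ood.py reads color150.mat from.
savemat("data/color150.mat", {"colors": streethazards_colors})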
