
Some doubts about pseudo labels #7

Closed
Jianghui-Wang opened this issue May 9, 2022 · 2 comments
@Jianghui-Wang
Hi, I am pseudo-labeling ImageNet-1k and have run into some difficulties.

Firstly, what happens when there are more than 255 semseg classes? How can a single-channel 8-bit PNG represent them? (COCO has only 80 classes, but ImageNet has more than 255 classes for fine-tuning.)

Secondly, in the Colab notebook example, the RGB-to-depth DPT model cannot take ImageNet images of arbitrary size. How can we save all the pseudo labels before data augmentation crops them to 224x224? We need to keep the pseudo-label images aligned with the original images, don't we?

Thank you for any help.

@roman-bachmann
Member

Hi @Chianghui-Wong!

If you need to save more than 255 classes, you could consider saving them as 16-bit PNGs or as a NumPy array. I would recommend the PNG option, since semantic segmentation images are often highly compressible. To do that, simply convert a uint16 NumPy array to a PIL.Image and save it as a PNG:

import numpy as np
from PIL import Image

Image.fromarray(semseg.astype(np.uint16)).save('semseg.png')

Regarding saving depth images: DPT needs all images to have side lengths that are a multiple of 32 pixels. We therefore resize the image height and width to the closest multiples of 32, process them with the DPT and then resize them back to their original resolution. We also limit the minimum and maximum side lengths to be between 32 and 768 for the DPT input images. During MultiMAE pre-training, we select the same random crops from the RGB, depth and semseg images and resize each to 224x224.
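The resize logic above could be sketched as follows (a minimal illustration; the function names, rounding choice, and the `model` callable are assumptions, not the repository's actual implementation):

```python
from PIL import Image

def dpt_size(side, minimum=32, maximum=768):
    # Round a side length to the nearest multiple of 32,
    # then clamp it to the [32, 768] range DPT expects.
    side = int(round(side / 32)) * 32
    return max(minimum, min(maximum, side))

def predict_depth_sketch(img, model):
    # Resize to DPT-compatible dimensions, run the model,
    # then resize the prediction back to the original resolution.
    w, h = img.size
    resized = img.resize((dpt_size(w), dpt_size(h)), Image.BILINEAR)
    depth = model(resized)  # hypothetical: returns a PIL image
    return depth.resize((w, h), Image.BILINEAR)
```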

Best, Roman

@Jianghui-Wang
Author


I’ve been stuck for so long! Your advice finally opened the door I needed to move forward.
Thank you for your time and guidance.❤️
