
Example of the 9-dim vector version of Semantic Segmentation #7

Closed
myhussien opened this issue Apr 28, 2017 · 6 comments

Comments

@myhussien

myhussien commented Apr 28, 2017

Hi Charles,

In the paper you mentioned that each point in the semantic segmentation task was represented by a 9-dim vector. I couldn't find the corresponding code/network that expects such input. How do you deal with the other 6 elements in the point representation? Does the first set of filters become [1,9]? Do you process the RGB data separately? Can you upload an example with the Stanford 3D data?

Best,

@myhussien
Author

myhussien commented May 2, 2017

Hi Charles,

I hope you have time to answer my previous question. Also, why do you define the weights and the biases in the transformation network explicitly, instead of having a fully connected layer with 9 outputs and initializing the weights and biases internally?

Best,

@charlesq34
Owner

Hi, myhussien

The first layer of the MLP will have a 9-channel input; using a [1,9] filter will do that.
You can add a transformer network as well (its input is only the XYZ channels), but the performance is similar without it.

As to the 9 channels, they include:
- Original XYZ: we keep Z as it is and shift XY with respect to the center of the block, so that the center has X=0, Y=0.
- Normalized X'Y'Z': normalized with respect to the entire room, so that the room corners are at (0,0,0), (0,1,0), etc.
- RGB values: converted to floats in the range 0~1.
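
As a rough sketch of the above (not code from this repo; the array names, room extents, and the channel ordering are assumptions for illustration), the 9-dim feature for one block could be assembled like this:

```python
import numpy as np

# Fake stand-in data for one block cut from a room (all names and extents are illustrative).
N = 4096
xyz = np.random.uniform([0.0, 2.0, 0.0], [1.0, 3.0, 2.5], size=(N, 3)).astype(np.float32)
rgb = np.random.randint(0, 256, size=(N, 3)).astype(np.float32)
room_min = np.array([0.0, 0.0, 0.0], dtype=np.float32)
room_max = np.array([10.0, 8.0, 3.0], dtype=np.float32)

block_center_xy = (xyz[:, :2].min(axis=0) + xyz[:, :2].max(axis=0)) / 2.0

feat = np.zeros((N, 9), dtype=np.float32)
feat[:, 0:2] = xyz[:, :2] - block_center_xy               # XY shifted so the block center is (0, 0)
feat[:, 2]   = xyz[:, 2]                                  # Z kept as-is
feat[:, 3:6] = (xyz - room_min) / (room_max - room_min)   # X'Y'Z' normalized over the whole room
feat[:, 6:9] = rgb / 255.0                                # RGB converted to floats in [0, 1]
```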

The weights in the transformation network are defined that way because we want to initialize the transformation to the identity matrix. You can also use a fully connected layer, but you would need a special bias initializer.
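
A minimal sketch of that idea in plain TensorFlow (not the repo's own layer wrappers; the feature size and variable names are assumptions): zero weights plus a flattened-identity bias make the predicted transform exactly the identity at initialization.

```python
import numpy as np
import tensorflow as tf

K = 3                                  # predict a K x K transform for the XYZ channels
net = tf.random.normal([32, 256])      # stand-in for the per-cloud feature feeding the last layer

weights = tf.Variable(tf.zeros([256, K * K]), name='transform_W')      # zero weights
biases  = tf.Variable(np.eye(K, dtype=np.float32).flatten(), name='transform_b')  # identity bias

transform = tf.matmul(net, weights) + biases       # (B, K*K); exactly the identity at init
transform = tf.reshape(transform, [-1, K, K])      # (B, K, K)
```

An equivalent fully connected layer would need its kernel initialized to zeros and the flattened identity passed as the bias initializer, which is the "special bias initializer" mentioned above.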

Hope it helps!
Cheers,
Charles

@myhussien
Author

myhussien commented May 2, 2017

Can you elaborate on what you mean by 9 channels? Did you mean 9 columns, so the data shape would be [B x N x 9 x 1]? Or is it [B x N x 1 x 9], which wouldn't work with a [1,9] filter?

@charlesq34
Owner

charlesq34 commented May 2, 2017

hi myhussien,

Either way is fine, I think. For (B, N, 9, 1) you would use a (1, 9) kernel; for (B, N, 1, 9) you can use a (1, 1) kernel.
You can actually also use conv1d or fully_connected. The performance difference is small.
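
For concreteness, a small sketch in plain TensorFlow (illustrative shapes only, not the repo's tf_util wrappers) of the two layouts; both mix the 9 per-point values into 64 features per point:

```python
import tensorflow as tf

B, N = 32, 4096
points = tf.random.normal([B, N, 9])   # stand-in for a batch of 9-dim point features

# Layout (B, N, 9, 1): the 9 values lie along the "width" axis, so a (1, 9) kernel mixes them.
x1 = tf.reshape(points, [B, N, 9, 1])
w1 = tf.Variable(tf.random.normal([1, 9, 1, 64]))
out1 = tf.nn.conv2d(x1, w1, strides=[1, 1, 1, 1], padding='VALID')   # (B, N, 1, 64)

# Layout (B, N, 1, 9): the 9 values are the input channels, so a (1, 1) kernel does the same mixing.
x2 = tf.reshape(points, [B, N, 1, 9])
w2 = tf.Variable(tf.random.normal([1, 1, 9, 64]))
out2 = tf.nn.conv2d(x2, w2, strides=[1, 1, 1, 1], padding='VALID')   # (B, N, 1, 64)
```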

Best,
Charles

@myhussien
Author

myhussien commented May 2, 2017

Thanks, that was helpful!

I just can't get my head around the fact that you are applying a filter over completely unrelated values, i.e. [X,Y,Z,R,G,B,x,y,z], and then collapsing them to a single value. It is very surprising to me that it actually works. Are you planning to upload an example showing the semantic segmentation part with RGB? I look forward to seeing it in action.

Good Work!

@charlesq34
Owner

By feeding all channels, we are leaving the heavy lifting to the neural network :)
It's possible to add more structure or regularization to the model, though.

I will try to organize and clean up some code on semantic segmentation in scenes. Probably not soon, but it's on my todo list.

I'm closing the issue now. Let me know if you have more questions.

@mikacuy mentioned this issue Sep 18, 2017