
Including sparse convolutions inside traditional model #26

Closed
fabvio opened this issue Dec 19, 2017 · 6 comments
Comments

@fabvio commented Dec 19, 2017

Hi, first of all, thank you so much for sharing this amazing library; I'm planning to use it in my MSc thesis.
I have a few questions. If I understood correctly, you advise creating a network and forwarding it an InputBatch created manually. This works well for a network containing only sparse convolutions. Unfortunately, my network consists mostly of dense convolutions, and I would like to add some sparse convolutional layers at the end of it. Is it possible to achieve this?
I saw that the PyTorch version of the library has a DenseToSparse layer that would probably solve this issue, but you advise against using it. What is the reason? Could I port it to Lua, or do you plan to implement it in Lua?
Thanks in advance for your time, and I'm sorry if I misunderstood something.

@btgraham (Contributor) commented

Using sparse convolutions only makes sense if the input is spatially sparse. What is your input?

The output of dense convolutions will not be sparse, so you should not use dense convolutions followed by sparse convolutions.

@fabvio (Author) commented Dec 19, 2017

Hi, thanks for your reply.
My input is actually sparse. In my network, after some convolutional layers, I apply a binary mask that keeps only the pixels with a low probability of being classified correctly, and I propagate only that set of pixels to the deeper layers. I would like to apply the sparse convolutions in those layers.
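The masking step described above can be sketched in a few lines. This is a minimal NumPy illustration, not SparseConvNet code; the shapes, the 0.5 threshold, and all variable names are hypothetical:

```python
import numpy as np

# Hypothetical per-pixel class probabilities from the dense part of the
# network: shape (H, W, num_classes).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=(8, 8))  # shape (8, 8, 5)

confidence = probs.max(axis=-1)   # max class probability per pixel
mask = confidence < 0.5           # keep the "hard" (low-confidence) pixels

# Active sites to feed into the sparse layers: (row, col) coordinates.
active_sites = np.argwhere(mask)
print(active_sites.shape[0], "active pixels out of", mask.size)
```

Only the pixels selected by `mask` would then be packed into the sparse representation; everything else is treated as background by the sparse layers.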

@btgraham (Contributor) commented

That might work. But the learning signal in the dense layers will be limited to the sites that are not filtered out, which could be problematic.
Can you not use sparse filters end-to-end?

@fabvio (Author) commented Dec 19, 2017

I think this should not be an issue; correct me if I'm wrong. Let me give you some more information about my network so you can better understand what I'm trying to do.
After the binary mask is applied to the input, the filtered pixels are not lost: they are connected directly to a layer where I overlap the results of the sparse convolutions (after applying a SparseToDense module) with those of the dense convolutional layers. This way, the last part of my network (the sparse one) can learn difficult features efficiently, while the first part (the dense one) learns both difficult and easy features. By joining the results of the two classifications, I should be able to backpropagate a learning signal for both the easy and the difficult sites. What do you think about it?
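The overlap of the two branches can be sketched as follows. This is a conceptual NumPy stand-in, not the library's API: the zero-filled `sparse_dense` array plays the role of the SparseToDense output, and all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C = 8, 8, 4

dense_out = rng.normal(size=(H, W, C))   # output of the dense branch
mask = rng.random((H, W)) < 0.3          # sites routed to the sparse branch

# Stand-in for the sparse branch: in the real model this would be sparse
# convolutions followed by SparseToDense; the result is zero everywhere
# except at the active sites, which is exactly what SparseToDense produces.
sparse_dense = np.zeros((H, W, C))
sparse_dense[mask] = rng.normal(size=(int(mask.sum()), C))

# Overlap the branches: at the masked (hard) sites the sparse refinement
# is added on top of the dense features; elsewhere the dense output passes
# through unchanged.
merged = dense_out + sparse_dense
```

Because `merged` depends on both branches, gradients with respect to it reach the dense branch everywhere and the sparse branch only at the active sites, which is the backpropagation behaviour described above.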

@fabvio (Author) commented Dec 19, 2017

Could I simply implement a layer that maps my input to an InputBatch (in the updateOutput function), as you do in your example, and maps the sparse gradOutput to a dense gradInput (in updateGradInput)?
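The adapter layer proposed here can be sketched conceptually. This is a Python/NumPy analogue of the Torch updateOutput/updateGradInput pair, not the library's InputBatch API; the class name, the threshold, and the (sites, features) representation are all hypothetical:

```python
import numpy as np

class DenseToSparseAdapter:
    """Conceptual sketch of the proposed adapter: the forward pass packs
    active dense pixels into (site, feature) pairs, and the backward pass
    scatters the sparse gradients into a dense gradInput that is zero at
    the inactive sites."""

    def __init__(self, threshold=1e-6):
        self.threshold = threshold

    def update_output(self, dense):  # analogue of updateOutput
        # A site is active if any channel exceeds a small threshold.
        self.mask = np.abs(dense).max(axis=-1) > self.threshold
        self.shape = dense.shape
        sites = np.argwhere(self.mask)     # coordinates of active pixels
        features = dense[self.mask]        # their feature vectors
        return sites, features             # stand-in for an "InputBatch"

    def update_grad_input(self, grad_features):  # analogue of updateGradInput
        grad_input = np.zeros(self.shape)
        grad_input[self.mask] = grad_features    # inactive sites get zero grad
        return grad_input

# Tiny usage example: a 4x4 single-channel input with one active pixel.
dense = np.zeros((4, 4, 1))
dense[1, 2, 0] = 3.0
adapter = DenseToSparseAdapter()
sites, feats = adapter.update_output(dense)
grad = adapter.update_grad_input(np.ones_like(feats))
```

The key design point is that the backward pass writes gradients only at the sites recorded during the forward pass, so the dense layers upstream receive a learning signal exclusively through the unmasked pixels, matching the limitation noted earlier in the thread.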

@btgraham (Contributor) commented

Please contact me at btgraham@gmail.com so we can discuss this in more detail.
Regards
Ben
