Hi! I came across your paper on arXiv, and it's nice to see the code being open-sourced. I'm also interested in autoencoders and am applying them to my research on protein function prediction. Nice work and good results! I just have a few questions about the K-competitive layer:
1. I'd like to clarify how the positive and negative neurons are chosen. If I understood correctly, they are assigned as a result of the feedforward step that computes z. Is this correct?
2. If we set k greater than 2 and obtain multiple positive winners, how do we know which winner takes which positive loser? Or do all of them "soak up" the energy?
3. Is there any previous research on the effects of reallocating energy in a neural network? Was this inspired by RBMs? What is the benefit of redistributing the energy instead of letting it be (a bit similar to the winner-take-all AE)?
That's all and thank you so much! 😄
Thank you for your interest! As for your questions:

1. Yes.
2. Yes. All positive winners soak up the (amplified) energy of the positive losers; there is no one-to-one pairing between individual winners and losers (see the sketch below).
3. That's a great question! Actually, I haven't seen any work on the effects of reallocating energy in a NN. Philosophically, the winner-take-all strategy can make the competition more pronounced. Mathematically, it can effectively change the back-propagation path and therefore change the way we update the weights. In general, I cannot say whether redistributing the energy is better than letting it be or vice versa, but in our case I give an intuitive proof in the paper (Sec. 3.2) that explains the advantages of KATE over the k-sparse AE (which simply sets the losers to zero).
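To make the "soak up" behavior in (2) concrete, here is a minimal NumPy sketch of the mechanism as I read it from the answer above: each positive winner absorbs the amplified total energy of all positive losers (and symmetrically on the negative side), and the losers are then zeroed out. This is only an illustrative sketch, not the repository's actual Keras/TensorFlow layer; the function name `k_competitive` and the `alpha` value are placeholders.

```python
import numpy as np

def k_competitive(z, k, alpha=6.26):
    """Hypothetical sketch of a k-competitive step on one activation vector z.

    - The floor(k/2) largest positive activations are positive winners;
      the remaining k - floor(k/2) most-negative activations are negative winners.
    - Every positive winner absorbs alpha * (sum of positive losers); every
      negative winner absorbs alpha * (sum of negative losers).
    - All losers are set to zero.
    """
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)

    pos = np.where(z > 0)[0]
    neg = np.where(z < 0)[0]

    k_pos = k // 2
    k_neg = k - k_pos

    # Positive side: pick winners, reallocate the losers' energy.
    if pos.size:
        order = pos[np.argsort(z[pos])[::-1]]      # largest first
        winners, losers = order[:k_pos], order[k_pos:]
        e_pos = z[losers].sum()                    # total positive loser energy
        out[winners] = z[winners] + alpha * e_pos  # every winner soaks it up

    # Negative side: mirror image using the most-negative activations.
    if neg.size:
        order = neg[np.argsort(z[neg])]            # most negative first
        winners, losers = order[:k_neg], order[k_neg:]
        e_neg = z[losers].sum()                    # total negative loser energy
        out[winners] = z[winners] + alpha * e_neg

    return out

# Example: k = 6 keeps 3 positive and 3 negative winners.
z = np.array([0.9, 0.1, 0.4, -0.3, 0.05, -0.8, 0.2, -0.02])
print(k_competitive(z, k=6))
```

Because the losers' energy is added to the winners rather than simply discarded, the losers still influence the winners' outputs, which is what changes the back-propagation path relative to a k-sparse AE that only zeroes them.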