
RBM score. There might be an inconsistency in parameter expectations #3049

Closed
vlovsky opened this issue Mar 15, 2017 · 5 comments

Comments

@vlovsky (Author) commented Mar 15, 2017

Theoretically the KLD should converge, since we are doing Gibbs sampling. That is not really happening with the standard KL_DIVERGENCE loss function. I ran a few more tests with the standard and custom loss functions.

In the first example you can observe that the KLD didn't fully converge after 1000 iterations, even though input = output and the input approximates the probabilities. You can also see that the KLD fluctuates significantly (which theoretically should not happen): https://gist.github.com/anonymous/914cf199af3d6e55ee1215bea3f0682f

In the next test I used a custom loss function where I removed the activation and used the probabilities for z. It is very clear that the score steadily converges (as it should for KLD) as the probabilities get closer to the empirical values:
https://gist.github.com/anonymous/f21ad3fd1bb630c76daac7a2a293a899
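
For illustration, here is a minimal sketch in plain Java (not the DL4J loss implementation; the helper name is made up) of the quantity being compared: the KL divergence between the empirical probabilities and the reconstructed probabilities, which only behaves as a steadily shrinking score when both arguments are valid probability vectors.

```java
import java.util.Arrays;

public class KlSketch {
    // Hypothetical helper, not part of DL4J: D_KL(p || q) = sum_i p_i * log(p_i / q_i).
    // Both p and q must be probability vectors (non-negative, summing to 1).
    static double klDivergence(double[] p, double[] q) {
        double kl = 0.0;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > 0) {                  // the 0 * log(0 / q_i) term is taken as 0
                kl += p[i] * Math.log(p[i] / q[i]);
            }
        }
        return kl;
    }

    public static void main(String[] args) {
        double[] empirical      = {0.70, 0.20, 0.10};  // probabilities estimated from the input
        double[] reconstruction = {0.65, 0.22, 0.13};  // probabilities produced by the model
        // Small and non-negative; it goes to 0 as the reconstruction approaches the data.
        System.out.println(klDivergence(empirical, reconstruction));
        System.out.println(klDivergence(empirical, Arrays.copyOf(empirical, 3))); // exactly 0
    }
}
```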

@eraly eraly self-assigned this Mar 15, 2017

@eraly (Contributor) commented Mar 16, 2017

RBMs need to be scrubbed, and it is quite likely that they will need an overhaul. They have not been very high on the priority list so far. I am looking at them and will continue to do so. To answer your questions:

Not all loss functions expect probabilities (MSE, MAE, etc.). I am unsure why the KLD loss function is even set on the RBMs, since they are supposed to pretrain with contrastive divergence.
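
For anyone reading along, here is a rough CD-1 sketch in plain Java for a tiny binary RBM with sigmoid units (biases omitted). It is only meant to illustrate the contrastive-divergence update referred to above, not DL4J's implementation.

```java
import java.util.Random;

public class Cd1Sketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int nVisible = 4, nHidden = 3;
        double lr = 0.1;
        double[][] w = new double[nVisible][nHidden];      // weights (biases omitted)
        for (int i = 0; i < nVisible; i++)
            for (int j = 0; j < nHidden; j++)
                w[i][j] = 0.01 * rng.nextGaussian();       // small random init
        double[] v0 = {1, 0, 1, 1};                        // one binary training example

        // Positive phase: hidden probabilities given the data
        double[] h0 = new double[nHidden];
        for (int j = 0; j < nHidden; j++) {
            double act = 0;
            for (int i = 0; i < nVisible; i++) act += v0[i] * w[i][j];
            h0[j] = sigmoid(act);
        }
        // Sample hidden states, then reconstruct the visible layer (one Gibbs step)
        double[] hSample = new double[nHidden];
        for (int j = 0; j < nHidden; j++) hSample[j] = rng.nextDouble() < h0[j] ? 1 : 0;
        double[] v1 = new double[nVisible];
        for (int i = 0; i < nVisible; i++) {
            double act = 0;
            for (int j = 0; j < nHidden; j++) act += hSample[j] * w[i][j];
            v1[i] = sigmoid(act);
        }
        // Negative phase: hidden probabilities given the reconstruction
        double[] h1 = new double[nHidden];
        for (int j = 0; j < nHidden; j++) {
            double act = 0;
            for (int i = 0; i < nVisible; i++) act += v1[i] * w[i][j];
            h1[j] = sigmoid(act);
        }
        // CD-1 weight update: <v0 h0> (data) minus <v1 h1> (reconstruction)
        for (int i = 0; i < nVisible; i++)
            for (int j = 0; j < nHidden; j++)
                w[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j]);
    }
}
```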

@vlovsky (Author) commented Mar 22, 2017

The best loss function I have found so far is the squared loss; since the RBM has both input and output, it just needs something that computes the error between them without doing any conversion on the parameters.
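
A sketch of the squared reconstruction error described here, assuming the input and the reconstruction are compared element by element with no transformation applied to either side (plain Java, not DL4J code):

```java
public class SquaredLossSketch {
    // Squared reconstruction error: compare the RBM input to its reconstruction
    // directly, with no activation or other conversion applied.
    static double squaredError(double[] input, double[] reconstruction) {
        double sum = 0.0;
        for (int i = 0; i < input.length; i++) {
            double d = input[i] - reconstruction[i];
            sum += d * d;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] input          = {1, 0, 1, 1};
        double[] reconstruction = {0.9, 0.1, 0.8, 0.95};
        System.out.println(squaredError(input, reconstruction)); // small; 0 when they match
    }
}
```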

@eraly eraly assigned agibsonccc and unassigned eraly Mar 22, 2017

@eraly (Contributor) commented Aug 29, 2017

We are dropping support for RBMs in favour of newer and more effective methods. Please take a look at variational autoencoders:
https://github.com/deeplearning4j/dl4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/unsupervised/variational
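
For anyone landing here later, the linked example configures a single VariationalAutoencoder layer roughly as in the sketch below. The builder method names are recalled from the example at the time and may differ between DL4J versions, so check the current dl4j-examples for the exact API.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.variational.BernoulliReconstructionDistribution;
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;

public class VaeConfigSketch {
    public static void main(String[] args) {
        // Sketch only: method names may vary across DL4J versions.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(12345)
            .weightInit(WeightInit.XAVIER)
            .list()
            .layer(0, new VariationalAutoencoder.Builder()
                .nIn(784).nOut(32)                           // e.g. MNIST input, 32-dim latent z
                .encoderLayerSizes(256, 256)
                .decoderLayerSizes(256, 256)
                .activation(Activation.LEAKYRELU)
                .pzxActivationFunction(Activation.IDENTITY)  // activation for the p(z|x) parameters
                .reconstructionDistribution(new BernoulliReconstructionDistribution(
                        Activation.SIGMOID.getActivationFunction()))
                .build())
            .pretrain(true).backprop(false)                  // unsupervised pretraining, as with RBMs
            .build();
        System.out.println(conf.toJson());
    }
}
```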

@eraly eraly closed this Aug 29, 2017

@lock (bot) commented Sep 25, 2018

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Sep 25, 2018
