Inference #3

Closed
mayalenE opened this issue Mar 24, 2020 · 1 comment

Comments

@mayalenE
Thank you very much for the code, which is well written and easy to launch for training. I was able to train the generative model version; however, I have trouble understanding how to run inference on an external dataset from the saved model. Could you provide a script for testing?
For instance, for reconstruction, do you always use the last expert, say at the end of Split-MNIST, if I want to re-test on 0/1?

@soochan-lee
Owner

Thank you for your interest in our work!

The code does not save any checkpoints during training, so you would need to implement checkpoint saving and loading yourself.
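
In case it helps, here is a minimal PyTorch-style sketch of what such checkpointing could look like. The `model` variable and file name are placeholders, not part of this repository, and note that the restored model must already have the same set of experts as when it was saved, since NDPM grows experts during training:

```python
import torch

# Illustrative only: save the model's parameters at the end of training ...
torch.save(model.state_dict(), 'ndpm_checkpoint.pt')

# ... and restore them later for inference on an external dataset.
# The model must be constructed with a matching architecture first.
model.load_state_dict(torch.load('ndpm_checkpoint.pt'))
model.eval()
```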

The evaluation of bits per dimension (BPD) is implemented in data.py. The evaluation code eventually calls the forward method of NDPM, which is designed to output the log-likelihood of the input data, log p(x). It combines the output of each expert, log p(x | z=k), with the prior, log p(z), to compute the final log-likelihood log p(x), so it does not simply choose one expert. Note that the output of the best expert becomes dominant in the combination.
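
For intuition, the combination amounts to a log-sum-exp over experts, roughly like the sketch below. This is only an illustration of the mixture computation, not the actual NDPM.forward code, and the tensor names are made up:

```python
import torch

def mixture_log_likelihood(expert_log_liks, log_prior):
    """Illustrative mixture of experts.

    expert_log_liks: [batch, K] tensor of log p(x | z=k) from each expert.
    log_prior:       [K] tensor of log p(z=k).
    Returns a [batch] tensor of log p(x).
    """
    # log p(x) = logsumexp_k ( log p(z=k) + log p(x | z=k) )
    return torch.logsumexp(expert_log_liks + log_prior, dim=1)
```

Because of the log-sum-exp, the term with the largest log p(z=k) + log p(x | z=k) dominates the result, which is why the best-matching expert effectively drives the final log-likelihood.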

I hope this answers your question.
