Thank you very much for the code, which is well written and easy to launch for training. I could train the generative-model version; however, I have trouble understanding how to run inference on an external dataset from the saved model. Could you provide a script to run testing?
For instance, for reconstruction, do you always use the last expert? Say, at the end of Split-MNIST, what if I want to retest on 0/1?
The code does not save any checkpoints during training; you would need to implement checkpoint saving and loading yourself.
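Since the repository has no built-in checkpointing, here is a minimal sketch of a save/load pair. In a PyTorch project like this one you would normally use `torch.save` / `torch.load` with the model's `state_dict()`; the pattern is the same, and `pickle` is used below only to keep the sketch dependency-free. The names in the usage comment (`model.experts`, `opt`, `ndpm.ckpt`) are hypothetical, not from the repo.

```python
import os
import pickle
import tempfile

def save_checkpoint(state, path):
    """Write training state atomically: dump to a temp file, then rename,
    so a crash mid-write never corrupts an existing checkpoint."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    """Read back the state written by save_checkpoint."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical usage with torch.save instead of pickle: in the training
# loop you would store the state dict of every expert plus the optimizer,
#   save_checkpoint({'experts': [e.state_dict() for e in model.experts],
#                    'optimizer': opt.state_dict()}, 'ndpm.ckpt')
ckpt_path = os.path.join(tempfile.gettempdir(), "demo.ckpt")
save_checkpoint({"epoch": 3}, ckpt_path)
restored = load_checkpoint(ckpt_path)
```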
The evaluation of bits-per-dimension (BPD) is implemented in data.py. The evaluation code eventually calls the forward method of NDPM, which is designed to output the log-likelihood of the input data, log p(x). It combines the output of each expert, log p(x | z=k), with the prior, log p(z=k), to compute the marginal log-likelihood log p(x) = log Σ_k p(z=k) p(x | z=k). So it does not simply choose one expert; rather, the output of the best expert dominates the combination.
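The combination described above can be sketched as a log-sum-exp over experts. This is not the repo's code, just a minimal illustration assuming you already have per-expert log-likelihoods log p(x | z=k) and log prior weights log p(z=k); the actual NDPM forward would compute these with PyTorch tensors.

```python
import math

def log_sum_exp(values):
    """Numerically stable log(sum(exp(v) for v in values))."""
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def mixture_log_likelihood(expert_log_liks, log_prior):
    """Marginal log p(x) = logsumexp_k [ log p(z=k) + log p(x | z=k) ].
    Both arguments are lists with one entry per expert."""
    return log_sum_exp([lp + ll for lp, ll in zip(log_prior, expert_log_liks)])

# Hypothetical numbers: two experts under a uniform prior. The best
# expert (-10.0) dominates the combined value, as noted above.
ll = mixture_log_likelihood([-10.0, -500.0], [math.log(0.5), math.log(0.5)])
```

With these numbers `ll` is essentially -10.0 + log(0.5): the weaker expert's contribution is negligible, which is why the mixture behaves almost like picking the best expert without ever hard-selecting one.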