Make beta tc loss more stable using torch.logsumexp #41
Conversation
Add workaround for model saving with hydra.
Added save_checkpoint to experiment configs. Added tests for checkpointing.
Will also look into using
Codecov Report
Base: 70.04% // Head: 70.04% // No change to project coverage 👍

@@ Coverage Diff @@
##           main     #41   +/-  ##
=======================================
  Coverage   70.04%   70.04%
=======================================
  Files         135      135
  Lines        7538     7538
=======================================
  Hits         5280     5280
  Misses       2258     2258
Sorry for the delay in getting back to you about this PR. Good catch! Thank you so much for this contribution! EDIT: I notice the old contributions are still part of the history; in the future, it might be easier to start the commits from the main branch after synchronizing with upstream changes 😁
The code is from https://github.com/YannDubs/disentangling-vae.
The PyTorch `torch.logsumexp` implementation is numerically more stable than chaining `.exp().sum().log()` by hand.
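To illustrate why, here is a minimal sketch of the max-shift trick that stable log-sum-exp implementations (including `torch.logsumexp`) are based on. The pure-Python `logsumexp` helper and the example values below are illustrative only, not code from this PR:

```python
import math

def logsumexp(xs):
    # Stable log(sum(exp(x))): subtract the max before exponentiating,
    # so the largest term becomes exp(0) = 1 and nothing overflows/underflows.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Typical log-density values in a TC-VAE loss can be very negative.
xs = [-1000.0, -1000.5, -1001.0]

# Naive chaining .exp() -> sum -> .log() underflows: each math.exp(x)
# rounds to 0.0 in float64, so the sum is 0 and log(0) is undefined
# (math raises a ValueError; in torch it would be -inf).

print(logsumexp(xs))  # finite, approximately -999.32
```

The same shift-by-max identity, log Σ exp(xᵢ) = m + log Σ exp(xᵢ − m), is exact for any m, so choosing m = max(xᵢ) changes nothing mathematically while keeping every exponent in a safe range.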