
Fix KL computation #2


Merged
merged 1 commit into IntelLabs:main on Jul 14, 2021
Conversation

@y0ast (Contributor) commented Jul 14, 2021

Hello,

I think there might be a few problems in your model definitions.

In particular:

  • in resnet_flipout.py and resnet_variational.py you only sum the KL of the last block inside each self.layerN, because the per-block KL is overwritten instead of accumulated in the loop
  • in resnet_flipout_large.py and resnet_variational_large.py you check for is None while you probably want is not None, or actually no check at all, since it can't be None in any reasonable setting. The str(layer) check is also odd, since str(layer) is the repr of a BasicBlock or BottleNeck object (you're looping over an nn.Sequential of blocks); in this code that string check is very likely superfluous (I didn't test this, but I did include the change in this PR as an example; see the sketch after this list)
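
To make the two points concrete, here is a minimal, self-contained sketch of the loop pattern being described. ToyBlock and TinyVariationalNet are illustrative names, not code from this repo, and the per-block KL values are placeholders; only the accumulation pattern is the point.

```python
import torch
import torch.nn as nn


class ToyBlock(nn.Module):
    """Minimal stand-in for a Bayesian ResNet block: forward returns (out, kl)."""

    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(4, 4)

    def forward(self, x):
        kl = torch.tensor(0.1)  # placeholder KL term for this block
        return self.lin(x), kl


class TinyVariationalNet(nn.Module):
    """Toy model mirroring the self.layerN loop in the ResNet variants."""

    def __init__(self, blocks):
        super().__init__()
        self.layer1 = nn.Sequential(*blocks)

    def forward(self, x):
        kl_sum = 0.0
        out = x
        for block in self.layer1:
            out, kl = block(out)
            kl_sum += kl  # accumulate; `kl_sum = kl` would keep only the last block's KL
        # no `is None` or str(block) checks: entries of an nn.Sequential cannot be
        # None, and every block here returns a KL term
        return out, kl_sum


if __name__ == "__main__":
    net = TinyVariationalNet([ToyBlock(), ToyBlock(), ToyBlock()])
    out, kl = net(torch.randn(2, 4))
    print(float(kl))  # ~0.3: KL summed over all three blocks, not just the last
```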

I hope you can confirm and perhaps fix these issues, which will help me (and maybe others) in building on your nice codebase :)

@ranganathkrishnan (Contributor) left a comment

Thank you for the fixes! The kl_sum addition bug was already on the list to be fixed. The str(layer) check was there because I was using the BatchNorm2d layer directly from torch.nn in the sequential block; now that the wrapper from bayesian_torch.layers.batchnorm is used, that check is no longer necessary.
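
For context, a minimal sketch of the wrapper idea being referred to. The class name BatchNorm2dWrapper is illustrative, not the actual class in bayesian_torch.layers.batchnorm; the point is only that a wrapped batch norm returns (out, kl) like the Bayesian layers, so the loop can call `out, kl = layer(out)` uniformly without a str(layer) check.

```python
import torch
import torch.nn as nn


class BatchNorm2dWrapper(nn.Module):
    """Illustrative wrapper: same (out, kl) return signature as the Bayesian layers."""

    def __init__(self, num_features):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features)

    def forward(self, x):
        # batch norm is deterministic here, so its KL contribution is zero
        return self.bn(x), torch.zeros(1)


if __name__ == "__main__":
    out, kl = BatchNorm2dWrapper(3)(torch.randn(2, 3, 8, 8))
    print(out.shape, float(kl))
```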

@ranganathkrishnan (Contributor) commented Jul 14, 2021

Hello @y0ast ,

Thank you for your contribution to the repo. These bugs were in the list to be fixed along with other fixes. Thanks for the PR, merging the commit.

@ranganathkrishnan merged commit 91650f6 into IntelLabs:main on Jul 14, 2021
@y0ast (Contributor, Author) commented Jul 15, 2021

Thanks for merging! Are you able to share the list of other fixes you are planning? (So I can avoid those bugs in my own work building on your codebase).

@ranganathkrishnan (Contributor) commented
I have pushed the commits (09b8d0a and 6a370d9) to the repo just before merging your PR. If you check out the head of the main branch, these fixes should already be in.
