How to draw flatness curve in Figure 3? #11
Hi, thanks for your interest in our study. We first sample a unit direction vector, then compute the loss gap by perturbing the model parameters along that direction with radius gamma. Simple PyTorch-style pseudocode:

```python
n_params = num_parameters(model)
direction_vector = torch.randn(n_params)
unit_direction_vector = direction_vector / torch.norm(direction_vector)
for gamma in gamma_list:
    noised_model = get_noised_model(model, unit_direction_vector * gamma)
    loss_gap = evaluate(noised_model) - evaluate(model)
```
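To make the pseudocode above concrete, here is a minimal self-contained sketch. The helper names (`evaluate`, `get_noised_model`), the toy model, the fixed batch, and the gamma grid are all illustrative assumptions, not code from this repo:

```python
import copy
import torch
import torch.nn as nn

def evaluate(model, inputs, targets):
    # Stand-in evaluation: mean squared error on a fixed batch.
    # (Assumption: the real repo evaluates task loss on a test set.)
    with torch.no_grad():
        return nn.functional.mse_loss(model(inputs), targets).item()

def get_noised_model(model, flat_perturbation):
    # Return a copy of `model` with the flattened perturbation added
    # back to its parameters, slice by slice.
    noised = copy.deepcopy(model)
    offset = 0
    for p in noised.parameters():
        n = p.numel()
        p.data.add_(flat_perturbation[offset:offset + n].view_as(p))
        offset += n
    return noised

torch.manual_seed(0)
model = nn.Linear(8, 1)                       # toy model for illustration
inputs, targets = torch.randn(32, 8), torch.randn(32, 1)

n_params = sum(p.numel() for p in model.parameters())
direction_vector = torch.randn(n_params)
unit_direction_vector = direction_vector / torch.norm(direction_vector)

base_loss = evaluate(model, inputs, targets)
for gamma in [0.1, 0.5, 1.0]:                 # example radius grid
    noised_model = get_noised_model(model, unit_direction_vector * gamma)
    loss_gap = evaluate(noised_model, inputs, targets) - base_loss
    print(f"gamma={gamma}: loss gap {loss_gap:.4f}")
```

Since a single random direction is noisy, the Monte-Carlo estimate mentioned in this thread would repeat this over many sampled directions and average the loss gap at each gamma.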
Got it! Very clear! Thanks a lot!
The loss gap I get seems to be wrong. Did you solve this problem?
Hi,
Thank you so much for providing this repo, the work is awesome!
How can we reproduce the loss-gap curve in Figure 3 of the paper? How should gamma be applied to the model parameters, and what distance metric is used on the x-axis? I flattened the model parameter dict into one vector and added a noise vector with norm 1.0, which gives a loss gap of about 0.2 on the p-domain test, so I must have made a mistake in the Monte-Carlo approximation sampling.
Thanks a lot!