Fix mup for the layers with AttentionLayerMup #494
Conversation
…as `in_dim` in gps layer
Codecov Report

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #494      +/-   ##
==========================================
+ Coverage   71.35%   71.52%   +0.17%
==========================================
  Files          94       93       -1
  Lines        8718     8707      -11
==========================================
+ Hits         6221     6228       +7
+ Misses       2497     2479      -18
assert (
    x[k] % num_heads == 0
), f"embed_dim={x[k]} is not divisible by num_heads={num_heads}"
I don't think it's needed since there's another assertion in `AttentionLayerMup`. @maciej-sypetkowski, can you check whether it still works if we remove that part and scale by a factor that's not divisible by `num_heads`?
Yes, it's not needed
LGTM. Tested and it works
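For context, here is a minimal sketch of the check under discussion (the helper name `scale_embed_dim` is hypothetical, not Graphium's actual code): when a dimension is scaled by a mup factor, the result must remain divisible by `num_heads`, and since `AttentionLayerMup` already asserts this (as noted in the thread above), the duplicate assertion at the scaling site can be dropped.

```python
# Hypothetical helper, for illustration only -- not the repository's actual code.
def scale_embed_dim(embed_dim: int, num_heads: int, scale_factor: float) -> int:
    """Scale an embedding dimension by a mup factor and verify it stays
    divisible by num_heads (mirroring the assertion removed in this PR)."""
    scaled = round(embed_dim * scale_factor)
    # An equivalent check already lives in AttentionLayerMup, so keeping this
    # assertion at the call site would be redundant.
    assert scaled % num_heads == 0, (
        f"embed_dim={scaled} is not divisible by num_heads={num_heads}"
    )
    return scaled


# scale_embed_dim(64, 8, 2.0) -> 128 (valid)
# scale_embed_dim(64, 8, 1.1) -> would fail: round(70.4) = 70 is not divisible by 8
```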
env.yml (outdated)
@@ -17,6 +17,7 @@ dependencies:
   - pandas >=1.0
   - scikit-learn
   - fastparquet
+  - networkx
Why is it needed now?
Removed the duplicate check of `embed_dim` / `num_heads`, as discussed in PR #494
Changelogs
- Added `embed_dim` to the list of keys to look for when doing the mup kwargs.

@maciej-sypetkowski I think this should fix your issue, although I can't verify it because on my end, when I use the config with `architecture.mup_scale_factor: 2`, it works. Basically, I can't reproduce the failure because I don't know how you do your scaling for it to fail, but at least the `attn_layer` keys in `mup_base_params.yaml` are no longer `null`.

IMPORTANT: when this PR is merged, it will affect the reproducibility of your models if they use `AttentionLayerMup` (such as `GPSLayerPyg`), since mup will now affect the learning rate of these layers, whereas they were previously ignored.
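To make the changelog entry concrete, here is a rough, hypothetical sketch of the mechanism (the function name `make_mup_base_kwargs`, the `MUP_SCALED_KEYS` tuple, and the `divide_factor` convention are illustrative assumptions, not a copy of Graphium's implementation): mup derives a narrower "base" model by dividing dimension-like kwargs by a factor, and if `embed_dim` is missing from the scanned keys, the attention-layer entries end up as `null` in `mup_base_params.yaml`.

```python
# Illustrative sketch only: names and the divide_factor convention are assumptions,
# not the repository's actual code.
from typing import Any, Dict

# Before this PR, "embed_dim" was not among the scanned keys, which is why the
# attn_layer entries in mup_base_params.yaml were left as null.
MUP_SCALED_KEYS = ("in_dim", "out_dim", "embed_dim")


def make_mup_base_kwargs(kwargs: Dict[str, Any], divide_factor: float = 2.0) -> Dict[str, Any]:
    """Return a copy of ``kwargs`` with dimension-like entries divided by
    ``divide_factor``, producing the narrower base model that mup compares against."""
    base = dict(kwargs)
    for key in MUP_SCALED_KEYS:
        if base.get(key) is not None:
            base[key] = round(base[key] / divide_factor)
    return base


# Example: an attention-layer config shrunk for the base model.
print(make_mup_base_kwargs({"embed_dim": 128, "num_heads": 8, "dropout": 0.1}))
# -> {'embed_dim': 64, 'num_heads': 8, 'dropout': 0.1}
```

The reproducibility warning follows from how mup generally works: once the attention layers have proper base shapes, mup's width-aware optimizers (e.g. MuAdam) rescale their effective learning rates by the ratio of actual to base width, so runs that previously treated these layers as ordinary parameters will no longer match exactly.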