
Fused AdamW_SGD optimizer issues #17

Closed
vealocia opened this issue Jul 6, 2022 · 4 comments

Comments

@vealocia

vealocia commented Jul 6, 2022

Hi, authors! Thanks for your awesome work!
I'm confused about the usage of the fused AdamW_SGD optimizer described in the implementation-details paragraph of Appendix C in the paper.
It says you use AdamW with lr 1e-3 and wd 0.05 for the ViT vision encoder, and SGD with lr 0.02 and wd 1e-4 for the text transformer.
However, in your released configuration, ViT-B/32 is optimized with plain SGD instead of the fused AdamW_SGD. So which optimizer did you actually use in your experiments?
Also, if you did use the fused AdamW_SGD optimizer as described in the paper, why? CLIP uses only AdamW. Is the fused optimizer beneficial for CLIP?
Looking forward to your reply!😁

@zlccccc
Collaborator

zlccccc commented Jul 6, 2022

The open-source DeCLIP model from our paper was optimized with the AdamW_SGD optimizer. The settings here are slightly different from those described in the paper because the test configuration was adapted from another configuration of the same model (as you can see, the training setup is all for the YFCC dataset).

@zlccccc
Collaborator

zlccccc commented Jul 6, 2022

We used the AdamW_SGD optimizer for training in the DeCLIP paper because we found experimentally that training of the language encoder tends to crash (diverge) under the AdamW optimizer when the labels are noisy.
This is not a problem on the YFCC dataset, so the AdamW optimizer also works well there.
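
For anyone landing on this thread, below is a minimal PyTorch sketch of what such a fused AdamW_SGD split can look like. The `ToyCLIP` module, tensor shapes, and loss are placeholders for illustration only, not DeCLIP code; only the optimizer assignment and the hyperparameters (AdamW, lr 1e-3, wd 0.05 for the vision encoder; SGD, lr 0.02, wd 1e-4 for the text transformer) are taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCLIP(nn.Module):
    """Placeholder two-tower model; stands in for a real CLIP/DeCLIP network."""
    def __init__(self):
        super().__init__()
        self.visual = nn.Linear(512, 256)       # stand-in for the ViT vision encoder
        self.transformer = nn.Linear(128, 256)  # stand-in for the text transformer

    def forward(self, images, texts):
        img_emb = F.normalize(self.visual(images), dim=-1)
        txt_emb = F.normalize(self.transformer(texts), dim=-1)
        logits = img_emb @ txt_emb.t()          # simple contrastive-style objective
        labels = torch.arange(logits.size(0))
        return F.cross_entropy(logits, labels)

model = ToyCLIP()

# "Fused" AdamW_SGD: AdamW updates the vision tower, SGD updates the text tower
# (hyperparameters as quoted from Appendix C of the paper).
opt_adamw = torch.optim.AdamW(model.visual.parameters(), lr=1e-3, weight_decay=0.05)
opt_sgd = torch.optim.SGD(model.transformer.parameters(), lr=0.02, weight_decay=1e-4)

# One illustrative training step on random data.
images, texts = torch.randn(8, 512), torch.randn(8, 128)
loss = model(images, texts)
opt_adamw.zero_grad()
opt_sgd.zero_grad()
loss.backward()
opt_adamw.step()
opt_sgd.step()
```

Stepping two optimizers over disjoint parameter groups is one straightforward way to get per-tower update rules without a custom optimizer class.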

@vealocia
Author

vealocia commented Jul 7, 2022

Got it! Thanks for your answers.

@vealocia vealocia closed this as completed Jul 7, 2022
@vealocia
Author

vealocia commented Jul 7, 2022

By the way, @zlccccc, can you share any training logs from your experiments (CLIP, DeCLIP, or others)? That would be very helpful and greatly appreciated.
