[keras/optimizer_v2/ftrl.py] Reflect defaults in docstring #44679

Merged: 2 commits, Nov 17, 2020
5 changes: 3 additions & 2 deletions tensorflow/python/keras/optimizer_v2/ftrl.py
@@ -64,16 +64,17 @@ class Ftrl(optimizer_v2.OptimizerV2):
     initial_accumulator_value: The starting value for accumulators.
       Only zero or positive values are allowed.
     l1_regularization_strength: A float value, must be greater than or
-      equal to zero.
+      equal to zero. Defaults to 0.0.
     l2_regularization_strength: A float value, must be greater than or
-      equal to zero.
+      equal to zero. Defaults to 0.0.
     name: Optional name prefix for the operations created when applying
       gradients. Defaults to `"Ftrl"`.
     l2_shrinkage_regularization_strength: A float value, must be greater than
       or equal to zero. This differs from L2 above in that the L2 above is a
       stabilization penalty, whereas this L2 shrinkage is a magnitude penalty.
       When input is sparse shrinkage will only happen on the active weights.
     beta: A float value, representing the beta value from the paper.
+      Defaults to 0.0.
     **kwargs: Keyword arguments. Allowed to be one of
       `"clipnorm"` or `"clipvalue"`.
       `"clipnorm"` (float) clips gradients by norm; `"clipvalue"` (float) clips
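For context on what these hyperparameters do, here is a minimal pure-Python sketch of the FTRL-Proximal per-coordinate update (McMahan et al., 2013) that the documented defaults feed into. The function `ftrl_step` and its scalar form are illustrative assumptions for this sketch, not TensorFlow's implementation; the default values (`l1`, `l2`, `beta` of 0.0, learning rate 0.001, accumulator starting at 0.1) mirror the docstring above.

```python
import math

def ftrl_step(w, z, n, g, alpha=0.001, beta=0.0, l1=0.0, l2=0.0):
    """One FTRL-Proximal update for a single scalar weight.

    Hypothetical helper for illustration only. z and n follow the
    paper's notation: z accumulates adjusted gradients, n accumulates
    squared gradients. Defaults mirror the Keras docstring.
    """
    n_new = n + g * g
    # Per-coordinate learning-rate decay term (learning_rate_power = -0.5).
    sigma = (math.sqrt(n_new) - math.sqrt(n)) / alpha
    z_new = z + g - sigma * w
    if abs(z_new) <= l1:
        # L1 regularization keeps small coordinates exactly at zero (sparsity).
        w_new = 0.0
    else:
        w_new = -(z_new - math.copysign(l1, z_new)) / (
            (beta + math.sqrt(n_new)) / alpha + 2.0 * l2)
    return w_new, z_new, n_new

# Usage: repeated positive gradients push the weight negative.
w, z, n = 0.0, 0.0, 0.1  # initial_accumulator_value defaults to 0.1
for _ in range(5):
    w, z, n = ftrl_step(w, z, n, g=1.0)
```

With the default `l1_regularization_strength=0.0` the thresholding branch never fires; raising `l1` makes small coordinates snap to exactly zero, which is the sparsity property FTRL is typically chosen for.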