L2 regularization seems to be reduplicated for FTRL optimization #223

L2 regularization seems to be reduplicated for FTRL optimization. Take LR as an example: the L2 term is already added to the gradient of the loss, and the proximal operator in FTRL covers the L2 regularization as well, so the former seems to be duplicated. FM and FFM have the same problem.
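For context, a minimal statement of the per-coordinate FTRL-Proximal update (following McMahan et al., "Ad Click Prediction: a View from the Trenches"; this is the standard textbook form, not a quote of xLearn's code):

```latex
% Per-coordinate FTRL-Proximal update; z_i and n_i accumulate the
% (adjusted) gradients and squared gradients, alpha and beta are
% learning-rate parameters, and lambda_2 is the L2 term handled by
% the proximal operator.
w_{t+1,i} =
\begin{cases}
0 & \text{if } \lvert z_i \rvert \le \lambda_1,\\[4pt]
-\left( \dfrac{\beta + \sqrt{n_i}}{\alpha} + \lambda_2 \right)^{-1}
\left( z_i - \operatorname{sgn}(z_i)\,\lambda_1 \right) & \text{otherwise.}
\end{cases}
```

Since \lambda_2 already shrinks w through the denominator here, any \lambda_2 w term folded into the gradient that feeds z_i would apply the same L2 a second time, which is the duplication being reported.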
Comments

@matricer Thanks for your issue. I will check it as soon as possible.
@matricer I see what you mean. I found that comment in TensorFlow; it says: […] I guess it's the same as TensorFlow: they both use the online L2 and the shrinkage-type L2, and these two L2 terms share the same value in xLearn.
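If the referenced comment is the one in TensorFlow's FtrlOptimizer docstring, the formulation it sketches is roughly the following (paraphrased, not a verbatim quote; \lambda_{\text{shrinkage}} corresponds to TensorFlow's l2_shrinkage_regularization_strength):

```latex
% Two distinct L2 terms in TensorFlow-style FTRL: the "online" L2
% (lambda_2) sits inside the argmin as a stabilization penalty, while
% the shrinkage-type L2 enters through the modified gradient \hat{g}.
w_{t+1} = \arg\min_{w} \left( \hat{g}_{1:t} \cdot w
        + \lambda_1 \lVert w \rVert_1
        + \lambda_2 \lVert w \rVert_2^2 \right),
\qquad \hat{g} = g + 2\,\lambda_{\text{shrinkage}}\, w
```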
@etveritas I get your idea. In TensorFlow, in the absence of L1 regularization, the FTRL update gives: […]
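The truncated formula is presumably the update rule from that same docstring: with \lambda_1 = 0, the step reduces to (again paraphrased, with \eta_t the learning rate at step t):

```latex
% FTRL step without L1: a damped gradient step plus an extra
% multiplicative decay of w_t contributed by the shrinkage L2.
w_{t+1} = w_t
        - \frac{\eta_t}{1 + 2\,\lambda_2\,\eta_t}\, g_t
        - \frac{2\,\lambda_{\text{shrinkage}}\,\eta_t}{1 + 2\,\lambda_2\,\eta_t}\, w_t
```

Here \lambda_2 damps the step while \lambda_{\text{shrinkage}} directly decays w_t, so a single parameter feeding both penalizes the weight through both paths at once.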
@matricer yep.

@aksnzhy @etveritas thanks~
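For anyone reproducing the behavior, here is a minimal sketch of the suspected double counting, assuming (as the issue describes) that the gradient handed to the FTRL updater already carries a \lambda_2 w term. The function ftrl_update and all parameter names are illustrative, not xLearn's actual API:

```python
import numpy as np

# Hypothetical per-coordinate FTRL-Proximal state and update, mirroring the
# closed form quoted above; illustrative only, not xLearn's implementation.
alpha, beta, lambda_1, lambda_2 = 0.1, 1.0, 0.0, 0.5

def ftrl_update(w, z, n, g):
    """One FTRL-Proximal step for a single weight, given gradient g."""
    sigma = (np.sqrt(n + g * g) - np.sqrt(n)) / alpha
    z += g - sigma * w
    n += g * g
    if abs(z) <= lambda_1:
        w = 0.0
    else:
        # lambda_2 is applied here, inside the proximal operator...
        w = -(z - np.sign(z) * lambda_1) / ((beta + np.sqrt(n)) / alpha + lambda_2)
    return w, z, n

w, z, n = 0.3, 0.0, 0.0
loss_grad = 0.8                   # gradient of the data loss alone
g = loss_grad + lambda_2 * w      # ...and lambda_2 * w is added again here:
                                  # this is the duplication the issue reports
w, z, n = ftrl_update(w, z, n, g)
print(w)
```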