Move regularizers to layer definitions? #73
Comments
I see how this could be useful, but wouldn't it significantly complicate the manipulation of layers? How do you envision this would be set up, technically?
I created a pull request with a working implementation. Let me know what you think.
Hello,
Great job with keras! I wanted to see what you thought about this before I began hacking on it since it would involve some breaking changes.
It seems to me that the regularizers, i.e. maxnorm, L1 and L2, would be more flexible if they were incorporated into the layer definitions, so that different regularization and/or constraints could be applied at each layer if desired. The reason I bring this up is that I wanted to add a non-negativity constraint at a particular layer, but there didn't seem to be a straightforward way to do so.
Let me know any thoughts.
Best,
Mike
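The per-layer setup described above could be sketched as follows. This is a minimal illustration of the idea, not Keras's actual API: the class names (`Dense`, `L2`, `NonNeg`) and method names here are hypothetical, chosen to show how a layer could own its regularizer (a penalty added to the loss) and its constraint (a projection applied to the weights after each update).

```python
import numpy as np

# Hypothetical sketch of the proposal: each layer carries its own
# regularizer and constraint instead of relying on a global setting.

class L2:
    """Per-layer L2 weight penalty (illustrative, not Keras's API)."""
    def __init__(self, strength=0.01):
        self.strength = strength

    def penalty(self, w):
        # contribution added to the loss for this layer's weights
        return self.strength * np.sum(w ** 2)

class NonNeg:
    """Non-negativity constraint: project weights after each update."""
    def __call__(self, w):
        return np.maximum(w, 0.0)

class Dense:
    def __init__(self, weights, regularizer=None, constraint=None):
        self.w = np.asarray(weights, dtype=float)
        self.regularizer = regularizer
        self.constraint = constraint

    def apply_update(self, grad, lr=0.1):
        # gradient step, then enforce this layer's constraint, if any
        self.w -= lr * np.asarray(grad, dtype=float)
        if self.constraint is not None:
            self.w = self.constraint(self.w)

layer = Dense([[0.5, -0.2]], regularizer=L2(0.01), constraint=NonNeg())
layer.apply_update([[0.0, 0.0]])
print(layer.w)  # negative entry projected to 0
```

Because the regularizer and constraint are attributes of the layer itself, different layers can freely mix, say, L2 on one layer and non-negativity on another, which is the flexibility the issue asks for.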