About layer regularization. #39
Hi, @Tyler-D

Well, I think it would be better if there were a function that adds a specific regularizer to all layers. According to this and this issue, it could be implemented as follows:

```python
def set_regularization(model,
                       kernel_regularizer=None,
                       bias_regularizer=None,
                       activity_regularizer=None):
    for layer in model.layers:
        # set kernel_regularizer
        if kernel_regularizer is not None and hasattr(layer, 'kernel_regularizer'):
            layer.kernel_regularizer = kernel_regularizer
        # set bias_regularizer
        if bias_regularizer is not None and hasattr(layer, 'bias_regularizer'):
            layer.bias_regularizer = bias_regularizer
        # set activity_regularizer
        if activity_regularizer is not None and hasattr(layer, 'activity_regularizer'):
            layer.activity_regularizer = activity_regularizer

# example
set_regularization(model, kernel_regularizer=keras.regularizers.l2(0.0001))
model.compile(...)  # you have to recompile the model if the regularization changes
```

I did not test this code; if it works, it could be added as a utils function.
Cool, that's exactly the function I want. I could help add it. What kind of test do you need?
Actually, I'm wondering whether it would be possible to build a full segmentation pipeline on top of your repo, including: training, evaluation, data loaders for public datasets (e.g. Pascal VOC, COCO), and even an export tool that converts the Keras model to an inference framework (e.g. TensorRT). Then I'm sure this repository would be extremely appealing.
Just test that it works as expected: the regularization appears in the conv/dense layers and is applied during training.
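A quick check could look like this (a sketch assuming `tf.keras`; the toy model and layer sizes are arbitrary). When a regularizer is active, each regularized kernel contributes one penalty tensor to `model.losses`, which Keras adds to the training loss:

```python
import tensorflow as tf

reg = tf.keras.regularizers.l2(1e-4)

# Toy model with regularizers passed at construction time,
# just to show what "applied" looks like.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, kernel_regularizer=reg),
    tf.keras.layers.Dense(1, kernel_regularizer=reg),
])

# One L2 penalty term per regularized kernel.
print(len(model.losses))
```

A test for the helper could build a model without regularizers, apply `set_regularization`, and assert that `model.losses` changes accordingly.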
The segmentation pipeline is a cool idea; however, I think it should be built in another repo or written as an example here.
I've tried the code you suggested in my training scripts, and the thing is that only the model config is changed. After some investigation, I found this. A workaround can be found here:
It doesn't seem like an elegant way to do things. I'm thinking about how to refactor it.
Yes, I agree, this is not an elegant way:

```python
from keras.models import model_from_json

def set_regularization(model,
                       kernel_regularizer=None,
                       bias_regularizer=None,
                       activity_regularizer=None):
    for layer in model.layers:
        # set kernel_regularizer
        if kernel_regularizer is not None and hasattr(layer, 'kernel_regularizer'):
            layer.kernel_regularizer = kernel_regularizer
        # set bias_regularizer
        if bias_regularizer is not None and hasattr(layer, 'bias_regularizer'):
            layer.bias_regularizer = bias_regularizer
        # set activity_regularizer
        if activity_regularizer is not None and hasattr(layer, 'activity_regularizer'):
            layer.activity_regularizer = activity_regularizer
    # rebuild the model from its (updated) config so the regularizers take effect
    out = model_from_json(model.to_json())
    out.set_weights(model.get_weights())
    return out

new_model = set_regularization(model, kernel_regularizer=keras.regularizers.l2(0.0001))
new_model.compile(...)
```
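To illustrate the rebuild trick end to end, here is a minimal sketch (assuming `tf.keras`; the toy model is arbitrary and the helper is trimmed to the kernel regularizer only). Setting the attribute alone only updates the layer config; serializing to JSON and rebuilding makes the penalty terms actually show up in `model.losses`:

```python
import tensorflow as tf
from tensorflow.keras.models import model_from_json

def set_regularization(model, kernel_regularizer=None):
    for layer in model.layers:
        if kernel_regularizer is not None and hasattr(layer, 'kernel_regularizer'):
            layer.kernel_regularizer = kernel_regularizer
    # Rebuild from the (now updated) config and copy the weights over.
    out = model_from_json(model.to_json())
    out.set_weights(model.get_weights())
    return out

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8),
    tf.keras.layers.Dense(1),
])

new_model = set_regularization(model, tf.keras.regularizers.l2(1e-4))
# The rebuilt model carries one penalty tensor per regularized kernel,
# while keeping the original weights.
print(len(new_model.losses))
```

The round trip works because `Dense.get_config()` reads the `kernel_regularizer` attribute when serializing, so the rebuilt layers create their weights with the regularizer attached.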
Hi @Tyler-D, ok, no problem.
Try this:
I'm curious why there is no regularizer option for the layers. Is it a trick to train without regularization?