
Add set_regularization #54

Merged: 6 commits merged into qubvel:master on Feb 21, 2019
Conversation

Tyler-D (Contributor) commented on Feb 18, 2019:

This PR adds a set_regularization function. The config and the model itself are updated correctly.
In my training scripts, the training loss includes both the softmax loss and the regularization loss.
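For context, a minimal sketch of how such a utility can work in Keras 2.x. The body below is an illustrative reconstruction, not the exact PR code (the merged implementation also accepts bias, activity, gamma, and beta regularizers, as the review comments further down show, and models with custom layers may additionally need custom_objects when deserializing):

from keras.models import model_from_json

def set_regularization(model, kernel_regularizer=None):
    # Attach the regularizer to every layer that supports it. Mutating the
    # attribute alone has no effect on an already-built layer; the penalty
    # is only collected when the model is rebuilt from its config.
    for layer in model.layers:
        if kernel_regularizer is not None and hasattr(layer, 'kernel_regularizer'):
            layer.kernel_regularizer = kernel_regularizer

    # Round-trip through JSON so the updated config takes effect,
    # then copy the trained weights over.
    new_model = model_from_json(model.to_json())
    new_model.set_weights(model.get_weights())
    return new_model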

qubvel (Owner) commented on Feb 18, 2019:

Hi, @Tyler-D
What about the gamma and beta regularizers in the BatchNormalization layers?
Could you explain in more detail how you checked that the regularization is applied? Is it possible to write a test that checks it?

Tyler-D (Contributor, Author) commented on Feb 19, 2019:

OK. There is currently no regularizer for the BatchNormalization layers. I can add it as an optional argument (though I haven't used this kind of regularization in my own work).

I checked the total loss produced by Keras, which includes the softmax loss plus every layer's regularization loss. I also tracked the softmax loss alone through the metrics option. Before setting regularizers, total loss = softmax loss; after setting regularizers, total loss > softmax loss.

AFAIK, Keras cannot track the regularization loss separately (the model adds the regularization losses layer by layer during compile). I'm not sure how to build a test for this function; it seems like testing Keras' own functionality. Any suggestions?
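For reference, while Keras does not report the regularization loss as a separate metric, the per-layer penalty tensors it folds into the total loss can still be inspected through the model.losses collection. A small TF1-era sketch (new_model here is assumed to be the regularized model):

import keras.backend as K

# Each entry in model.losses is one layer's regularization penalty tensor;
# at compile time Keras adds their sum to the objective, so they never
# appear as a separate metric in the training logs.
print('regularization terms:', len(new_model.losses))
if new_model.losses:
    reg_total = K.get_session().run(sum(new_model.losses))
    print('current regularization loss:', reg_total)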

qubvel (Owner) commented on Feb 19, 2019:

I came up with the following test:

import numpy as np
from keras.regularizers import l2

x = np.ones((1, 32, 32, 3))
y = np.ones((1, 32, 32, 1))

model = ...  # load any kind of model
new_model = set_regularization(model, l2(0.01))

# define a loss function that always returns 0
def my_loss(gt, pr):
    return pr * 0

model.compile('Adam', loss=my_loss, metrics=['binary_accuracy'])
new_model.compile('Adam', loss=my_loss, metrics=['binary_accuracy'])

loss_1, _ = model.test_on_batch(x, y)
loss_2, _ = new_model.test_on_batch(x, y)

assert loss_1 == 0.
assert loss_2 > 0.

I would be happy if you create test/test_utils.py and add this test (see the other tests as examples), plus add regularizers for the BatchNormalization layers. Then I will merge. Thanks in advance.


qubvel (Owner) commented on the diff:

CASE = (
    (X1, Y1, Unet('resnet18')),
    (X1, Y1, MODEL),
)


def _test_regularizer(model, reg_model, x, y):

    def zero_loss(pr, gt):

Comment: def zero_loss(gt, pr):
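A plausible completion of that test helper, following the test qubvel sketched above (an assumption based on that sketch, not necessarily the exact code merged in test/test_utils.py):

def _test_regularizer(model, reg_model, x, y):

    def zero_loss(gt, pr):
        # zero out the data term so that any remaining loss
        # can only come from the layer regularizers
        return pr * 0

    model.compile('Adam', loss=zero_loss)
    reg_model.compile('Adam', loss=zero_loss)

    loss_1 = model.test_on_batch(x, y)
    loss_2 = reg_model.test_on_batch(x, y)

    assert loss_1 == 0.
    assert loss_2 > 0.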

"""Set regularizers to all layers

Note:
Returned model's config is upated correctly
Copy link
Owner

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

updated

qubvel (Owner) commented on the diff:

bias_regularizer(``regularizer``): regularizer of bias
activity_regularizer(``regularizer``): regularizer of activity
gamma_regularizer(``regularizer``): regularizer of gamma of BatchNormalization
beta_regularizer(``regularizer``): regularizer of bata of BatchNormalization

Comment: bata -> beta
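Putting the documented arguments together, usage would look roughly like the sketch below. The bias, activity, gamma, and beta argument names come from the docstring above; the kernel_regularizer name and the import path are assumptions:

from keras.regularizers import l2
from segmentation_models import Unet
from segmentation_models.utils import set_regularization  # import path assumed

model = Unet('resnet18')
model = set_regularization(
    model,
    kernel_regularizer=l2(1e-4),   # name assumed, by analogy with the others
    bias_regularizer=l2(1e-4),
    activity_regularizer=None,
    gamma_regularizer=l2(1e-4),    # BatchNormalization scale
    beta_regularizer=l2(1e-4),     # BatchNormalization offset
)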

Tyler-D (Contributor, Author) commented on Feb 20, 2019:

Hi @qubvel, apart from the typos you pointed out, do you have any idea about this error produced by CI:

ValueError: Tensor Tensor("bn_data/beta:0", shape=(3,), dtype=float32_ref) is not an element of this graph.

It runs correctly in my own environment:
Python 3.5
Keras 2.2.4
TensorFlow 1.9.0
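For what it's worth, this "is not an element of this graph" error in TF1-era Keras usually means the rebuilt model references tensors from a different default graph, something pytest runs tend to accumulate across tests. A common workaround (not necessarily what fixed CI in this PR) is to reset the backend session before building each model:

import keras.backend as K

# TF1 Keras keeps a single global graph; stale tensors from models built
# earlier in the test run can leak into model_from_json. Clearing the
# session before each model keeps everything in one fresh graph.
K.clear_session()
model = Unet('resnet18')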

qubvel (Owner) commented on Feb 20, 2019:

Hi @Tyler-D,
Use def zero_loss(gt, pr): instead of def zero_loss(pr, gt):. The arguments should be reversed; the first one is gt (the ground truth).

qubvel (Owner) commented on Feb 20, 2019:

That does not help :( I will try to test it on my local PC.

qubvel merged commit 33ec83b into qubvel:master on Feb 21, 2019.