
Serialisation doesn't allow variables as loss_weights #9444

Closed
kmcnaught opened this issue Feb 21, 2018 · 8 comments
Labels
To investigate Looks like a bug. It needs someone to investigate.

Comments

@kmcnaught

kmcnaught commented Feb 21, 2018

I've been training a model with multiple losses, where the loss weights need to be updated during training via a callback. This works fine, except that when I try to save the model, I get an error:

TypeError: ('Not JSON Serializable:', <tf.Variable 'Variable:0' shape=() dtype=float32_ref>)
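For context, a minimal stand-in for the failure (illustrative, not the reported model: when saving, Keras JSON-encodes the training config, and `json.dumps` raises for any object it does not recognize, such as a backend variable passed via loss_weights):

```python
# Minimal stand-in for the failure (illustrative): when saving, Keras encodes
# the training config with json.dumps, which raises TypeError for any object
# it does not recognize -- such as a tf.Variable passed via loss_weights.
import json

class VariableLike:
    """Stands in for <tf.Variable 'Variable:0' shape=() dtype=float32_ref>."""
    pass

try:
    json.dumps({'loss_weights': [VariableLike(), 1.0]})
except TypeError as e:
    print('TypeError:', e)
```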

I've written two test cases, one passing, one failing, to demonstrate:
test cases

I'm running on OSX, Python 3.6.4, Tensorflow backend, CPU only. I freshly installed everything for the test.
pip list:

absl-py (0.1.10)
appdirs (1.4.3)
attrs (17.4.0)
bleach (1.5.0)
h5py (2.7.1)
html5lib (0.9999999)
Keras (2.1.4)
Markdown (2.6.11)
numpy (1.14.1)
packaging (16.8)
pip (9.0.1)
pluggy (0.6.0)
protobuf (3.5.1)
py (1.5.2)
pyparsing (2.2.0)
pytest (3.4.1)
PyYAML (3.12)
scipy (1.0.0)
setuptools (38.5.1)
six (1.10.0)
tensorflow (1.5.0)
tensorflow-tensorboard (1.5.1)
Werkzeug (0.14.1)
wheel (0.30.0)

@Dref360 Dref360 added the To investigate Looks like a bug. It needs someone to investigate. label Feb 21, 2018
@janzd

janzd commented Jun 7, 2018

I experienced the same issue but it seems I've figured out a workaround. Define a custom loss function with an additional argument using a closure as described in comments in issue 2121. You will set this function as a loss for your model and pass a K.variable as an argument to the function. That will be your loss weight. K.variable is updated using a custom callback, as described in issue 2595.

To give you a clearer idea:

import tensorflow as tf
from keras import backend as K

def dice_loss(training_mask, loss_weight):
    def loss(y_true, y_pred):
        eps = 1e-5  # avoids division by zero in the union term
        intersection = tf.reduce_sum(y_true * y_pred * training_mask)
        union = tf.reduce_sum(y_true * training_mask) + tf.reduce_sum(y_pred * training_mask) + eps
        dice = 1. - (2. * intersection / union)
        return dice * loss_weight  # the K.variable scales the loss inside the closure
    return loss

score_map_loss_weight = K.variable(1.)
loss_weight = LossWeight(score_map_loss_weight)  # LossWeight is your custom callback

model.compile(loss=[dice_loss(training_mask, score_map_loss_weight), rbox_loss(training_mask)], optimizer=opt)

@sallamander
Contributor

@Dref360 has there been any more investigation into this, or is the only current workaround the solution that @kurapan has proposed?

@sallamander
Contributor

@kurapan thanks for that suggested fix, it worked for me!

@dongfang91

I had the same problem. I did a very dirty fix: in the model-saving code, just before the line

raise TypeError('Not JSON Serializable:', obj)

I added:

    else:
        return float(obj.get_value() or 0)

I hope someone can figure out a better solution.
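For illustration, a pure-Python sketch of what this patch does (the real change goes into the type-conversion helper in Keras' saving code; `FakeVariable` here is a stand-in for a backend variable):

```python
# Pure-Python sketch of the dirty fix (illustrative; the real code lives in
# Keras' saving module, whose JSON type helper raises for unknown objects).
import json

class FakeVariable:
    """Stand-in for a backend variable exposing get_value()."""
    def __init__(self, value):
        self._value = value
    def get_value(self):
        return self._value

def get_json_type(obj):
    if hasattr(obj, 'get_value'):           # the added fallback:
        return float(obj.get_value() or 0)  # serialize the variable's current value
    raise TypeError('Not JSON Serializable:', obj)

config = {'loss_weights': [FakeVariable(0.25), 1.0]}
print(json.dumps(config, default=get_json_type))  # {"loss_weights": [0.25, 1.0]}
```

Note the trade-off: this bakes the variable's current value into the saved config, so the saved model no longer records that the weight was dynamic.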

@was84san

Is there any solution for this bug? I think the Keras people should fix it in a later version. Is there any possibility of an update on this issue?

If the 'loss_weights' argument worked with variables, that would be better than using a customized loss function.

@vsatyakumar

@fchollet Could really use a fix for this! Thanks.

@patebel

patebel commented Aug 16, 2019

Would need that fix too. The workaround from @kurapan does work, but it implies that if one still wants to log the unweighted loss values, one needs to introduce additional metrics.
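A sketch of that extra-metrics pattern, using pure-Python stand-ins so it runs anywhere (in Keras, the unweighted function would be passed via the `metrics` argument of `compile()`; `mae` here stands in for a built-in loss, and the dict stands in for the K.variable):

```python
# Illustrative sketch: the same base loss serves twice -- weighted inside the
# closure for training, and unweighted as a metric for logging.
def make_weighted_loss(base_loss, weight_ref):
    def weighted(y_true, y_pred):
        return base_loss(y_true, y_pred) * weight_ref['value']
    return weighted

def mae(y_true, y_pred):  # stand-in for a built-in Keras loss
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

weight_ref = {'value': 0.5}  # stands in for the K.variable a callback updates
loss_fn = make_weighted_loss(mae, weight_ref)

y_true, y_pred = [1.0, 2.0], [1.0, 3.0]
print(mae(y_true, y_pred))      # unweighted value, loggable as a metric: 0.5
print(loss_fn(y_true, y_pred))  # weighted value used for training: 0.25
```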

@was84san

was84san commented Feb 9, 2020

It is not just that it throws this error when trying to save the model at the end of an epoch; the loss_weights also do nothing to the total loss. My loss weights for a two-output network are {'out1': alpha, 'out2': 1 - alpha}, where alpha starts at 0 and ends at 1 via an update equation that depends on the epoch number (using a callback, of course). So the loss for 'out1' should be zero for the first epoch, but both losses contribute without any multiplication by the loss_weights. The only solution for me is to build a custom loss function, but I still need this issue to be fixed, because you want to use the built-in Keras losses and sometimes it is hard to build the same loss yourself.
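For what it's worth, a sketch of the schedule described above with plain Python (the exact update equation isn't given, so a linear ramp over the epochs is assumed; in the custom-loss workaround, this value would live in a K.variable updated from a callback's on_epoch_begin):

```python
# Illustrative alpha schedule: out1's weight ramps 0 -> 1 across training,
# so out1's loss contributes nothing in the first epoch and fully in the last.
def alpha_for_epoch(epoch, total_epochs):
    if total_epochs <= 1:
        return 1.0
    return min(1.0, epoch / float(total_epochs - 1))

weights = [{'out1': alpha_for_epoch(e, 5), 'out2': 1.0 - alpha_for_epoch(e, 5)}
           for e in range(5)]
print(weights[0])   # {'out1': 0.0, 'out2': 1.0} -> out1's loss starts at zero
print(weights[-1])  # {'out1': 1.0, 'out2': 0.0}
```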
