Changeable loss weights for multiple outputs #2595
Hi all, what's an easy way to set changeable loss weights for multiple outputs? For example, is there a way to modify the loss weights in a callback?
You can pass a list of Keras variables as the loss weights:

alpha = K.variable(0.5)
beta = K.variable(0.5)
model.compile(..., loss_weights=[alpha, beta], ...)

and define your own callback:

from keras import backend as K
from keras.callbacks import Callback

class MyCallback(Callback):
    def __init__(self, alpha, beta):
        self.alpha = alpha
        self.beta = beta

    # customize your behavior
    def on_epoch_end(self, epoch, logs={}):
        # update the variables in place with K.set_value; plain reassignment
        # (self.alpha = self.alpha - 0.1) would only rebind the Python
        # attribute and never reach the compiled graph
        K.set_value(self.alpha, K.get_value(self.alpha) - 0.1)
        K.set_value(self.beta, K.get_value(self.beta) + 0.1)

then pass it into model.fit(..., callbacks=[MyCallback(alpha, beta)], ...). P.S. Haven't thoroughly tested it.
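For completeness, a minimal end-to-end sketch of this recipe with the multi-backend Keras this thread targets. The architecture, layer names, and data here are made up purely for illustration:

import numpy as np
from keras import backend as K
from keras.callbacks import Callback
from keras.layers import Dense, Input
from keras.models import Model

# hypothetical two-output model, for illustration only
inp = Input(shape=(10,))
hidden = Dense(16, activation="relu")(inp)
out_a = Dense(1, name="out_a")(hidden)
out_b = Dense(1, name="out_b")(hidden)
model = Model(inp, [out_a, out_b])

alpha = K.variable(0.5)
beta = K.variable(0.5)
model.compile(optimizer="adam", loss="mse", loss_weights=[alpha, beta])

class WeightShifter(Callback):
    def __init__(self, alpha, beta):
        super().__init__()
        self.alpha = alpha
        self.beta = beta

    def on_epoch_end(self, epoch, logs=None):
        # shift weight from the first loss to the second after every epoch
        K.set_value(self.alpha, max(0.0, K.get_value(self.alpha) - 0.1))
        K.set_value(self.beta, min(1.0, K.get_value(self.beta) + 0.1))

x = np.random.rand(32, 10)
y = [np.random.rand(32, 1), np.random.rand(32, 1)]
model.fit(x, y, epochs=5, callbacks=[WeightShifter(alpha, beta)])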
The idea works, thanks!

def on_epoch_end(self, epoch, logs={}):
    if epoch == 2:
        K.set_value(self.alpha, K.get_value(self.alpha) / 1.5)
        K.set_value(self.beta, K.get_value(self.beta) * 1.5)
    logger.info("epoch %s, alpha = %s, beta = %s"
                % (epoch, K.get_value(self.alpha), K.get_value(self.beta)))
Hi guys. I can't find any documentation on what the loss_weights parameter actually does. @joelthchao can you provide an intuitive explanation?
@joelthchao thanks for this. I have actually read that example, but I don't see the intuition behind choosing "0.2". Why was this number chosen? What is the intuition behind this choice? That is what I don't understand.
It's a hyper-parameter; usually we need to adjust it according to (1) the importance of each loss and (2) the actual magnitude of each loss. We do need experiments to choose the right numbers, to prevent one loss from dominating the others.
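As a concrete, made-up illustration of the magnitude point: if the main loss typically sits around 1.0 and an auxiliary loss around 5.0, weights of roughly 1.0 and 0.2 put both contributions to the total loss on the same scale:

# hypothetical running magnitudes of two losses
main_loss, aux_loss = 1.0, 5.0
total = 1.0 * main_loss + 0.2 * aux_loss  # = 1.0 + 1.0, neither term dominates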
Makes sense. Thank you!
@joelthchao I want to update my parameter. How can I change the alpha in the callback so that it actually changes the alpha in the loss function? It is not correct for my application to pass …
@joelthchao don't we need to compile the model again in this case?
It doesn't seem like we need to recompile as long as we're using the variables. I can confirm this from running the code myself, and by looking at this comment. Let me know if you find different behavior.
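One way to check this on your own setup (a sketch reusing model, x, and y from the snippet further up; behavior may differ across Keras versions): compile once with variable weights, then evaluate before and after updating them, without recompiling:

from keras import backend as K

alpha = K.variable(1.0)
beta = K.variable(0.0)
model.compile(optimizer="adam", loss="mse", loss_weights=[alpha, beta])
loss_before = model.evaluate(x, y, verbose=0)[0]  # [0] is the weighted total loss

# flip the weights in place, no recompile
K.set_value(alpha, 0.0)
K.set_value(beta, 1.0)
loss_after = model.evaluate(x, y, verbose=0)[0]

# if the trick works on your version, the totals reflect the new weights
print(loss_before, loss_after)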
@mycal-tucker what do you mean by variables? If I have a function like this, will it update the params?
I mean a Keras variable. I think your current setup wouldn't work because you're just updating plain Python numbers. Here's something I did that doesn't perfectly match the Callback-class pattern above but that does work; porting my code over to your setup shouldn't be too much trouble.
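A sketch of the general idea (the names and the decay schedule here are invented, and the original snippet is not shown above): keep the weight in a class-level ("static") Keras variable that a custom loss closes over, and update it from a callback:

from keras import backend as K
from keras.callbacks import Callback

class WeightHolder:
    # class-level ("static") variable, so the loss function and the
    # callback both refer to the same graph variable
    alpha = K.variable(0.5)

def weighted_mse(y_true, y_pred):
    # alpha is baked into the compiled graph; K.set_value changes it in place
    return WeightHolder.alpha * K.mean(K.square(y_pred - y_true), axis=-1)

class AlphaDecay(Callback):
    def on_epoch_end(self, epoch, logs=None):
        K.set_value(WeightHolder.alpha, K.get_value(WeightHolder.alpha) * 0.9)

# model.compile(optimizer="adam", loss=weighted_mse)
# model.fit(x, y, callbacks=[AlphaDecay()])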
Does this make sense to you? As I point out in the comments, my static-variable workaround is probably not the most elegant way of getting things working, but it at least works.
TypeError: ('Not JSON Serializable:')
I've come across the same error.
I have tried the trick from https://github.com/keras-team/keras/issues/9444#issuecomment-395260154.
Changing the loss_weights from a custom callback as described (@joelthchao and others) doesn't seem to work for me. The K.variable values do change, but the total-loss calculation doesn't take the new values into account. Consider an example where the loss_weights were supposed to simply switch from the loss of one output, (0, 1), to the loss of the other output, (1, 0): the individual loss values change, but the total loss doesn't seem to use the updated loss_weights in its calculation, even though they print fine. What am I missing?
I think you shouldn't reassign the variables with K.variable inside the callback. Instead, just assign new values to the existing variables with K.set_value, like this:
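That is, roughly (a sketch of the two patterns; new_alpha is a placeholder):

# wrong: K.variable(...) creates a brand-new variable that the
# compiled model never sees
self.alpha = K.variable(new_alpha)

# right: update, in place, the variable the model was compiled with
K.set_value(self.alpha, new_alpha)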
I did this and it was working. Thanks @joelthchao & @xingdi-eric-yuan :)

import numpy as np
from keras import backend as K
from keras.callbacks import Callback

class LossWeightAdjust(Callback):
    def __init__(self, alpha, beta, gamma, delta):
        self.alpha = alpha
        self.beta = beta
        self.gamma = gamma
        self.delta = delta

    # customize your behavior
    def on_epoch_end(self, epoch, logs):
        # collect the four validation losses (note: order follows the logs dict)
        losses = np.array([v for k, v in logs.items()
                           if k in ['val_starts_0_loss', 'val_stops_0_loss',
                                    'val_starts_1_loss', 'val_stops_1_loss']],
                          dtype=np.float64)
        # stretch the spread so larger losses get larger weights,
        # then normalize the weights to sum to 1
        losses = (losses - 0.5 * losses.min()) / (losses.max() - 0.5 * losses.min())
        losses = losses / np.sum(losses)
        K.set_value(self.alpha, losses[0])
        K.set_value(self.beta, losses[1])
        K.set_value(self.gamma, losses[2])
        K.set_value(self.delta, losses[3])
        print("\n Loss weights recalibrated to alpha = %s, beta = %s, gamma = %s, delta = %s"
              % (np.round(losses[0], 2), np.round(losses[1], 2),
                 np.round(losses[2], 2), np.round(losses[3], 2)))
        logger.info("Loss weights recalibrated to alpha = %s, beta = %s, gamma = %s, delta = %s"
                    % (K.get_value(self.alpha), K.get_value(self.beta),
                       K.get_value(self.gamma), K.get_value(self.delta)))

span_detection_model = build_model()
alpha = K.variable(0.25)
beta = K.variable(0.25)
gamma = K.variable(0.25)
delta = K.variable(0.75)
span_detection_model.compile(..., loss_weights={"starts_0": alpha, "stops_0": beta,
                                                "starts_1": gamma, "stops_1": delta})
This does not work for me.