
changing loss weight during training #6446

Closed
hakantekbas opened this issue Apr 29, 2017 · 7 comments
Labels
type:support User is asking for help / asking an implementation question. Stackoverflow would be better suited.

Comments

hakantekbas commented Apr 29, 2017

Hi,

I am trying to change the loss weights during training. When I check the source code, the loss weights are set at compile time, and by the time I call fit, compilation is already over. Is there an easy way to accomplish this? I saw some issues related to the learning rate, but changing the code "as in the learning rate example" could affect other parts.

As I have limited experience with Theano and Keras, I need your help. Thanks. (Keras version 1.1.1)

fchollet (Member) commented
I suppose you are referring to the loss_weights argument in compile. There are two ways you could do this:

  • (simple) just recompile your model with new loss_weights values when you want to adjust the loss weights.
  • Pass symbolic tensors as the loss_weights values, and change their values during training via a callback (see the sketch below).
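A minimal sketch of the second option, which is also the pattern linked later in this thread (#2595). It assumes the Keras 2 functional API and the keras.backend K.variable/K.set_value calls; the toy two-output model, the layer names, and the switch epoch are hypothetical:

import numpy as np
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Dense
from keras.callbacks import Callback

# Loss weights as backend variables, so their values can be changed after compile.
alpha = K.variable(1.0)
beta = K.variable(0.0)

inp = Input(shape=(10,))
hidden = Dense(16, activation='relu')(inp)
out_a = Dense(2, activation='softmax', name='out_a')(hidden)
out_b = Dense(2, activation='softmax', name='out_b')(hidden)
model = Model(inputs=inp, outputs=[out_a, out_b])
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              loss_weights=[alpha, beta])

class LossWeightScheduler(Callback):
    """Shift the weighting from the first loss to the second at a given epoch."""
    def __init__(self, alpha, beta, switch_epoch=5):
        super(LossWeightScheduler, self).__init__()
        self.alpha = alpha
        self.beta = beta
        self.switch_epoch = switch_epoch

    def on_epoch_end(self, epoch, logs=None):
        if epoch == self.switch_epoch:
            # Mutate the variables in place; the compiled total loss references
            # these same tensors, so no recompile is needed.
            K.set_value(self.alpha, 0.0)
            K.set_value(self.beta, 1.0)

x = np.random.rand(64, 10)
y = np.eye(2)[np.random.randint(0, 2, size=64)]
model.fit(x, [y, y], epochs=10, verbose=0,
          callbacks=[LossWeightScheduler(alpha, beta)])

The key point is that K.set_value updates the variable the compiled loss already points at; rebinding a Python attribute (or passing plain floats) would not.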

fchollet added the type:support label on Apr 30, 2017
tofigh- commented May 24, 2017

@fchollet there seems to be a bug in there. Changing the loss_weights in the middle of training seems to have no effect; training continues with the initial weights. The following is a snippet of the code I used to test the loss_weights update. It successfully updates the values of alpha and beta, but this has no effect on the training. I also recompile the model with the updated weights, but there is still no effect...

import os
import sys

import numpy as np
import pandas as pd
from sklearn.utils import shuffle

from keras import backend as K
from keras.callbacks import Callback
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import SGD
from keras.utils import to_categorical

input_1 = Input(shape=(m,))
hidden_0 = Dense(units=10, activation='relu')(input_1)
predictions = Dense(2, activation='softmax')(hidden_0)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model = Model(inputs=input_1, outputs=[predictions, predictions])
alpha = K.variable(1.0)
beta = K.variable(0.0)
model.compile(optimizer=sgd,
              loss=['categorical_crossentropy', 'categorical_crossentropy'],
              loss_weights=[alpha, beta],
              metrics=['accuracy'])


class CustomValidationLoss(Callback):
    def __init__(self, alpha, beta):
        super(CustomValidationLoss, self).__init__()
        self.alpha = alpha
        self.beta = beta

    def on_epoch_end(self, epoch, logs=None):
        if epoch == 1:
            print("in model loss weight set")
            self.alpha = self.alpha * 0.0
            self.beta = self.beta + 1.0
            print(epoch, K.get_value(self.alpha), K.get_value(self.beta))
            model.compile(optimizer=sgd,
                          loss=['categorical_crossentropy', 'categorical_crossentropy'],
                          loss_weights=[self.alpha, self.beta],
                          metrics=['accuracy'])
            sys.stdout.flush()


data = shuffle(pd.read_csv(os.path.join(dir_path, 'train_data.csv')))
y_mse = data['SOFT_LABEL'].values
y_mse = np.vstack([1 - y_mse, y_mse]).T
y = to_categorical(data['LABEL'].values, 2)
X = data.values

custom_validation_loss = CustomValidationLoss(alpha, beta)
model.fit(X, [y, y_mse],
          epochs=10,
          batch_size=1024,
          verbose=2,
          callbacks=[custom_validation_loss])
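For anyone hitting the same symptom: a plausible explanation is that self.alpha = self.alpha * 0.0 builds a new tensor and merely rebinds the Python attribute, so the K.variable objects the model was originally compiled against are never modified. An in-place update, along the lines of the sketch in the earlier comment, would replace the body of on_epoch_end above with (the epoch threshold is kept from the snippet):

    def on_epoch_end(self, epoch, logs=None):
        if epoch == 1:
            # Update the existing backend variables in place; no recompile
            # is needed for the compiled loss to see the new values.
            K.set_value(self.alpha, 0.0)
            K.set_value(self.beta, 1.0)
            print(epoch, K.get_value(self.alpha), K.get_value(self.beta))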

kaeflint commented
@tofigh- have you been able to figure out how to do it?

rasto2211 commented Apr 7, 2018

Here's the solution to this problem: #2595

janzd commented Jun 7, 2018

I couldn't save my model using the solution in #2595 due to a JSON serialization error. See my workaround described in #9444 if you experience the same problem.
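For context, a common way around this class of error (not necessarily the workaround in #9444) is to avoid model.save, which tries to JSON-serialize compile-time arguments such as variable loss weights, and to persist the architecture and weights separately; the file names below are placeholders:

from keras.models import model_from_json

# Save: architecture as JSON (contains no compile arguments) plus weights as HDF5.
with open('model_architecture.json', 'w') as f:
    f.write(model.to_json())
model.save_weights('model_weights.h5')

# Load: rebuild the model, then compile again with fresh K.variable loss weights.
with open('model_architecture.json') as f:
    model = model_from_json(f.read())
model.load_weights('model_weights.h5')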

jhmlam commented Jul 6, 2018

Hi,

I am very new to Keras. I hope my question doesn't bother you too much :)
@fchollet

(simple) just recompile your model with new loss_weights values when you want to adjust the loss weights.

Just want to make very sure: when the model is recompiled, are the learned weights kept in RAM?
I ask because I am passing a changing number to the loss function in a for loop, e.g.

for epoch in range(100):
    self.model.compile(optimizer=optimizer, loss=self.build_loss(epoch=epoch))
    m_loss = self.model.fit(x=[self.Z_full_transformed], y=[self.Z_full_transformed],
                            epochs=100, verbose=0,
                            batch_size=np.array(self.X_train).shape[0], shuffle=False)

In this way, will the weights be "saved" for self.model?

P.S. Thanks for everything you've done for the Keras package; it's wonderful and has saved me a lot of time.

Best,
J
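For what it's worth, a quick way to convince yourself that compile does not reinitialize learned weights; the toy model and shapes here are hypothetical:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(4, input_shape=(3,), activation='relu'), Dense(1)])
model.compile(optimizer='sgd', loss='mse')
model.fit(np.random.rand(32, 3), np.random.rand(32, 1), epochs=1, verbose=0)

before = [w.copy() for w in model.get_weights()]
model.compile(optimizer='sgd', loss='mae')  # recompile with a different loss
after = model.get_weights()
print(all(np.array_equal(b, a) for b, a in zip(before, after)))  # True: weights survive

One caveat: while the layer weights survive a recompile, the optimizer's internal state (e.g. momentum accumulators) is generally rebuilt when you recompile.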

tejal567 commented
TypeError: ('Not JSON Serializable:')
This error occurs while saving the model. Any updates on this?
#9444
