```python
if regularizer is not None:
    regularizers = sum([tf.nn.l2_loss(variable) for variable in self.variables])
    loss += (regularizer * regularizers)
```
It seems that you have regularization on the biases as well, since `self.variables` includes the biases:
```python
variables = []
for w1, w2 in weights:
    variables.append(w1)
    variables.append(w2)
for b1, b2 in biases:
    variables.append(b1)
    variables.append(b2)
```
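If the intention is to penalize the weights only, here is a minimal sketch of one way to do it (assuming the same `weights` list of pairs, `regularizer` coefficient, and `loss` as in the code above; the `weight_variables` name is just for illustration). Biases are commonly excluded from the L2 penalty because they don't contribute to the weight magnitudes the penalty is meant to shrink:

```python
import tensorflow as tf

# Collect only the weight tensors; biases are left unregularized.
weight_variables = []
for w1, w2 in weights:
    weight_variables.append(w1)
    weight_variables.append(w2)

if regularizer is not None:
    # L2 penalty over weights only; tf.nn.l2_loss(t) computes sum(t ** 2) / 2.
    regularizers = sum(tf.nn.l2_loss(w) for w in weight_variables)
    loss += regularizer * regularizers
```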
The text was updated successfully, but these errors were encountered:
I think that when the batch size is not 1, we should divide `regularizers` by `2 * batch_size`, as in the following. What do you think?
```python
regularizers = sum([tf.nn.l2_loss(variable) for variable in self.variables]) / (2 * batch_size)
```
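For context, a minimal sketch of how that scaled penalty would plug into the loss (assuming the same `self.variables`, `regularizer`, and a `batch_size` value; note that `tf.nn.l2_loss(t)` already includes a factor of 1/2):

```python
if regularizer is not None:
    # Scale the penalty down by the number of examples so it stays
    # comparable to a mean (rather than summed) data loss. Use true
    # division: // (floor division) would truncate the penalty.
    regularizers = sum(tf.nn.l2_loss(v) for v in self.variables)
    loss += regularizer * regularizers / (2 * batch_size)
```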
Sorry for the very late reply.
The strength of the regularizer is a hyperparameter, just like the batch size, so we can cover their relation in the hyperparameter search instead of having it implicitly in the code, can't we?
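In other words, for any fixed batch size the division can be absorbed into the coefficient itself; a tiny illustration with hypothetical values:

```python
lam = 1e-3       # hypothetical regularizer strength from the param search
batch_size = 32  # hypothetical batch size

# Searching over lam with an explicit 1 / (2 * batch_size) factor is
# equivalent to searching over a rescaled coefficient lam_prime directly:
#   lam * regularizers / (2 * batch_size) == lam_prime * regularizers
lam_prime = lam / (2 * batch_size)
```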