Hi,
when I try to use the grid search method, I run into the problem that the training process can't be repeated/reinitialized: when I try another set of hyper-parameters, the training loss just continues from the last one.
For example, when I run
```python
for lr in [0.01, 0.1, 0.001]:
    net = tt.practical.MLPVanilla(in_features, num_nodes, out_features, batch_norm,
                                  dropout, output_bias=output_bias)
    model = CoxPH(net, tt.optim.Adam)
    model.optimizer.set_lr(lr)  # use the loop variable, not a fixed 0.01
    log = model.fit(x_train, y_train, batch_size, epochs, callbacks, verbose=True,
                    val_data=val, val_batch_size=batch_size)
```
Hi! This is a good question, and sorry for not getting back to you sooner.
To me it looks like the net is reinitialised fine, but I can see that you use the same set of callbacks for every iteration. I don't know what your callbacks contain, but I assume they might include an EarlyStopping object. If that is the case, you need to reinitialise it too, meaning you end up with something like this
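In short: construct the callbacks inside the loop, e.g. `callbacks = [tt.callbacks.EarlyStopping()]` as the first line of each iteration (assuming EarlyStopping is indeed what your list contains). The toy stand-in below is not the real torchtuples API; it is a minimal sketch of why sharing one instance misbehaves: the callback remembers the best validation loss across runs, so a later run that never beats it stops after `patience` epochs.

```python
# Toy illustration (NOT the real torchtuples API): a minimal EarlyStopping-like
# callback that remembers the best validation loss it has ever seen.
class EarlyStopping:
    def __init__(self, patience=2):
        self.patience = patience
        self.best = float("inf")
        self.n_bad = 0

    def on_epoch_end(self, val_loss):
        """Return True if training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.n_bad = 0
        else:
            self.n_bad += 1
        return self.n_bad >= self.patience

def train(losses, callback):
    """Pretend-train: feed a fixed sequence of validation losses."""
    epochs_run = 0
    for loss in losses:
        epochs_run += 1
        if callback.on_epoch_end(loss):
            break
    return epochs_run

# Reusing one callback across runs: the best loss remembered from run 1
# (0.3) is never beaten in run 2, so run 2 stops after `patience` epochs.
shared = EarlyStopping()
run1 = train([0.9, 0.5, 0.3], shared)            # full 3 epochs
run2 = train([0.9, 0.8, 0.7], shared)            # stops early at epoch 2
# A fresh callback per run resets the state, so run 3 trains fully.
run3 = train([0.9, 0.8, 0.7], EarlyStopping())
print(run1, run2, run3)  # prints: 3 2 3
```

The same reasoning applies to any stateful callback: create a fresh instance per grid-search iteration, just like you already do for `net` and `model`.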
The result is that the net isn't reinitialised for the next training run. I tried
del model
or
del log
and reinitialising the net, but they didn't work either.