grid_search issue #28

Closed
yuj23 opened this issue Mar 15, 2020 · 2 comments


yuj23 commented Mar 15, 2020

Hi,
When I try to use the grid search method, I run into the problem that the training process doesn't restart/reinitialize. When I try another set of hyper-parameters, the training loss just continues from where the last run left off.

For example, when I run

for lr in [0.01, 0.1, 0.001]:
    net = tt.practical.MLPVanilla(in_features, num_nodes, out_features, batch_norm,
                                  dropout, output_bias=output_bias)
    model = CoxPH(net, tt.optim.Adam)
    model.optimizer.set_lr(lr)
    log = model.fit(x_train, y_train, batch_size, epochs, callbacks, verbose=True,
                    val_data=val, val_batch_size=batch_size)

the result is

0:	[0s / 0s],		train_loss: 4.8275,	val_loss: 3.8472
1:	[0s / 0s],		train_loss: 4.7032,	val_loss: 3.8083
2:	[0s / 0s],		train_loss: 4.6465,	val_loss: 3.8090
3:	[0s / 0s],		train_loss: 4.6266,	val_loss: 3.8291
4:	[0s / 0s],		train_loss: 4.6113,	val_loss: 3.8204
5:	[0s / 0s],		train_loss: 4.6186,	val_loss: 3.8120
6:	[0s / 0s],		train_loss: 4.5811,	val_loss: 3.8143
7:	[0s / 0s],		train_loss: 4.6007,	val_loss: 3.8200
8:	[0s / 0s],		train_loss: 4.5901,	val_loss: 3.8217
9:	[0s / 0s],		train_loss: 4.5858,	val_loss: 3.8167
10:	[0s / 0s],		train_loss: 4.5680,	val_loss: 3.8179
11:	[0s / 0s],		train_loss: 4.5737,	val_loss: 3.8225

0:	[0s / 0s],		train_loss: 4.8113,	val_loss: 3.8552
0:	[0s / 0s],		train_loss: 4.7864,	val_loss: 3.8699  

The net doesn't seem to be reinitialized for the next training run. I tried del model and del log and then reinitialized the net, but that didn't work either.

havakv (Owner) commented Mar 17, 2020

Hi! This is a good question, and sorry for not coming back to you sooner.

To me it looks like the net is reinitialised fine, but I can see that you use the same set of callbacks for every iteration. I don't know what your callbacks list contains, but I assume it might include an EarlyStopping object. If that is the case, you need to reinitialise it too, meaning you end up with something like this:

for lr in [0.01, 0.1, 0.001]:
    net = tt.practical.MLPVanilla(in_features, num_nodes, out_features, batch_norm,
                                  dropout, output_bias=output_bias)
    model = CoxPH(net, tt.optim.Adam)
    model.optimizer.set_lr(lr)
    callbacks = [tt.callbacks.EarlyStopping()]  # fresh EarlyStopping for every run
    log = model.fit(x_train, y_train, batch_size, epochs, callbacks, verbose=True,
                    val_data=val, val_batch_size=batch_size)

Does this solve the issue?
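
For reference, a fuller grid-search loop along these lines is sketched below. It keeps the model with the lowest validation loss across the grid. The log.to_pandas() call used to read off the validation loss is an assumption based on the pycox/torchtuples tutorials, so adapt it if your version exposes the training log differently.

# Grid-search sketch, assuming x_train, y_train, val, in_features, num_nodes,
# out_features, batch_norm, dropout, output_bias, batch_size and epochs are
# already defined as in the snippets above.
best_lr, best_val_loss, best_model = None, float('inf'), None

for lr in [0.01, 0.1, 0.001]:
    # Rebuild everything that carries state: the net, the model/optimizer,
    # and the callbacks list.
    net = tt.practical.MLPVanilla(in_features, num_nodes, out_features, batch_norm,
                                  dropout, output_bias=output_bias)
    model = CoxPH(net, tt.optim.Adam)
    model.optimizer.set_lr(lr)
    callbacks = [tt.callbacks.EarlyStopping()]

    log = model.fit(x_train, y_train, batch_size, epochs, callbacks, verbose=False,
                    val_data=val, val_batch_size=batch_size)

    # Assumption: the returned log can be read as a pandas DataFrame with a
    # 'val_loss' column, as in the pycox tutorials.
    val_loss = log.to_pandas()['val_loss'].min()
    if val_loss < best_val_loss:
        best_lr, best_val_loss, best_model = lr, val_loss, model

print(f'best lr: {best_lr}, best val loss: {best_val_loss:.4f}')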

yuj23 (Author) commented Mar 18, 2020

Yes, that solves the issue!
Thank you.

yuj23 closed this as completed Mar 18, 2020