Transform tf_robust to tensorgraph #1086
Conversation
@rbharath No problem! Just got it fixed. The progressive network seems like some work, so let's merge the robust multitask part first.
@@ -33,4 +33,4 @@
 from deepchem.molnet.dnasim import simulate_single_motif_detection

 from deepchem.molnet.run_benchmark import run_benchmark
-from deepchem.molnet.run_benchmark_low_data import run_benchmark_low_data
+#from deepchem.molnet.run_benchmark_low_data import run_benchmark_low_data
Since the low data models were moved to contrib, I temporarily commented out these parts in molnet. They can be modified again once the low data models are back.
          L2Loss(in_layers=[task_label, layer, task_weight]))
    self.create_submodel(
        layers=task_layers, loss=weighted_loss, optimizer=None)
    # Weight decay not activated
Weight decay seems to be deactivated in the original implementation. I could not find a good way to merge it into this function as well; any good ideas?
I think it should be feasible to just add on a new loss term manually if needed. That said, I don't believe weight decay made a big difference, so I'm fine with proceeding without it.
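Roughly something like this (a plain TF 1.x sketch outside the TensorGraph layer API; the variable, `base_loss`, and `penalty` below are illustrative stand-ins, not code from this PR):

```python
import tensorflow as tf

# Illustrative stand-ins for the graph's existing weights and loss.
w = tf.Variable(tf.random_normal([10, 2]), name="w")
base_loss = tf.reduce_mean(tf.square(tf.matmul(tf.ones([4, 10]), w)))

# Add an L2 weight decay term over all trainable variables.
penalty = 1e-3  # assumed decay strength
l2_terms = [tf.nn.l2_loss(v) for v in tf.trainable_variables()]
total_loss = base_loss + penalty * tf.add_n(l2_terms)
```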
@@ -928,6 +928,7 @@ def get_train_op(self):
       optimizer = self.graph.optimizer
     else:
       optimizer = self.optimizer
+    # Should we keep a separate global step count for each submodel?
For the progressive model, using a shared global step doesn't seem natural (for example, with learning rate decay); defining a separate step counter per submodel might be better.
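A rough sketch of what a per-submodel step counter could look like in plain TF 1.x (names and hyperparameters are illustrative, not from this PR):

```python
import tensorflow as tf

# Illustrative loss; in the real code this would be the submodel's loss.
w = tf.Variable(1.0)
loss = tf.square(w - 3.0)

# Separate step counter for this submodel, so learning rate decay only
# advances when this particular submodel is trained.
submodel_step = tf.Variable(0, trainable=False, name="submodel_step")
learning_rate = tf.train.exponential_decay(
    0.001, submodel_step, decay_steps=1000, decay_rate=0.95)
train_op = tf.train.AdamOptimizer(learning_rate).minimize(
    loss, global_step=submodel_step)
```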
        alpha_init_stddevs=[.02],
        batch_size=n_samples)

    # Fit trained model
    model.fit(dataset, nb_epoch=20)
    model.save()
Need to mention that the progressive model currently cannot be saved. I'm still trying to dig into it, but I suspect the submodel setup might be the cause.
@miaecle Would you mind raising a new issue documenting the error? This sounds like a general problem we should fix. We can do this in a different PR though.
@miaecle This is great work! Thanks for taking on the conversion task :) I added a couple of comments, but I think these are all fine to address in different PRs. LGTM
Realized I forgot to post the review...
Working on implementing the TensorGraph-style robust multitask network and progressive network.
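For context, the robust multitask idea is roughly a set of shared hidden layers plus per-task bypass layers fed directly from the input, concatenated before each task head. A toy sketch in plain TF 1.x (all layer sizes and names are illustrative, not the PR's code):

```python
import tensorflow as tf

n_features, n_tasks = 1024, 12
x = tf.placeholder(tf.float32, [None, n_features])

# Shared representation used by every task.
shared = tf.layers.dense(x, 500, activation=tf.nn.relu)

logits = []
for task in range(n_tasks):
  # Per-task bypass layer reading the raw features directly.
  bypass = tf.layers.dense(x, 100, activation=tf.nn.relu)
  merged = tf.concat([shared, bypass], axis=1)
  logits.append(tf.layers.dense(merged, 2))  # per-task output
```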