
Transform tf_robust to tensorgraph #1086

Merged
Merged 15 commits into deepchem:master from miaecle:tf_robust on Feb 17, 2018

Conversation

@miaecle (Contributor) commented Feb 12, 2018

Working on implementing TensorGraph-style versions of the robust multitask network and the progressive network.
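For orientation, here is a minimal sketch of the kind of TensorGraph-style model being ported: a shared trunk plus per-task bypass layers, written against the deepchem 2.x tensorgraph API. This is not the PR's actual RobustMultitask implementation; the layer widths, the Concat wiring, and the weighted L2Loss call (mirroring the call visible in the diff excerpt further down) are illustrative, and exact constructor signatures may differ between deepchem versions.

import numpy as np
import tensorflow as tf
import deepchem as dc
from deepchem.models.tensorgraph.tensor_graph import TensorGraph
from deepchem.models.tensorgraph.layers import (Feature, Label, Weights,
                                                Dense, Concat, L2Loss)

n_tasks, n_features = 3, 128

tg = TensorGraph(batch_size=50)
features = Feature(shape=(None, n_features))
labels = Label(shape=(None, n_tasks))
weights = Weights(shape=(None, n_tasks))

# Shared representation used by every task.
shared = Dense(out_channels=64, activation_fn=tf.nn.relu, in_layers=[features])

task_outputs = []
for task in range(n_tasks):
  # Per-task bypass layer reads the raw features directly and is combined
  # with the shared trunk before the per-task prediction head.
  bypass = Dense(out_channels=16, activation_fn=tf.nn.relu, in_layers=[features])
  combined = Concat(in_layers=[shared, bypass], axis=1)
  task_outputs.append(Dense(out_channels=1, in_layers=[combined]))

output = Concat(in_layers=task_outputs, axis=1)
tg.add_output(output)
# Weighted L2 loss across tasks, mirroring the L2Loss call in the diff below.
tg.set_loss(L2Loss(in_layers=[labels, output, weights]))

# Exercise the graph on random data.
X, y = np.random.rand(100, n_features), np.random.rand(100, n_tasks)
tg.fit(dc.data.NumpyDataset(X, y), nb_epoch=2)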

@rbharath (Member) commented:

@miaecle Awesome to see this! Just merged in #1082 which removes the tensorflow_models folder. You should be able to fix the merge conflict by moving over your models to the tensorgraph folder. Sorry for the extra work.

@miaecle changed the title from "[WIP] Transform tf_robust and progressive network to tensorgraph" to "Transform tf_robust to tensorgraph" on Feb 14, 2018
@miaecle (Contributor, Author) commented Feb 14, 2018

@rbharath No problem! Just got it fixed. The progressive network still needs some work, so let's merge the robust multitask model first.

@@ -33,4 +33,4 @@
 from deepchem.molnet.dnasim import simulate_single_motif_detection

 from deepchem.molnet.run_benchmark import run_benchmark
-from deepchem.molnet.run_benchmark_low_data import run_benchmark_low_data
+#from deepchem.molnet.run_benchmark_low_data import run_benchmark_low_data
@miaecle (Contributor, Author) commented on the diff:

Since the low data models were moved to contrib, I temporarily commented out these parts in molnet; we can restore them once the low data models are back.

L2Loss(in_layers=[task_label, layer, task_weight]))
self.create_submodel(
layers=task_layers, loss=weighted_loss, optimizer=None)
# Weight decay not activated
@miaecle (Contributor, Author) commented on the diff:

Weight decay seems to be deactivated in the original implementation. I couldn't find a good way to fold it into this loss as well; any ideas?

@rbharath (Member) replied:

I think it should be feasible to just add a new loss term manually if needed. That said, I don't believe weight decay made a big difference, so I'm fine proceeding without it.
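In case it helps, a sketch of that suggestion in plain TensorFlow 1.x: compute an explicit L2 penalty over the trainable weights and add it to the existing task loss. The function name add_weight_decay and the coefficient are illustrative, and hooking the result into the TensorGraph loss (for example through a small custom layer) is not shown here.

import tensorflow as tf

def add_weight_decay(task_loss, weight_decay=1e-4):
  # Explicit L2 penalty over trainable (non-bias) variables, added as an
  # extra loss term instead of relying on a built-in weight-decay option.
  l2_terms = [tf.nn.l2_loss(v) for v in tf.trainable_variables()
              if 'bias' not in v.name.lower()]
  if not l2_terms:
    return task_loss
  return task_loss + weight_decay * tf.add_n(l2_terms)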

@@ -928,6 +928,7 @@ def get_train_op(self):
       optimizer = self.graph.optimizer
     else:
       optimizer = self.optimizer
+    # Should we keep a separate global step count for each submodel?
@miaecle (Contributor, Author) commented on the diff:

For the progressive model, using a shared global step doesn't seem natural (for example with learning rate decay); defining a separate step per submodel might be better.
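A sketch of that idea in plain TensorFlow 1.x (not deepchem's actual Submodel code): give each submodel its own step counter, so learning-rate decay tracks how often that particular submodel has been trained rather than a shared global count. The function name, base_lr, and the decay schedule are illustrative.

import tensorflow as tf

def make_submodel_train_op(loss, name, base_lr=1e-3):
  # One non-trainable step counter per submodel instead of a shared global step.
  step = tf.Variable(0, trainable=False, name='%s_global_step' % name)
  lr = tf.train.exponential_decay(
      base_lr, step, decay_steps=1000, decay_rate=0.96, staircase=True)
  optimizer = tf.train.AdamOptimizer(learning_rate=lr)
  # minimize() increments this submodel's step each time its train op runs.
  return optimizer.minimize(loss, global_step=step)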

alpha_init_stddevs=[.02],
batch_size=n_samples)

# Fit trained model
model.fit(dataset, nb_epoch=20)
model.save()
@miaecle (Contributor, Author) commented on the diff:

I should mention that the progressive model currently cannot be saved. I'm still digging into it, but I suspect the submodel setup might be the cause.

@rbharath (Member) replied:

@miaecle Would you mind raising a new issue documenting the error? This sounds like a general problem we should fix. We can do this in a different PR though.

@coveralls commented Feb 16, 2018

Coverage increased (+0.6%) to 80.388% when pulling eda8ed1 on miaecle:tf_robust into 6f86377 on deepchem:master.

@rbharath (Member) commented:

@miaecle This is great work! Thanks for taking on the conversion task :)

I added a couple comments, but I think these are all fine to address in different PRs.

LGTM

@rbharath (Member) left a review:

Realized I forgot to post the review...

@rbharath merged commit 835cc42 into deepchem:master on Feb 17, 2018.
@miaecle deleted the tf_robust branch on April 10, 2018, 18:22.