Implement DeepDrug model #68
Conversation
@kajocina please pass CI before we review this PR. You can either read the results in the GitHub browser or run the checks locally.
I would suggest implementing this class only to have the functionality as originally described, then considering subclassing it to add your additional functionality (adding in the context features may actually make it the same as one of the other models).
@cthoyt I am getting CI errors from code in our tox dependencies so I can't really fix those, e.g.:
I did fix the tox issues related to my commits now.
@kajocina you can try running tox where it automatically rebuilds the virtualenvs with …
@cthoyt thanks for the tip; I rebuilt the tox environments and it works now, local CI is green. I added the wrapper as you suggested, but it didn't really decrease the total number of lines of code, only the number of assignments performed.
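The wrapper discussed above can be sketched as follows. This is a minimal, hypothetical illustration of the pattern (applying one shared layer stack to both drug batches), not the actual chemicalx implementation; the names `SharedEncoder` and `_forward_molecules` are illustrative assumptions.

```python
from typing import Callable, List, Sequence


class SharedEncoder:
    """Apply the same stack of layers to the left and right inputs.

    Hypothetical sketch: in the real model the layers would be graph
    convolutions; here plain callables stand in for them.
    """

    def __init__(self, layers: Sequence[Callable[[List[int]], List[int]]]):
        self.layers = list(layers)

    def _forward_molecules(self, x: List[int]) -> List[int]:
        # One call per batch replaces one assignment per layer per batch.
        for layer in self.layers:
            x = layer(x)
        return x

    def forward(self, left: List[int], right: List[int]):
        # Weights are shared: both sides pass through identical layers.
        return self._forward_molecules(left), self._forward_molecules(right)


encoder = SharedEncoder([lambda v: [e * 2 for e in v], lambda v: [e + 1 for e in v]])
left_out, right_out = encoder.forward([1, 2], [3, 4])
# left_out == [3, 5], right_out == [7, 9]
```

As noted in the comment, this mainly reduces repeated assignments rather than total line count, but it keeps the left/right symmetry in one place.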
Codecov Report
@@ Coverage Diff @@
## main #68 +/- ##
==========================================
+ Coverage 93.87% 94.01% +0.14%
==========================================
Files 29 29
Lines 832 869 +37
==========================================
+ Hits 781 817 +36
- Misses 51 52 +1
Continue to review full report at Codecov.
The _helper functions are great.
Now that we have all of the implementations, I can see some interesting abstractions we can provide!
* WIP: model forward pass works, not tested
* added dropout and batch norm
* WIP: made DeepDrug example, not tested
* moved to using layers only, not GCN torchdrug model
* docstring
* added dropout and made context feats optional
* added DeepDrug unit test
* deepdrug self attribute fix
* docstring update
* unpack method update (when no context feats used)
* isort
* fixed test setting (context_channels)
* fixed testing without context
* black
* RST fix
* more pythonic loop + swap i to _
* removed context feat support in DeepDrug
* removed context handling from testing DeepDrug
* fixed examples DeepDrug, no context handling, decreased epochs 100->20
* removed unused import
* used a wrapper for calling the same layers on pairs of batches
* docstring fix
* Abstract process applied to left and right sides
* Apply black
* Cleanup

Co-authored-by: Charles Tapley Hoyt <cthoyt@gmail.com>
* CASTER layer implementation - only supervised training stage - input dimensionality assumed to be correct
* Apply black and reorganize
* Move loss into its own module
* Update caster.py
* Reduce diff on citation
* Implement DeepDrug model (#68)
* Add GCN-BMP (#71)
  * linting
  * GCNBMP Scatter Reduction fix
  * Using Rel Conv Layers instead of RGCN Model (avoid unnecessary sum readouts)
  * Added docstrings and fixed highway update implementation
  * Make number of relationships configurable
  * little help of black for linting
  * Cleaning up useless imports
  * Sharing attention between right and left side
  * Adding reference to GCNBMP docstring
  * Type hinting everything
  * Fixing docstring in example
  * Removing type hints in docstrings as they were added to signatures; chunked iteration of the BMP backbone for better readability
  * Adding more-itertools as a dependency
  * Using pairwise for encoder construction
  * Adding missing docstrings
  * Fixing linting and precommit hook
  * Fixing the citation back to what is in main
  * Tests, formatting, example
  * GCNBMP
  * Cleanup
* Implement DeepDDI model (#63)
  * update: Add deepddi model
  * update: Add deepddi examples
  * update: Add deepddi test case
  * Style: deepddi model
  * Style: deepddi_examples.py
  * Update deepddi.py
* CASTER review fixes
* flake8 fixes
* CASTER: typing fix

Co-authored-by: Andriy Nikolov <kgsq682@astrazeneca.net>
Co-authored-by: Charles Tapley Hoyt <cthoyt@gmail.com>
Co-authored-by: Piotr Grabowski <3966940+kajocina@users.noreply.github.com>
Co-authored-by: Michaël Ughetto <michael.ughetto@astrazeneca.com>
Co-authored-by: kcvc236 <kcvc236@seskscpg057.prim.scp>
Co-authored-by: Rozemberczki <kmdb028@astrazeneca.net>
Co-authored-by: kcvc236 <kcvc236@seskscpg059.prim.scp>
Co-authored-by: walter <32014404+hzcheney@users.noreply.github.com>
Closes #14
Added the DeepDrug model, following the original approach as closely as possible. The paper didn't use any context features, but I implemented the model so that it can be used both with and without them. I ran the model with and without context features on DrugCombDB and reached around 0.76 AUROC (omitting the context features led to a minor drop in AUROC).
Provided an example that runs the model and a unit test that covers both the context and no-context modes of the model.
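The shape of the model described above can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the actual chemicalx implementation: shared graph-convolution encoders applied to both drugs, mean pooling into graph embeddings, and an optional context vector concatenated before a sigmoid prediction head. All dimensions, weights, and the identity adjacency are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, x, weight):
    # One graph-convolution step: neighborhood aggregation, projection, ReLU.
    return np.maximum(adj @ x @ weight, 0.0)

def encode(adj, x, weights):
    # Shared encoder: the same weights are used for the left and right drug.
    for w in weights:
        x = gcn_layer(adj, x, w)
    return x.mean(axis=0)  # mean-pool node features into a graph embedding

n_nodes, in_dim, hidden, context_dim = 5, 8, 16, 4
weights = [rng.normal(size=(in_dim, hidden)), rng.normal(size=(hidden, hidden))]
head = rng.normal(size=(2 * hidden + context_dim,))

# Identity adjacency (self-loops only) keeps the sketch simple.
adj_left, adj_right = np.eye(n_nodes), np.eye(n_nodes)
x_left = rng.normal(size=(n_nodes, in_dim))
x_right = rng.normal(size=(n_nodes, in_dim))
context = rng.normal(size=(context_dim,))  # optional: omit for no-context mode

h = np.concatenate([encode(adj_left, x_left, weights),
                    encode(adj_right, x_right, weights),
                    context])
score = 1.0 / (1.0 + np.exp(-(h @ head)))  # sigmoid interaction probability
```

In the no-context mode, `context` is simply left out of the concatenation and the head is sized `2 * hidden` instead; this mirrors the two modes exercised by the unit test.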