Refine our guiding question. #88

Closed

cgreene opened this issue Aug 22, 2016 · 20 comments
@cgreene
Member

cgreene commented Aug 22, 2016

Throughout our initial review of papers, our guiding question has been: What would need to be true for deep learning to transform how we categorize, study, and treat individuals to maintain or restore health?

I think we should take a bit of time to start discussing this. I expect that we will probably want to refine it (e.g., what does 'transform' even mean in this context?). We should bring in examples where pertinent - for example, I might reference #55 if I wanted to discuss the role that new in silico drug development strategies could play in this transformation.

@cgreene
Member Author

cgreene commented Aug 22, 2016

@agitter @gwaygenomics @brettbj @michaelmhoffman @sw1 @akundaje
Trying to tag everyone who has fully reviewed at least one paper thus far, to make sure they get a chance to participate in the refinement phase.

@agitter
Collaborator

agitter commented Aug 23, 2016

A lower bar than 'transform' would be to ask what it would take for the practitioners who categorize, study, and treat individuals to use or benefit from deep learning approaches. The answer could involve the quality of predictions as well as the accessibility and interpretability of the models.

Working backwards, we can also think about what questions are useful for organizing the papers we want to discuss. As @gwaygenomics said (#87), we want to avoid reproducing existing reviews, which could be harder if we group papers by 'imaging', 'EHR', 'genomics', etc. Two immediate thoughts:

@hussius

hussius commented Aug 23, 2016

Jumping in here although I haven't contributed to the reviewing. I agree that it would be desirable not to group according to imaging/genomics/etc. Both your suggested groupings sound attractive to me. On first reading, I slightly preferred the second because, as you pointed out, the first could turn into a too-narrowly ML-focused paper (although that would be interesting for some readers). However, the problem with the second is that it could turn out to be difficult to judge how relevant the papers really are to treating individuals - I imagine there is often quite a gap between the relevance authors claim and the relevance in practice. This may not be a big problem, but it is worth thinking about if we are aiming at a more balanced review, as discussed elsewhere among the issues.

@cgreene
Member Author

cgreene commented Aug 23, 2016

I really like the idea of the second if we commit to arguing both sides of the issue for the papers that we particularly highlight. That is, we don't take the authors at their word, but we include evidence on both sides. It would be a bit less traditional, but I think a somewhat more valuable review.

I have no interest in writing a cheerleading review for the field. If, at the end of this, we think deep learning provides only an incremental advance in most areas, then I want us to feel comfortable saying that.


@gwaybio
Contributor

gwaybio commented Aug 23, 2016

I also think @agitter's second grouping is a good idea. To expand upon it a bit:

I think the guiding question remains strong:

What would need to be true for deep learning to transform how we categorize, study, and treat individuals to maintain or restore health?

Let's break it up:

  1. What would need to be true for deep learning to transform how we:
    1. Categorize - How can deep learning improve categorization?
      • This is where we talk about EHRs and unsupervised learning in genomics.
    2. Study - How can deep learning improve how we study human disease?
    3. Treat - How can deep learning improve how we treat individuals?
      1. To Maintain Health
        • Points to virtual screening papers.
      2. To Restore Health
        • Points to drug prediction papers, unsupervised approaches to cluster "hidden responders", etc.

If our review were structured this way, we may also need to touch upon deep learning frontiers - how current algorithms in other fields (vision, machine translation, text, etc.) are pushing the boundaries of domain-specific performance. Biology has already benefited from adopting state-of-the-art approaches, which are themselves quickly being surpassed in their original fields. Are these performance gains specific to the field of study? This is where we can talk about how deep learning in biology is different from (and often more complicated than) these other "traditional" tasks.

What makes deep learning in biology challenging: other reviews have covered this in detail, but we still need to discuss black-box models, imbalanced classes, and impure gold standards. Then we can talk about how current papers are addressing these challenges.
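
As one concrete illustration of the imbalanced-classes point, here is a minimal sketch of a common mitigation (class weighting), assuming scikit-learn; the data are synthetic placeholders, not from any paper discussed here:

```python
# Minimal sketch: class weighting for imbalanced labels (assumes scikit-learn).
# Synthetic placeholder data: ~5% positives, a common imbalance in biology.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (rng.random(1000) < 0.05).astype(int)

# "balanced" reweights each class inversely to its frequency,
# so the rare positives get a much larger weight.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], weights)))

clf = LogisticRegression(class_weight="balanced").fit(X, y)
```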

Also @cgreene said:

I have no interest in writing a cheerleading review for the field. If, at the end of this, we think deep learning provides only an incremental advance in most areas then I want us to feel comfortable saying that.

I agree 100% with this and am finding myself more in the "incremental advance" camp these days. I think the challenges are tough, and good enough solutions don't currently exist. Unless they are overcome, deep learning will have the most impact on the "transform how we study" part above and less impact on the "transform how we categorize/treat" parts.

@cangermueller

Hi guys,

I am one of the authors of #47, and I appreciate your efforts to review deep learning for biology. However, I think there are already plenty of reviews. I know of the following:

In the end, there should not be more reviews than publications ;-). Instead of an additional review, it might be more useful for the community to provide a central place with references, pretrained models, and datasets. I am thinking about a website like www.deeplearning.bio which provides reading lists for different categories (e.g. genomics, proteomics, drug discovery, ...), pretrained models (e.g. CNNs and RNNs pretrained on DNA or protein sequences), and datasets (e.g. labeled sequence fragments). Interested people could then easily download models and datasets and start to play around.

If you really want to write an article, you could write an arXiv paper about the website itself.

Best,
Christof

@akundaje
Contributor

I largely agree with Christof that there are actually too many reviews of deep learning for biology and not that many papers, so it's not very clear that there is a huge missing component across all these reviews. But this open-source review effort has been really useful.

@cangermueller We are working on a hands-on primer focused on simple convolutional supervised models on genomic sequence for starters here: http://kundajelab.github.io/dragonn/ (paper in submission). In it we have also proposed the idea of a model zoo for the community: well-defined, searchable, interoperable models and so on. A stub is here: https://github.com/kundajelab/dragonn/wiki/Model-Zoo. We plan to use it for models from our lab and eventually open it up to others. Just wanted to make you aware of it so that we can potentially collaborate on something like this rather than replicate or compete. The data and model zoo should really be a community effort that makes these models replicable and interoperable (which is currently very difficult to do).

@cgreene
Member Author

cgreene commented Aug 30, 2016

I agree with both of you that there are a number of broad reviews and surveys. The invitation that we received was for something that's more on the border of a perspective and a review. This gives us the opportunity to address some pointed questions that I think should be discussed.

Right now there are numerous papers presenting, essentially, "deep learning for X." Is deep learning so transformational that this is interesting in and of itself? Or, alternatively, are there certain areas where we expect the approaches to be better suited? Are there examples in biology where we have seen or expect transformative approaches - things that we couldn't have done without these techniques?

I've seen a large number of broad surveys (including #47, which is a very nice article), but I haven't seen an article that primarily wrestles with these tough questions. Since we have up to 12,000 words, I wonder if we can deeply address these topics and add value where these items are lacking.


@cangermueller

@akundaje, DragoNN looks great and is almost exactly what I meant! I do not want to compete; it was just an idea. I am looking forward to your paper.

- Christof

@michaelmhoffman
Contributor

Right now there are numerous papers presenting, essentially, "deep learning for X." Is deep learning so transformational that this is interesting in and of itself?

Not necessarily. But it seems like a very promising technique.

It might be instructive to quantitatively look at the sorts of gains from deep learning found in, for example, image classification, versus the previous state of the art, and compare them to the gains found in some genomics problem. Have researchers spent even a fraction of the time on deep learning problems in biology that they have on image classification? The success of "deep learning" in other machine learning problems is not just due to an inherent advantage of making neural networks deep. The leading lights of deep learning must have spent many person-years trying different model architectures, training strategies, and so on.
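
For concreteness, one widely cited data point for exactly this comparison is ILSVRC 2012, where AlexNet reached a top-5 error of 15.3% versus 26.2% for the second-best entry (Krizhevsky et al., 2012). A trivial back-of-the-envelope calculation:

```python
# ILSVRC 2012 top-5 error: deep CNN (AlexNet) vs. the second-best entry,
# per Krizhevsky et al. (2012).
alexnet_err = 0.153
second_best_err = 0.262

relative_reduction = (second_best_err - alexnet_err) / second_best_err
print(f"relative top-5 error reduction: {relative_reduction:.0%}")  # ~42%
```

Whether any genomics benchmark has seen a relative gain of that size would be a useful question for the review to answer.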

@cgreene
Member Author

cgreene commented Sep 15, 2016

In #95, @w9 brings up data challenges that biological domains have to deal with but that are not necessarily encountered in the areas where deep learning has had the most success. This is not something that we explicitly discussed above, but I agree that it is something we should address. We might want to bring it up in the introduction and then tackle it within each of the study, categorize, and treat sections - perhaps taking care to highlight how especially successful methods have tackled these challenges.

@agitter
Collaborator

agitter commented Sep 16, 2016

A question I have about the scope is who the intended audience is. For example, the review #47 assumes very little of the audience and introduces supervised learning, the basics of neural networks, practical considerations like optimization, etc. Is this review aimed at that same audience? Or someone who has read some of the primary literature or a very good broad review and now wants to learn more? Or someone familiar with deep learning but not the biomedical applications and challenges?

It could be worthwhile for us to catalog topics that are well-covered in existing reviews that we should intentionally avoid. I still haven't read all of the reviews listed above or in our issues.

@cgreene
Member Author

cgreene commented Sep 16, 2016

@agitter : I think we need to be broadly accessible - no assumptions about the audience. Here's the description of a general review. I do think we should plan to explain what we need - e.g., the intuition behind the basics of neural networks.

"Reviews: Review articles should be around 8000 words, but there is some scope for flexibility. They should aim to interest communities working at the physical sciences/life sciences interface and should cover the latest developments in an area of cross-disciplinary research. These articles should put such research in a wider context and be written in a style that will make them accessible to readers in a wide range of disciplines. Reviews will normally be published by invitation, although we are keen to receive proposals for prospective articles from authors. Complete literature surveys are not encouraged."

My understanding of the invited headline review is that we can have ~4000 more words and a bit more freedom to provide perspective.

Edit - add information for authors link.

@cgreene
Member Author

cgreene commented Oct 19, 2016

In #82 we realized that we may want to lengthen "treat" to "develop treatments for" or something of the sort. Otherwise, "treat" seems like it would often be a superset of "categorize".

@agitter
Collaborator

agitter commented Oct 21, 2016

I propose we add a general section that spans study/categorize/treat to discuss things like evaluation (#109), interpretation, hardware limitations, limitations of biomedical datasets (sample size, quality, etc.), and other issues that could impact the future success of deep learning in this domain.

@cgreene
Member Author

cgreene commented Oct 24, 2016

@agitter : totally agree with a general section. Thanks!

@traversc
Contributor

Hi everyone, I'd like to propose an idea for organizing the papers in the review. Similar to @agitter's comment on "stratifying papers based on what neural networks contribute to the problem", papers could additionally be categorized by the type of neural network architecture used. From the discussions of the literature here, there seem to be several common types of neural networks in use:

  1. Fully connected NNs
  2. Convolutional neural networks
  3. Autoencoders
  4. Recurrent networks (and LSTMs)

There seem to be some themes, with different types of biological problems matching different types of architecture; minimal sketches of these four families are below.
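
Minimal sketches assuming Keras; all input shapes and layer sizes are illustrative placeholders, not drawn from any particular paper:

```python
# Minimal sketches of the four architecture families above (assumes Keras).
from tensorflow import keras
from tensorflow.keras import layers

# 1. Fully connected NN: flat feature vector in, binary label out
dense = keras.Sequential([
    keras.Input(shape=(500,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# 2. Convolutional NN: e.g. a 1000-bp one-hot DNA sequence (4 channels)
cnn = keras.Sequential([
    keras.Input(shape=(1000, 4)),
    layers.Conv1D(64, kernel_size=19, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),
])

# 3. Autoencoder: compress, then reconstruct, e.g. an expression profile
autoencoder = keras.Sequential([
    keras.Input(shape=(5000,)),
    layers.Dense(100, activation="relu"),      # encoder -> latent code
    layers.Dense(5000, activation="sigmoid"),  # decoder -> reconstruction
])

# 4. Recurrent network (LSTM): variable-length sequence input
rnn = keras.Sequential([
    keras.Input(shape=(None, 4)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
```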

Another idea would be to extract features from each paper (method used, type of biological problem, etc.), perform some clustering on those features, and organize the review around those clusters. It would be very meta!
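
A toy sketch of that meta-clustering idea, assuming pandas and scikit-learn; the tabulated "features" below are made-up placeholders, not a real tabulation:

```python
# Toy sketch of the "cluster the papers" idea (assumes pandas + scikit-learn).
import pandas as pd
from sklearn.cluster import KMeans

papers = pd.DataFrame({
    "architecture": ["CNN", "autoencoder", "CNN", "LSTM"],
    "problem": ["regulatory genomics", "gene expression", "imaging", "EHR"],
})

X = pd.get_dummies(papers)  # one-hot encode the categorical features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(papers.assign(cluster=labels))
```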

@agitter
Collaborator

agitter commented Oct 24, 2016

@cgreene #118 adds a general section

@traversc To keep the main focus on the guiding question at the top of this thread, we might first stratify methods by the biomedical goal (categorize, study, treat) and then discuss what different architectures lend to the problems within those sections. This is all still evolving though. @evancofer also expressed interest in a general deep learning discussion. Perhaps both of you could work on that for the Introduction? #108 adds a placeholder for this.

Formally classifying papers would be fun. It would also be labor-intensive!

@traversc
Contributor

Hi @agitter, I started tabulating (#119) the papers to get an idea of the types of biological problems people are interested in. I'll categorize the papers and then write a short paragraph or two listing/explaining the types of biological problems in the current literature. Is that OK?

@agitter
Collaborator

agitter commented Oct 27, 2016

@traversc Let's move the discussion about types of biological problems to #121.
