Unsupervised feature construction and knowledge extraction from genome-wide assays of breast cancer with denoising autoencoders. #6
Comments
As an author of this paper, I am not the best person to review it. Would love to have at least one non-author pitch in. Maybe @hussius since I know you wrote a blog post about it.
Biology:
Computational Methods:
Results: Example of unsupervised method, analysis of transcriptional regulation, potentially some discussion around pathway activities that may be relevant to the review.
I can give it a shot.
Does a single-layer model fit into a review on "deep" learning?
@michaelmhoffman : I am not necessarily a proponent of building models that are sufficiently complex to trigger some sort of arbitrary "deep" nomenclature. I'm most interested in methods that include some strong data-driven feature construction work; in the scope of this review, probably with the added "with neural networks" constraint. The thing that I like about true deep architectures is that feature construction gets baked into the learning algorithm. The thing that I like about this "shallow learning" architecture is that a biologist can take a look at it and interpret features. I guess I'd say, personally, that if it passes the threshold of data-driven feature construction with neural networks, then I think it's the type of research that will be primed for data-intensive discoveries.
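To ground the discussion, here is a minimal numpy sketch of a single-layer denoising autoencoder of the kind the paper applies to expression data. Everything here is illustrative: the toy data, hidden-layer size, corruption rate, and training settings are assumptions for the sketch, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy expression matrix: 200 samples x 50 genes, scaled to [0, 1].
X = rng.random((200, 50))

n_genes, n_hidden = X.shape[1], 10   # hidden size is an illustrative choice
W = rng.normal(0, 0.1, (n_genes, n_hidden))
b_h = np.zeros(n_hidden)
b_v = np.zeros(n_genes)
lr, noise_p, epochs = 0.1, 0.1, 200

for _ in range(epochs):
    # Denoising corruption: randomly zero out a fraction of the inputs.
    mask = rng.random(X.shape) > noise_p
    X_noisy = X * mask

    # Encode the corrupted input, decode with tied weights.
    H = sigmoid(X_noisy @ W + b_h)
    X_hat = sigmoid(H @ W.T + b_v)

    # Backprop of squared reconstruction error through the tied weights.
    err = X_hat - X
    d_vis = err * X_hat * (1 - X_hat)
    d_hid = (d_vis @ W) * H * (1 - H)
    grad_W = X_noisy.T @ d_hid + d_vis.T @ H
    W -= lr * grad_W / X.shape[0]
    b_h -= lr * d_hid.mean(axis=0)
    b_v -= lr * d_vis.mean(axis=0)

# Each hidden node's weight vector over genes is the learned "feature"
# that a biologist could inspect, e.g. via its highest-weight genes.
top_genes = np.argsort(-np.abs(W[:, 0]))[:5]
print("highest-weight genes for node 0:", top_genes)
```

The single hidden layer is what makes the inspection at the end possible: each node maps directly onto a weighted set of genes, which is the interpretability property being argued for here.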
@cgreene Fully agree with you. One caution that I think is again not stressed enough in current reviews is that even a single-layer model should be interpreted very cautiously. Neural nets learn distributed representations, and even though individual neurons/filters may appear interpretable, they should not be overinterpreted as "this filter is a CTCF motif" like some papers do. There are often many filters that collectively capture a single predictive pattern like a motif. There are ways to re-derive these patterns. Looking at filters for an intuitive feel of what the network has learned is great. Using individual filters outside of the network is dangerous and wrong IMHO. On a side note, sorry if I'm being negatively critical of too many things :). I just feel like the use of deep nets in compbio is still in its infancy, and if we can avoid propagating suboptimal practices, we should do that through this review and papers.
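A contrived numpy toy (not from any paper cited in this thread) makes the distributed-representation point concrete: two hypothetical filters each correlate only weakly with a motif, while a downstream unit that combines them matches it exactly, so reading either filter alone as "the motif" would mislead.

```python
import numpy as np

# A predictive pattern (a "motif") can be split across filters,
# so that no single filter equals the motif.
motif = np.array([1.0, -1.0, 1.0, 1.0, -1.0, 1.0])

# Two hypothetical first-layer filters that each capture half the motif.
f1 = np.array([1.0, -1.0, 1.0, 0.0, 0.0, 0.0])
f2 = np.array([0.0, 0.0, 0.0, 1.0, -1.0, 1.0])

def match(filt, signal):
    # Normalized correlation between a filter and a signal window.
    return float(filt @ signal) / (np.linalg.norm(filt) * np.linalg.norm(signal))

print("filter 1 vs motif:", round(match(f1, motif), 2))  # weak match (~0.71)
print("filter 2 vs motif:", round(match(f2, motif), 2))  # weak match (~0.71)

# A downstream unit that sums the two filter responses recovers the
# full motif: the representation is distributed across filters.
combined = f1 + f2
print("combined vs motif:", round(match(combined, motif), 2))  # exact match (1.0)
```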
@akundaje : Definitely important not to over-interpret outside of the context of the network. No problem & totally agree on infancy. I think we need people to take the optimistic and pessimistic sides on many topics if we want to put together a solid perspective.
Yes! If the answer to our question is that the things that would need to be
I've labeled this paper for the 'study' component. It's not receiving more discussion at this point so I've closed it. We're now using 'open' papers only for items undergoing active discussion. @akundaje - maybe you could contribute a paragraph to the study section on the hazards of over-interpretation? I agree with @hussius that this is an important topic in the field.
Sure. I can help with that next week.
Awesome! @agitter : when you stub in the "study" section can you make sure there's a spot for interpretation of these models? We may instead end up putting it in our concluding/general thoughts, but that seems like a good home for now.
@cgreene Sure, I can include an interpretation subsection in 'study' for now. Soon we should have a better idea of whether all of the meta-commentary (interpretation, evaluation, pitfalls, etc.) fits in the study/treat/categorize sections or warrants a separate discussion section.
Dear Casey,
Hi @rezahay: If I recall correctly, Jie defined the specified thresholds. Then she determined, at that threshold, what the balanced accuracy would have been if that was the cut-point for a classifier. The same node + threshold then gets tested on the independent test dataset.
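A minimal sketch of the evaluation described above, with simulated data standing in for the paper's: a hidden node's activity is cut at a pre-specified threshold, balanced accuracy is computed as if that were a binary classifier, and the same node + threshold is then applied unchanged to an independent test set. The `balanced_accuracy` helper, the activities, labels, and threshold below are all hypothetical.

```python
import numpy as np

def balanced_accuracy(node_activity, labels, threshold):
    """Balanced accuracy if `threshold` on a node's activity were
    used as the cut-point of a binary classifier."""
    pred = node_activity >= threshold
    sens = np.mean(pred[labels == 1])    # true positive rate
    spec = np.mean(~pred[labels == 0])   # true negative rate
    return 0.5 * (sens + spec)

# Simulated training data: one hidden node's activity plus binary labels.
rng = np.random.default_rng(0)
train_act = np.concatenate([rng.normal(0.3, 0.1, 100), rng.normal(0.7, 0.1, 100)])
train_lab = np.concatenate([np.zeros(100), np.ones(100)]).astype(int)

threshold = 0.5  # the pre-specified cut-point for this node
print("train:", balanced_accuracy(train_act, train_lab, threshold))

# The same node + threshold is then evaluated, unchanged, on an
# independent test dataset (also simulated here).
test_act = np.concatenate([rng.normal(0.3, 0.1, 50), rng.normal(0.7, 0.1, 50)])
test_lab = np.concatenate([np.zeros(50), np.ones(50)]).astype(int)
print("test:", balanced_accuracy(test_act, test_lab, threshold))
```

Keeping the threshold fixed between training and test data is the point of the design: the independent dataset checks whether the node + cut-point generalizes rather than re-tuning it.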
Dear Casey,
Paper needs to be read carefully for relevance.
http://dx.doi.org/10.1142/9789814644730_0014