Figure: Prior Knowledge #107
Comments
@dvenprasad I am including here the background info for this figure, including any appropriate links I came across. I am also going to quote what I think are the most relevant passages from the manuscript here for your reference. The relevant section of the manuscript covers the following topics:
This section of the manuscript is divided into two headings: Knowledge graphs and Transfer, multitask, and few-shot learning. I think the main takeaways are somewhat covered above, but I also drew something up for each of those sections that I will include below. These drawings are not necessarily intended to guide development of this figure, but instead to communicate the takeaways.

I'll post over on #108 shortly, but did want to note that I'm not sure we need both this figure and the "putting it all together" figure tracked on #108, which would cover two specific studies using transfer learning in rare diseases (DeepProfile and MultiPLIER) that unify some of the other concepts introduced in the manuscript. As a result, I'm not totally sure what the main takeaway message is yet. For context, I'm using a "database" as my representation of a model here because that's consistent with the tentative sketch of the statistical technique figure in #106 (comment).

Knowledge graphs

I'm worried that this is putting the cart before the horse (the model before what the model is supposed to be doing) ☝️ in its current form.

Transfer, multitask, and few-shot learning

Hopefully this figure makes it somewhat clear why we would put these approaches under the same header! What I didn't include was information about supervised vs. unsupervised tasks, but I think that might muddy things a bit 💭 I'm also very wary of including anything in this figure that implicitly references a specific neural network architecture.
I think these figures are looking great. One comment about similarity metrics: they are often (maybe always?) on a 0-to-1 scale, and they represent either similarity or distance, not both (e.g., I don't think the center should be the "origin" for the similarity bars). You could also use a stylized heatmap representation of distance.

Also, thinking about these two concepts (transfer learning and few-shot learning), I think a key distinction we should highlight in this figure is that transfer learning leverages knowledge from large, complex datasets (which may not include the disease you're studying at all) to perform a prediction or related task, while few-shot or one-shot learning uses the one to few examples of the disease(s) you are studying to perform a prediction or related task.
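The point about similarity and distance sharing a 0-to-1 scale can be made concrete with a small sketch. Cosine similarity is just one choice of metric here, and the expression-profile vectors are hypothetical, not drawn from the manuscript; for non-negative vectors, cosine similarity lands in [0, 1] and its complement serves as a distance on the same scale.

```python
# Minimal sketch: similarity vs. distance on a shared 0-to-1 scale.
# The profile vectors below are hypothetical stand-ins.
import math

def cosine_similarity(a, b):
    """Cosine similarity; lies in [0, 1] when all entries are non-negative."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

profile_a = [0.9, 0.1, 0.4]
profile_b = [0.8, 0.2, 0.5]

sim = cosine_similarity(profile_a, profile_b)
dist = 1.0 - sim  # complementary distance, also in [0, 1]

print(f"similarity: {sim:.3f}, distance: {dist:.3f}")
```

This is why a bar for similarity and a bar for distance shouldn't share a center "origin": each quantity already runs from 0 to 1 on its own, and one is simply the complement of the other.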
I think some of the imagery for few-shot learning that I've found helpful usually gives you some kind of intuition about why this is a possible strategy. (Granted, it helps that they are from the natural image domain!) Here are some examples:
This one is tricky because we don't want to talk about architectures, etc. – I'm wondering if we even need to get into the part about similarity? I'm not sure it's necessary. I also know we're trying to keep things at a high level of abstraction in many cases, but is there a place for having these figures be a little more specific (e.g., we use some kind of representation of medical images)?
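The distinction raised upthread – learning from large, complex datasets versus from one to few labeled examples – can be sketched without referencing any specific architecture. The nearest-centroid ("prototype") scheme below is one standard few-shot approach, not the specific method under discussion, and the `embed` function is a hypothetical stand-in for a representation transferred from a model trained on a large, unrelated dataset.

```python
# Minimal sketch of few-shot classification by nearest centroid,
# assuming a pretrained (transferred) embedding is available.

def embed(sample):
    # Hypothetical fixed embedding: here, just normalize the raw features.
    # In practice this would come from a model trained on a large dataset
    # that may not contain the rare disease at all.
    total = sum(sample)
    return [x / total for x in sample]

def centroid(embeddings):
    n = len(embeddings)
    return [sum(e[i] for e in embeddings) / n for i in range(len(embeddings[0]))]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# A couple of labeled examples per rare-disease class (the "few shots").
support = {
    "disease_A": [[5, 1, 1], [6, 1, 2]],
    "disease_B": [[1, 5, 5], [1, 6, 4]],
}
prototypes = {label: centroid([embed(s) for s in samples])
              for label, samples in support.items()}

def classify(sample):
    e = embed(sample)
    return min(prototypes, key=lambda label: squared_distance(e, prototypes[label]))

print(classify([5, 1, 2]))  # nearest to disease_A's prototype
```

The transfer step does the heavy lifting (a good embedding makes same-class samples land close together), which is exactly why only a handful of labeled disease examples suffice – and why the two ideas sit naturally under the same figure heading.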
Transfer Learning
Few-shot learning
Updated figures based on Monday's call.

Transfer Learning

Few-Shot Learning
It'd be grand to get a figure on how prior knowledge/data can be useful, especially for rare diseases.