
Annotation of inferred edges (new UI and axiom management) #407

Closed
balhoff opened this issue Feb 15, 2017 · 7 comments
Comments

@balhoff

balhoff commented Feb 15, 2017

An idea developed in the GPAD inference discussion with @ukemi and @vanaukenk: add a panel to Noctua containing a table of potential annotations (i.e., inferred direct relations between gene products and GO term instances, drawn from the list of available qualifier relations). Under the in-development GPAD export system, these wouldn't all necessarily make the cut for GPAD output, for example if evidence standards weren't yet met.

A curator may want to provide evidence directly on one of these inferred edges, if that evidence is of a different type than the evidence on the asserted edges supporting the inference. For example, if there is a chain X enables Y part_of Z, the panel would show a candidate annotation X involved_in Z. In a field in the panel, evidence could be added that applies specifically to the involved_in relation.
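The candidate annotations described above come from composing relations along a property chain (enables o part_of -> involved_in, in OWL terms). As a rough illustration of how such candidates could be derived, here is a minimal sketch; the edge tuples, relation names, and chain table are assumptions for the example, not Noctua's actual data model:

```python
# Illustrative sketch (not Noctua code): deriving candidate "shortcut"
# annotations from a property chain such as
#   enables o part_of -> involved_in

# Asserted edges in a toy model: (subject, relation, object)
edges = {
    ("GeneProductX", "enables", "ActivityY"),
    ("ActivityY", "part_of", "ProcessZ"),
}

# Property chains: (first_relation, second_relation) -> inferred_relation
chains = {
    ("enables", "part_of"): "involved_in",
}

def infer_candidates(edges, chains):
    """Compose each pair of adjacent edges through the chain table,
    yielding candidate annotations not already asserted."""
    candidates = set()
    for (s1, r1, o1) in edges:
        for (s2, r2, o2) in edges:
            if o1 == s2 and (r1, r2) in chains:
                inferred = (s1, chains[(r1, r2)], o2)
                if inferred not in edges:
                    candidates.add(inferred)
    return candidates

print(infer_candidates(edges, chains))
# -> {('GeneProductX', 'involved_in', 'ProcessZ')}
```

In practice a reasoner (e.g., over the full ontology) would compute these entailments; the sketch only shows why the panel's table of candidates falls out of the asserted structure.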

This raises questions about whether edges that have been annotated become permanent parts of the model, and whether they should be shown in the graph UI. They could be given a special annotation so that they are removed prior to reasoning, allowing the system to notice when they are no longer entailed.
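The lifecycle proposed above (tag materialized edges, strip them before each reasoner run, and flag any that are no longer re-derived) can be sketched as follows; the function names and the toy reasoner are assumptions for illustration, not actual Noctua machinery:

```python
# Sketch (assumed names, not Noctua code) of the proposed lifecycle for
# materialized edges: strip tagged edges before reasoning, then flag any
# that the reasoner no longer re-derives as stale.

def refresh_materialized(asserted, materialized, reason):
    """reason(asserted) is a stand-in for the reasoner: it returns the
    set of entailed edges. Returns (still_entailed, stale)."""
    entailed = reason(asserted)      # reason over asserted edges only
    still = {e for e in materialized if e in entailed}
    stale = materialized - still     # annotated edges no longer entailed
    return still, stale

def toy_reason(asserted):
    """Toy reasoner: one hard-coded chain rule,
    enables o part_of -> involved_in."""
    entailed = set()
    for (s1, r1, o1) in asserted:
        for (s2, r2, o2) in asserted:
            if o1 == s2 and (r1, r2) == ("enables", "part_of"):
                entailed.add((s1, "involved_in", o2))
    return entailed

asserted = {("X", "enables", "Y"), ("Y", "part_of", "Z")}
materialized = {("X", "involved_in", "Z")}
still, stale = refresh_materialized(asserted, materialized, toy_reason)
print(still, stale)
# -> {('X', 'involved_in', 'Z')} set()
```

If a supporting edge were deleted from `asserted`, the materialized edge would land in `stale`, which is the signal the system could use to alert a curator instead of silently keeping or cascading-deleting it.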

I think this idea is applicable beyond the GPAD output use case. Even when working purely in LEGO, a curator may want to put evidence on a relation that could be inferred from other assertions; this can certainly be done with the current system, but the proposed setup might make these easier to create and manage.

cc @dosumis @cmungall

@dosumis

dosumis commented Feb 15, 2017

Sounds reasonable, as long as inferred edges can easily be distinguished/filtered from the original (normalised) model. What about inferences on nodes coming from reasoning in combination with the ontology? If there are many of these, will choosing evidence for them overload curators?

@ukemi

ukemi commented Feb 15, 2017

Kimberly and I just created a hypothetical model in which two gene products enable two different functions that contribute to a process that is part of another process. For the second process, we have put two evidence statements on the part_of relation; each piece of evidence goes with one of the individual gene products. In this case the evidence is the same as the evidence on the separate part_of relations between the functions and the first process, but we think that will not always be the case.

Another possibility that Kimberly came up with was to be able to add the gene products to the individual evidence statements so they could be sorted. This is similar to the table idea, but everything is done in the pop-up box.

Here is the model:
http://noctua.berkeleybop.org/editor/graph/gomodel:586fc17a00001081

@cmungall

I'm not so keen on having GPAD leak into LEGO models in this way. I think the general idea of materializing inferred linkages is potentially OK, but these should be limited to relations that would normally be used in a LEGO model (no involved-in shortcuts). But even here we have to be careful, e.g. what happens when a supporting link is removed? If we do a cascading delete we potentially lose useful information. If we don't, we have stale information.

@kltm kltm added this to the wishlist milestone Feb 16, 2017
@ukemi

ukemi commented Feb 16, 2017

Since this is all a paradigm shift anyway, the other option would be to use a new (or maybe existing) evidence code to show that the annotation is the result of reasoning across the model. Maybe the evidence could be something like 'ILR' (inferred from logical reasoning). If we do that, maybe the reference should be the model itself. This allows us to follow the provenance and trace everything back to the original assertions. This is comforting to me because in the above model, the evidence between the two processes should actually be the evidence that the first is part of the second. If that is supported by the mutant phenotypes of the two gene products, then the evidence as we have modeled it is correct.

@dosumis

dosumis commented Feb 16, 2017

the other option would be to use a new (or maybe existing) evidence code to show that the annotation is the result of reasoning across the model. Maybe the evidence could be something like 'ILR' (inferred from logical reasoning). I think if we do that, maybe the reference should be the model itself.

It could potentially be better than that. Jim's code pulls back all edges used for a particular assertion and the evidence on those. Do you think a link to a report of this would be useful?

@ukemi

ukemi commented Feb 16, 2017

Maybe. We have to ask ourselves who uses (or will be using) evidence, and how. If it is just a bench biologist who wants to investigate why an annotation was made, I think the most user-friendly view should be presented (ideally an interactive consumer's view of the model). If we think it is going to be computational people for the most part, then a parsable report would be better. From an AGR perspective, what would we want to present to our users as the evidence for a given statement?

@kltm

kltm commented Jun 14, 2017

So... after today's call, are we still interested in pursuing this? This reads a bit like it could be made redundant by the proposed evidence hinting...

@kltm kltm closed this as completed Oct 18, 2017