Why FAIR*? #8
Agree there is a danger of FAIRification buzzword bingo – the ontology is a nice addition to the SPAR collection to also cover reviews. So the question is: how do we make reviews FAIR? It's fairly certain most of them are not today; they are generally neither findable, nor accessible, nor interoperable, and certainly not reusable. This ontology might be needed in order to get there. https://linkedresearch.org/calls proposes using Linked Data Notifications for such replies, and the Web Annotation Data Model has motivations (such as assessing) that cover reviewing.

Which principles? OK, let's try to relate to the actual FAIR Guiding Principles:
F1 (globally unique and persistent identifiers): linked to a repository – where do reviews go and how are they retrieved? Not specified in the ontology, but in usage patterns and examples. One problem: reviews might live in more than one place, although yes, they are generally in some tracking system like EasyChair. Some journals like F1000Research and PeerJ assign DOIs to each review, e.g. https://doi.org/10.5256/f1000research.13348.r25610 or https://doi.org/10.7287/peerj-cs.132v0.2/reviews/3
F2 (rich metadata): I guess here the FAIR* Reviews ontology comes in, although a combination of schema.org, DCTerms, OA and PROV would probably do as well, if their combination was well documented.
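As a rough sketch of that combination (all identifiers below are hypothetical placeholders; only the vocabulary terms from schema.org, DCTerms and PROV are real), a review's metadata could look like this in Turtle:

```turtle
@prefix schema: <http://schema.org/> .
@prefix dct:    <http://purl.org/dc/terms/> .
@prefix prov:   <http://www.w3.org/ns/prov#> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .

# Hypothetical identifiers; the properties come from schema.org, DCTerms and PROV.
<https://example.org/reviews/42> a schema:Review ;
    schema:itemReviewed <https://doi.org/10.1234/example-article> ;   # the reviewed work
    schema:author <https://orcid.org/0000-0002-0000-0000> ;           # the reviewer
    dct:issued "2018-02-01"^^xsd:date ;
    prov:wasGeneratedBy [ a prov:Activity ;
        prov:wasAssociatedWith <https://easychair.org/> ] .           # e.g. the tracking system
```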
F3 (metadata include the identifier of the data they describe): I think in this case the identifiers of both the review and the reviewed document. Not clear from the ontology; further guidance needed.
F4 (indexed in a searchable resource): this would need some kind of repository of reviews – per venue is not sufficient. For example, I would want to collate all my reviews, while someone else wants to find all reviews of a particular article. I think it probably should be a FAIR principle that metadata can live in multiple locations.
A1/A1.1 (retrievable by a standardised, open protocol): as we all know, this is a solved problem – let's just do http/https with permalinks like https://w3id.org
A1.2 (authentication and authorisation where necessary): this would be important if we want to describe non-open reviews, e.g. where it is only accessible that you HAVE done a review, but not what it says (perhaps not even of which article). But those with access should be able to see it (e.g. members of the programme committee). Perhaps stating whether a review is open or not would be important metadata.
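One way to express that last point (a sketch; dct:accessRights and schema:Review are real vocabulary terms, while the identifiers and the literal value are assumptions):

```turtle
@prefix schema: <http://schema.org/> .
@prefix dct:    <http://purl.org/dc/terms/> .

# A metadata-only record: asserts that a (closed) review exists and who wrote it,
# without exposing the review text itself.
<https://example.org/reviews/43> a schema:Review ;
    schema:author <https://orcid.org/0000-0002-0000-0000> ;
    dct:accessRights "restricted" .   # open/closed status as explicit metadata
```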
A2 (metadata accessible even when the data are not): separation of concerns – the annotations using the FAIR* Reviews ontology must be separable from the review text itself. This just means that the ontology can be used in RDF formats other than RDFa.
I1 (formal, shared knowledge representation language): is it RDFa or JSON-LD, as in Schema.org? I think a recommendation is needed for this to be usable – "any RDF anywhere" is too vague. Not solved by the ontology.
I2 (vocabularies themselves follow FAIR principles): so the vocabularies themselves must be FAIR, hence the FAIR* Reviews ontology being open etc. I think this one is ticked already.
I3 (qualified references): not sure – is this satisfied by using the SPAR ontologies? Not sure which links you would follow.
R1 (plurality of accurate and relevant attributes): well, we'll have to review the attributes of the ontology. Not quite sure yet, as it seems to have
R1.1 (clear data usage license): not defined now, but very important – what is the license of the metadata, vs. the license of the review text, vs. the license of the reviewed manuscript?
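Keeping those three licenses distinct could look like this (a sketch; dct:license is a real DCTerms property, while all identifiers and license choices are hypothetical):

```turtle
@prefix dct: <http://purl.org/dc/terms/> .

# Separate license statements for the metadata record, the review text,
# and the reviewed manuscript (all identifiers hypothetical).
<https://example.org/reviews/44/metadata> dct:license <https://creativecommons.org/publicdomain/zero/1.0/> .
<https://example.org/reviews/44/text>     dct:license <https://creativecommons.org/licenses/by/4.0/> .
<https://doi.org/10.1234/example-article> dct:license <https://creativecommons.org/licenses/by-nc/4.0/> .
```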
R1.2 (detailed provenance): not clear from the ontology how this should be expressed. Further guidance needed.
R1.3 (community standards): well, this one will have to be judged by the "community". :)
BTW, I have a couple of open reviews listed on https://www.research.manchester.ac.uk/portal/en/researchers/stian-soilandreyes(0b55a0bb-452c-455a-8af6-fe7cc4094c83)/activities.html (hopefully ultimately linking you to my gists), making them loosely Accessible, but not Findable (e.g. as a "reply" to a publication/DOI). It would be interesting to see how these could be marked up (ignoring for the moment that GitHub Gists don't really support linked data annotations in Markdown), as your current ontology approach seems to assume that the annotations are created as part of a fixed publication-reviewing workflow like in EasyChair, and not for independent or "self-opened" reviews, or continual open reviews like in F1000Research. As for Reusable: we can look at articles that are initially rejected and then subsequently submitted elsewhere in a revised version – that would be a case for review-to-review citations, which generally do not happen today.
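A self-opened review like one of those gists could, for instance, be marked up as a Web Annotation "replying to" a DOI. A sketch, assuming the Web Annotation Vocabulary (oa:motivatedBy and the oa:assessing motivation are real WADM terms; the gist, article and ORCID identifiers are placeholders):

```turtle
@prefix oa:  <http://www.w3.org/ns/oa#> .
@prefix dct: <http://purl.org/dc/terms/> .

# Hypothetical identifiers; oa:assessing is the WADM motivation for reviews/assessments.
<https://example.org/annotations/1> a oa:Annotation ;
    oa:motivatedBy oa:assessing ;
    oa:hasBody   <https://gist.github.com/stain/0000000> ;     # the self-published review text
    oa:hasTarget <https://doi.org/10.1234/example-article> ;   # the reviewed article
    dct:creator  <https://orcid.org/0000-0002-0000-0000> .     # the reviewer
```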
@AlasdairGray I would love to answer your question, but I think @stain's analysis does that better than I could :) The vocabulary by itself does not guarantee that the generated reviews align with the FAIR data principles; additional infrastructure for hosting them as linked data, following those principles, is necessary. We are currently working on that. We wanted to release a 0.1 version to generate discussion and get feedback.
In that case, I think the name of the ontology is misleading. It would be far better to call it an academic review ontology, or something of that sort, and leave the FAIR story for your surrounding infrastructure.
@stain thanks for your discussion of the model, it is highly interesting. As you have stated throughout your comments, the ontology by itself cannot guarantee a review's FAIRness (no vocabulary can, actually). We are working on a system for that.
Yes, that is true. We discussed the naming quite a lot, but even though the ontology itself does not "FAIRify" a review, it is part of the solution. We might change the name in the future.
We were aware of them :). We found them when looking for OpenCitations reviews.
Not sure what you mean by this. One of our goals is to support self-publication of reviews, in which the author and the entity requesting the review are the same individual.
Interested in joining our beta testers team? :)
What is the connection between this ontology and the FAIR data principles?