
Report what tests have been run on a resource #12

Open
egonw opened this issue Feb 27, 2021 · 2 comments
Labels: user interface (Improvements to the user interface)



egonw commented Feb 27, 2021

Reporting of tests done

It would be really nice if the GUI would report what tests have been run on a resource, similar to what YummyData does, so that we can learn how resources can be made more FAIR.

egonw added the "user interface" label on Feb 27, 2021

vemonet commented Feb 28, 2021

Interesting! I already have a reporting system in place to report which files I failed to parse (mainly invalid RDF files).

It commits the reports of each run, in markdown, to a blank report branch:
https://github.com/vemonet/shapes-of-you/tree/report

I currently disabled the commits, because the reports were changing at each run and some could get quite big (there are so many invalid RDF files out there!), but the system works well.

This could easily be updated to report other metadata.
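Such a report generator could be quite small. A minimal sketch (all names here are hypothetical, and it assumes the parse failures are collected as a mapping from file URL to error message — the real pipeline may shape this differently):

```python
from datetime import date


def build_markdown_report(failures: dict) -> str:
    """Render a markdown report of files that failed to parse.

    `failures` maps each file URL to the parser error message
    (an assumed shape, for illustration only).
    """
    lines = [f"# Parse failures ({date.today().isoformat()})", ""]
    lines.append(f"{len(failures)} file(s) could not be parsed as RDF.")
    lines.append("")
    # One bullet per failed file, sorted for stable diffs between runs
    for url, error in sorted(failures.items()):
        lines.append(f"- `{url}`: {error}")
    return "\n".join(lines)
```

Sorting the entries keeps the committed report stable between runs, so the diff on the report branch only shows files that actually changed status.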

Ideally we would reuse an existing library to generate this report (Python would be perfect; Java or any other language would work if we can make it an executable).

If you have some direct pointers to existing tools, or even just a direct link to the code files where some service implemented it (e.g. YummyData), I could take a look.

Otherwise we would need to implement one.

The problem is that there are already multiple "FAIR validator" tools, but, as far as I know, most of them are not reusable, or even accessible (disclaimer: I host one of them, but cannot reuse it).
Which is ironic, as the core of the FAIR ecosystem does not seem FAIR itself, and cannot even be peer reviewed.

Honestly, in my opinion, it could be implemented as a simple Python script/library (one file, most probably fewer than 500 lines), and be easy for the community to improve incrementally, to make the process of validating FAIRness more FAIR.
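Just to illustrate the scale, a sketch of what the core of such a script could look like. The checks and field names below are invented for illustration and are not any established FAIR metric suite:

```python
def score_fairness(record: dict) -> dict:
    """Run a few illustrative FAIR checks on a metadata record.

    Each check is a boolean; these particular checks are
    placeholders, not a real FAIR assessment methodology.
    """
    identifier = record.get("identifier", "")
    checks = {
        # F: does the record use a resolvable persistent identifier?
        "has_persistent_identifier": identifier.startswith(
            ("https://doi.org/", "http://purl.org/")
        ),
        # R: is a license declared?
        "has_license": "license" in record,
        # I: is the metadata available in a machine-readable RDF format?
        "has_machine_readable_metadata": record.get("format")
        in {"text/turtle", "application/rdf+xml", "application/ld+json"},
    }
    return {"passed": sum(checks.values()), "total": len(checks), "checks": checks}
```

Because each check is just a named boolean function over the record, the community could add or refine checks one small pull request at a time.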


vemonet commented Mar 1, 2021

For SPARQL endpoints, YummyData clearly describes its methodology and the queries used here: http://yummydata.org/umaka-score.html

It could be reimplemented, especially since I don't take into consideration endpoints that do not support named graphs when I try to generate metadata.
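For the graph-support case, the fallback could be as simple as choosing between two query templates depending on whether the endpoint supports named graphs (a sketch; the actual queries used by the indexer may differ):

```python
def metadata_query(supports_named_graphs: bool) -> str:
    """Return a SPARQL query counting triples: scoped per named
    graph when the endpoint supports them, otherwise globally."""
    if supports_named_graphs:
        return (
            "SELECT ?g (COUNT(*) AS ?triples) "
            "WHERE { GRAPH ?g { ?s ?p ?o } } GROUP BY ?g"
        )
    # Fallback for endpoints without named-graph support:
    # count over the default graph only.
    return "SELECT (COUNT(*) AS ?triples) WHERE { ?s ?p ?o }"
```

Whether an endpoint supports named graphs could itself be probed with a cheap `ASK { GRAPH ?g { ?s ?p ?o } }` before picking the template.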
