
Finding the exact testing procedure executed for each metric #60

Closed
rgiessmann opened this issue Nov 14, 2019 · 3 comments
@rgiessmann

Dear team,

I really enjoy the FAIRevaluator and think you have chosen a great architecture to implement testing of current and upcoming FAIRmetrics.

I understood that I can look into the markdown files for human-readable descriptions of the metrics; what I could not figure out is where to find the source code of the metric evaluators themselves. Are there links to the source code somewhere, or could we integrate such links into the human- and machine-readable descriptions?

Thanks in advance for taking the time to answer this newbie question! If my question is valid, it would be great to include the answer in the README as well (I am happy to contribute!).

Best,
Robert

@markwilkinson (Member)

Thank you for the kind words. The code for the tests lives here:

https://github.com/FAIRMetrics/Metrics/tree/master/MetricsEvaluatorCode/Ruby/metrictests

AFAIK, it is up to date with the deployed versions of the tests.

Comments/criticisms of the tests are VERY welcome! If you think a test is unfair (or unFAIR), please suggest an alternative approach, or build a new Maturity Indicator (MI) and a new test. The objective is for this to be a community effort.

Cheers!

@rgiessmann (Author)

Hi Mark,

I see; I hadn't recognized how the repository is organized. How open are you to unsolicited contributions? Shall I just fork the repository and open a pull request afterwards?

I will think about a way to implement this back-linking to the tests when time allows, and I am happy to contribute more to this community effort. We are actively applying the FAIRevaluator in the IMI project FAIRplus, so I expect there will indeed be a lot of feedback coming! :)

Best,
Robert

@markwilkinson (Member)

Yes, please fork and do a pull request.
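For reference, the usual GitHub fork-and-pull-request flow looks roughly like the sketch below; YOUR-USERNAME and the branch name are placeholders, and which files you edit for the back-linking is entirely up to you.

```sh
# Sketch of the standard fork-and-pull-request workflow (placeholders marked).
git clone https://github.com/YOUR-USERNAME/Metrics.git   # your fork of FAIRMetrics/Metrics
cd Metrics
git checkout -b add-test-source-links                     # hypothetical branch name

# ... edit the metric descriptions to link back to
#     MetricsEvaluatorCode/Ruby/metrictests ...

git commit -am "Link metric descriptions to their test source code"
git push origin add-test-source-links
# Then open a pull request against FAIRMetrics/Metrics on GitHub.
```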
We're hoping to find a formal, arms-length mechanism for evaluating the tests (both our own and community-submitted ones) and "blessing" them... this is TBD...
Cheers!
