(Metaresearch Evaluation Repository to Identify Trustworthy Science)
We're building a tool to help people find the ratings that researchers have given to articles, so that we can improve the visibility of those evaluations, support meta-research, and foster innovation in the scholarly evaluation space.
Thanks for dropping by our open-source project! This project is led by Cooper Smout and Dawn Holford, with support from a passionate team of volunteer contributors (see Credits in the Readme). We hope you'll consider contributing too!
Please read our Code of Conduct before contributing and see our Readme and Roadmap to find out more about the project.
Most of the project documents (e.g., To-do board, document drafts, team details) are stored in our Notion workspace. Please contact the project leads to request edit access.
You can also find us on Slack (#merits channel in the eLife Innovation Sprint 2021 workspace).
Adding ratings to the Airtable database:
- At present, only the project leads can edit the Airtable (this will change as the project evolves), so please get in touch if you're interested
Code changes:
- Clone the repository, make the required changes, and create a pull request
- Comment on our Readme.md — does it make sense?
- Any issues marked 'Good first issue'
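The clone-change-pull-request workflow above can be sketched as follows. This is a minimal illustration, not project policy: the branch name, file change, and commit message are invented for the example, and it initialises a throwaway local repository rather than cloning over the network (in practice you would `git clone` this project's repository and push your branch before opening a pull request on GitHub).

```shell
# Work in a throwaway directory; in the real workflow this step would be:
#   git clone <url-of-this-repository> && cd <repository>
repo_dir="$(mktemp -d)"
cd "$repo_dir"
git init -q .
git config user.name "Example Contributor"     # local identity for the demo commit
git config user.email "contributor@example.org"

# Make your changes on a descriptively named branch, not on main
git checkout -q -b fix-readme-typo
echo "Corrected wording" > Readme.md           # the "required changes"
git add Readme.md
git commit -q -m "Fix typo in Readme"

# In the real workflow you would now publish the branch and open a PR:
#   git push origin fix-readme-typo
#   then open a pull request on GitHub against the main branch
```

Small, focused branches like this keep each pull request easy to review.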
- Please keep an eye out for anything that might cause problems for the project, e.g., bugs in the code, content omissions, or issues with the project's functionality or design
- Submit bugs by creating a new issue on GitHub or by contacting the project leads
- Thanks for your interest in contributing!
- Beyond GitHub, we'll be using the All Contributors bot (to be installed) to recognise all kinds of contributions
- Contributors will also be invited to co-author any papers arising from this project
Please contact either of the project leads, @coopersmout or @dlholf.