We want to help researchers get better at predicting how their work will be interpreted and presented in public spaces.
As a society we rely on scientifically derived knowledge to make decisions about almost every aspect of our lives, including many important matters of life or death, so it's important that the general public can accurately interpret scientific results. Researchers value facts and spend a lot of time and effort making sure that when they publish their work they accurately describe the results they've obtained and how they obtained them. However, they typically communicate those results in scientific language targeted largely at other researchers in their field; most people are not researchers, and most researchers work in an unrelated field. In truth, almost everybody relies heavily on interpretations of scientific results passed along by others, including friends, colleagues, teachers and the media. If the research gets misrepresented, the consequences can be dire.
To make their work more accessible, some publications and institutions encourage or require researchers to submit, alongside the research, a media release that summarises the work in "human terms". Unfortunately, there is currently no efficient way for researchers to predict how that release will be presented and (mis)interpreted in public spaces and the media (both traditional and social).
We recognised that an important part of the feedback loop between researchers and the general public was missing, so we've devised a very simple way to both fix the issue and help us learn more about this problem space. It works like this:
- Step 1: The researcher creates a media release and some short quiz-like questions that test how well readers understand the outcomes of the work.
- Step 2: We send the release and the quiz questions to volunteers.
- Step 3: We collate the quiz results and present them back to the researcher.
- Step 4: The researcher can make edits and publish the release, or return to Step 1.
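Step 3 above is the heart of the loop: turning raw quiz responses into a signal the researcher can act on. As a rough illustration only (the actual platform runs on Google Apps Script; the data shapes and function name here are hypothetical), the collation step could look like this:

```python
# Hypothetical sketch of Step 3: collating volunteer quiz responses
# into a per-question success rate. Low scores flag the parts of the
# media release that readers misunderstood.

from collections import defaultdict

def collate_results(responses, answer_key):
    """Summarise quiz responses as the fraction of correct answers per question.

    responses:  list of dicts mapping question id -> chosen answer
    answer_key: dict mapping question id -> correct answer
    """
    correct = defaultdict(int)
    for response in responses:
        for question, answer in response.items():
            if answer_key.get(question) == answer:
                correct[question] += 1
    total = len(responses)
    return {q: correct[q] / total for q in answer_key}

# Example: two volunteers, two questions.
answer_key = {"q1": "B", "q2": "A"}
responses = [{"q1": "B", "q2": "A"}, {"q1": "B", "q2": "C"}]
print(collate_results(responses, answer_key))  # {'q1': 1.0, 'q2': 0.5}
```

A summary like this makes the edit-and-retry decision in Step 4 concrete: a question most volunteers got wrong points at a passage worth rewriting.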
We've built a very simple version of the platform using a few Google apps and some sticky tape (it's a hackathon, people!), and now we need researchers and reviewers to give it a try and, hopefully, give us some useful feedback on the system itself. If you're interested in helping out by reading a little text and doing a short quiz, or if you're a researcher who's publishing shortly and would like some feedback on your media release, then please visit the sign-up page and we'll be in contact with you shortly.
Any written documents or images made as part of this project are distributed under the CC Attribution 4.0 International License. We're currently putting together code using Google Apps Script, and we will place it in the git repository under the GPL v3.0 License.
We'd love to hear why. Tweet your questions to @healthhackau using the hashtags #healthhackau and #abstract.
For helpers, we collect your personal information so that we can contact you when we have media releases ready for review. For researchers, we collect your personal information so that we can send you the results of the review. We may contact researchers or helpers to collect feedback about this project. We will not use any personal information for any other purposes. We will not disclose any personal information to any other person or organisation and we will take reasonable steps to ensure that all information we collect and use is accurate, complete, up to date and securely stored.
Right now we're trying to learn how this platform can help and how we can make it better. However, we will end up with a data set containing examples of research-oriented text that has been manually curated to improve understanding among the general public. This data might be of interest to people working in NLP. If that sounds interesting, please introduce yourself to us on Twitter.
This project was pitched as part of HealthHack Online 2020 and built by:
Team Smart Aunties
- Andy
- Arshdeep Singh
- Gergo Szabo
- Hao Gao
- Lachlan
- Michael Imelfort
- Peggy Wei