Update /MediaReview codelist with new content from the fact-checker community #2844
This issue is being tagged as Stale due to inactivity.
Let me summarize where this is up to. For how we got here, see https://www.niemanlab.org/2020/01/is-this-video-missing-context-transformed-or-edited-this-effort-wants-to-standardize-how-we-categorize-visual-misinformation/ and https://firstdraftnews.org/latest/wapo-guide-to-manipulated-video/
Proposed Next Steps
These are some specific suggested additions/changes for schema.org, often with notes on the rough underlying requirements (comments welcome, especially from implementors). The driver is basically: "imagine using Schema.org JSON-LD to describe a media file with the codelist - what else is most likely to be useful and feasible to say, so that the data can be used to enhance ClaimReview-like use cases and other anti-misinformation scenarios?"
There are clearly rough edges here and decisions still to be made, but I feel the above sets a path to flesh out the missing details around MediaReview beyond the core codelist we published in the last release. I will work up some schema definitions in the direction outlined, for discussion and implementation feedback; a rough JSON-LD sketch follows below.
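To make the intended direction concrete, here is a rough, non-normative JSON-LD sketch. The URLs are placeholders, and property names such as mediaAuthenticityCategory, mediaItemAppearance and originalMediaLink are taken from the draft MediaReview vocabulary and the published codelist, so the exact shape may still shift as the definitions are fleshed out:
{
  "@context": "https://schema.org",
  "@type": "MediaReview",
  "url": "https://factcheck.example.org/manipulated-speech-video",
  "author": { "@type": "Organization", "name": "Example Fact-Checking Organisation" },
  "datePublished": "2021-04-29",
  "mediaAuthenticityCategory": "https://schema.org/TransformedContent",
  "originalMediaLink": "https://video.example.com/original-footage",
  "itemReviewed": {
    "@type": "MediaReviewItem",
    "mediaItemAppearance": [
      { "@type": "VideoObject", "contentUrl": "https://cdn.example.com/shared-copy.mp4" },
      { "@type": "VideoObject", "url": "https://social.example.net/posts/12345" }
    ]
  }
}
Here the "Transformed" rating is expressed via the TransformedContent code, and the MediaReviewItem groups two circulating copies of the same underlying video.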
Thanks @danbri. A few more reference documents detailing how we got to this place:
- In the fall of 2019, the Reporters' Lab and partners began working on adapting the Washington Post's taxonomy (https://www.washingtonpost.com/graphics/2019/politics/fact-checker/manipulated-video-guide/) into a proposed Schema for fact-checks of manipulated images and videos. The first draft of this effort is available with comments here: https://docs.google.com/document/d/17Ko_LrnqET03RWkWiQAXEbFJFvpIbqjnEyX3X06HwFc/edit
- Following an open feedback period, the Reporters' Lab incorporated suggestions into a second draft of the taxonomy (https://docs.google.com/document/d/1UVNzBAefuprsZVRUl6hqzaBv_ttjZYmzlK_1PpGmlho/edit). This draft was emailed to all signatories of the International Fact-Checking Network's Code of Principles on October 17, 2019, and was made available for public comment.
- We incorporated suggestions from that document into a draft Schema.org proposal (https://docs.google.com/document/d/1WHnhwFWKraxOBQnV5p4eqAY_0pSDC-eeSrJ87Uxm5nw/edit) and began to test MediaReview for a selection of fact-checks of images and videos. Our internal testing helped refine the draft of the Schema proposal, and we shared an updated version (https://docs.google.com/document/d/1jRbX2IesVQrWvKpehb8ntSMKe0D88bZp3nK8ZAjq6E4/edit) with IFCN signatories on November 26. We also re-shared this draft, seeking comment, in the IFCN Slack on December 4.
- On January 30, 2020, the Duke Reporters' Lab, the International Fact-Checking Network, and Google hosted a Fact-Checkers Community Meeting at the offices of the Washington Post. 46 people, representing 21 fact-checking outlets and 15 countries, were in attendance. We presented slides about MediaReview, asked fact-checkers to test the creation process on their own, and again asked for feedback from those in attendance.
- The Reporters' Lab began a testing process with prominent fact-checkers in the United States (FactCheck.org, PolitiFact, and the Washington Post) in April 2020. We have publicly shared their test MediaReview entries (https://docs.google.com/spreadsheets/d/1vkAPGDtGU1GpfUPt9DgrtJL9QhsoiOjfNQCvpK00XRo/edit#gid=0), now totaling 300, throughout the testing process.
- On June 1, 2020, we wrote and circulated a document summarizing the remaining development issues with MediaReview (https://docs.google.com/document/d/1KXymmPI7RKjYwHTJ6xdLKoaJkLGG_Q3suIjUBOkCA6E/edit#heading=h.qh0h9fphy4g6), including new issues we had discovered through our first phase of testing. We also proposed new Media Types for "image macro" and "audio," with new associated ratings, and circulated those in a document as well (https://docs.google.com/document/d/18h6iZb0e18e1_T2qwCPKuqyPgYiOUrrki0nwuiafA_U/edit). We published links to both of these documents on the Reporters' Lab site ("We want your feedback on the MediaReview tagging system": https://reporterslab.org/we-want-your-feedback-on-the-mediareview-tagging-system/) and published a short explainer detailing the basics of MediaReview ("What is MediaReview?": https://reporterslab.org/what-is-mediareview/).
- We again presented on MediaReview at Global Fact 7 in June 2020 (https://www.youtube.com/watch?v=ZNmnhmTpF3k&t=9s), detailing our efforts thus far and again asking for feedback on our new proposed media types and ratings and our Feedback and Discussion document. The YouTube video of that session has been viewed over 500 times by fact-checkers around the globe, and dozens participated in the live chat.
- We hosted another session on MediaReview for IFCN signatories on April 1, 2021, again seeking feedback and updating fact-checkers on our plans to further test the Schema proposal.
Joel Luther, Duke Reporters' Lab
Perfect - thanks!
A couple of updates, for collaboration and transparency. I spoke last week with Leigh Dodds (@ldodds), who is working with Full Fact and has been giving ClaimReview and related schemas some careful attention. He mentioned a few points that touch on MediaReview design issues, so I'm recording them here. Leigh may share something more carefully written elsewhere; this is partial.
The question of whether itemReviewed is repeatable relates to MediaReview. With MediaReview we are trying to be clearer that the review is of a MediaReviewItem (a containing structure for various things, e.g. versions of an image). If itemReviewed is not repeatable on ClaimReview, this pushes content elsewhere, e.g. into having lots of ClaimReviews on the same page (in which case, is there value in having FactCheckArticle as an Article type to capture that practice?). Mark mentioned that WaPo sometimes examines several claims in one go, but focuses on the lead/earliest. One way to deal with this within the current structure would be via multiple claims/appearances, but we don't seem to have a settled pattern for it yet.
We also discussed ephemeral content (Clubhouse-style audio, Fleets, etc.) and agreed that for now the focus is naturally on items that get shared, rather than on places where troubling things are said. Consequently, if MediaReview is relevant it's likely because someone has, for example, screen-captured content which would otherwise have been ephemeral. In that case originalMediaContextDescription would be a reasonable place to document that, since linking to the original doesn't make sense. (Excuse the scrappy notes, but I wanted to get something out before memory blurs further!)
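To illustrate the ephemeral-content point, a rough, hypothetical sketch (placeholder URLs; property names per the draft MediaReview vocabulary, and assuming the "missing context" rating maps to the DecontextualizedContent code): the reviewed item is a screen capture of content that has since disappeared, so there is no originalMediaLink, and originalMediaContextDescription carries the explanation instead.
{
  "@context": "https://schema.org",
  "@type": "MediaReview",
  "url": "https://factcheck.example.org/ephemeral-post-screenshot",
  "mediaAuthenticityCategory": "https://schema.org/DecontextualizedContent",
  "originalMediaContextDescription": "Screen capture of a post that appeared briefly in an ephemeral format in April 2021; the original is no longer available.",
  "itemReviewed": {
    "@type": "MediaReviewItem",
    "mediaItemAppearance": [
      { "@type": "ImageObject", "contentUrl": "https://cdn.example.com/screenshot.png" }
    ]
  }
}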
This issue is being nudged due to inactivity.
Nearby: #3162
@danbri, has there been any movement on this? As we at Duke (tagging @joelwluther) are beginning to implement tooling based on MediaReview, some of these concerns are becoming more pressing to address. Specifically, please let us know how best to proceed and what steps the community can assist with in making this happen.
This is part of #2450 but just for the codelist piece.
From Duke (Nov 2020):
MediaReview Fields and Labels (as of November 2020)
Media Type: VIDEO
RATINGS
Original: No evidence the footage has been misleadingly altered or manipulated, though it may contain false or misleading claims.
Missing Context: Presenting unaltered video in an inaccurate manner that misrepresents the footage. For example, using incorrect dates or locations, altering the transcript or sharing brief clips from a longer video to mislead viewers. (A video rated “original” can also be missing context.)
Edited: The video has been edited or rearranged. This category applies to time edits, including editing multiple videos together to alter the story being told or editing out large portions from a video.
Transformed: Part or all of the video has been manipulated to transform the footage itself. This category includes using tools like the Adobe Suite to change the speed of the video, add or remove visual elements or dub audio. Deepfakes are also a subset of transformation.
Staged: A video that has been created using actors or similarly contrived.
Satire/Parody: A video that was created as political or humorous commentary and is presented in that context. (Reshares of satire/parody content that do not include relevant context are more likely to fall under the “missing context” rating.)
OTHER FIELDS
Video URL: Link to the page containing the video, such as an article or social media post
Original Media URL: Link to the original, non-manipulated version of the video (if available)
Original Media Context: A short sentence explaining the original context if media is used out of context
Timestamp of video edit (in HH:MM:SS format)
Ending timestamp of video edit, if applicable (in HH:MM:SS format)
[Fact-checker] Article URL
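A rough, hypothetical sketch of how these video fields might be expressed in MediaReview markup, with placeholder URLs: the fact-checker's Article URL goes in url, the Video URL becomes a mediaItemAppearance, the Original Media URL maps to originalMediaLink, and the Original Media Context to originalMediaContextDescription. The "Edited" rating is assumed to correspond to the EditedOrCroppedContent code, and there is no settled property yet for the edit timestamps, so they are omitted here.
{
  "@context": "https://schema.org",
  "@type": "MediaReview",
  "url": "https://factcheck.example.org/edited-rally-video",
  "mediaAuthenticityCategory": "https://schema.org/EditedOrCroppedContent",
  "originalMediaLink": "https://video.example.com/full-rally-livestream",
  "originalMediaContextDescription": "Clip excerpted from a two-hour livestream of the full event.",
  "itemReviewed": {
    "@type": "MediaReviewItem",
    "mediaItemAppearance": [
      { "@type": "VideoObject", "url": "https://social.example.net/posts/98765" }
    ]
  }
}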
Media Type: IMAGE
RATINGS
Original: No evidence the image has been misleadingly altered or manipulated, though it may still contain false or misleading claims.
Missing Context: Presenting unaltered images in an inaccurate manner to misrepresent the image and mislead the viewer. For example, a common tactic is using an unaltered image but saying it came from a different time or place. (An image rated “original” can also be missing context.)
Cropped: Presenting a part of an image from a larger whole to mislead the viewer.
Transformed: Adding or deleting visual elements to give the image a different meaning with the intention to mislead.
Staged: An image that was created using actors or similarly contrived, such as a screenshot of a fake tweet.
Satire/Parody: An image that was created as political or humorous commentary and is presented in that context. (Reshares of satire/parody content that do not include relevant context are more likely to fall under the “missing context” rating.)
OTHER FIELDS
Image URL: Link to the page containing the image, such as an article or social media post
Original Media URL: Link to the original, non-manipulated version of the image (if available)
Original Media Context: A short sentence explaining the original context if media is used out of context
[Fact-checker] Article URL
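By analogy with the video sketch above, a hypothetical example for an image rated "Original" (assumed to map to the OriginalMediaContent code), with placeholder URLs: the image itself is unaltered, and the false claim made about it would typically be handled in an accompanying ClaimReview.
{
  "@context": "https://schema.org",
  "@type": "MediaReview",
  "url": "https://factcheck.example.org/unaltered-photo-false-claim",
  "mediaAuthenticityCategory": "https://schema.org/OriginalMediaContent",
  "itemReviewed": {
    "@type": "MediaReviewItem",
    "mediaItemAppearance": [
      { "@type": "ImageObject", "url": "https://social.example.net/posts/24680" }
    ]
  }
}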
Media Type: IMAGE WITH OVERLAID/EMBEDDED TEXT
RATINGS
Original: No evidence the image has been misleadingly altered or manipulated, though it may still contain false or misleading claims.
Missing Context: An unaltered image presented in an inaccurate manner to misrepresent the image and mislead the viewer. For example, a common tactic is using an unaltered image but saying it came from a different time or place. (An “original” image with inaccurate text would generally fall in this category.)
Cropped: Presenting a part of an image from a larger whole to mislead the viewer.
Transformed: Adding or deleting visual elements to give the image a different meaning with the intention to mislead.
Staged: An image that was created using actors or similarly contrived, such as a screenshot of a fake tweet.
Satire/Parody: An image that was created as political or humorous commentary and is presented in that context. (Reshares of satire/parody content that do not include relevant context are more likely to fall under the “missing context” rating.)
OTHER FIELDS
Image With Overlaid/Embedded Text URL: Link to the page containing the image with overlaid/embedded text, such as an article or social media post
Original Media URL: Link to the original, non-manipulated version of the image with overlaid/embedded text (if available)
Original Media Context: A short sentence explaining the original context if media is used out of context
[Fact-checker] Article URL
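A hypothetical sketch for this media type, again with placeholder URLs: a screenshot of a fabricated tweet, rated "Staged" (assumed to map to the StagedContent code).
{
  "@context": "https://schema.org",
  "@type": "MediaReview",
  "url": "https://factcheck.example.org/fabricated-tweet-screenshot",
  "mediaAuthenticityCategory": "https://schema.org/StagedContent",
  "itemReviewed": {
    "@type": "MediaReviewItem",
    "mediaItemAppearance": [
      { "@type": "ImageObject", "contentUrl": "https://cdn.example.com/fake-tweet.png" }
    ]
  }
}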
Media Type: AUDIO
RATINGS
Original: No evidence the audio has been misleadingly altered or manipulated, though it may contain false or misleading claims.
Missing Context: Unaltered audio presented in an inaccurate manner that misrepresents it. For example, using incorrect dates or locations, or sharing brief clips from a longer recording to mislead viewers. (Audio rated “original” can also be missing context.)
Edited: The audio has been edited or rearranged. This category applies to time edits, including editing multiple audio clips together to alter the story being told or editing out large portions from the recording.
Transformed: Part or all of the audio has been manipulated to alter the words or sounds, or the audio has been synthetically generated, such as to create a sound-alike voice.
Staged: Audio that has been created using actors or similarly contrived.
Satire/Parody: Audio that was created as political or humorous commentary and is presented in that context. (Reshares of satire/parody content that do not include relevant context are more likely to fall under the “missing context” rating.)
OTHER FIELDS
Audio URL: Link to the page containing the audio, such as an article or social media post
Original Media URL: Link to the original, non-manipulated version of the audio (if available)
Original Media Context: A short sentence explaining the original context if media is used out of context
Timestamp of audio edit (in HH:MM:SS format)
Ending timestamp of audio edit, if applicable (in HH:MM:SS format)
[Fact-checker] Article URL
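And a hypothetical audio sketch along the same lines (placeholder URLs; the "Satire/Parody" rating is assumed to map to the SatireOrParodyContent code). As with video, the audio edit timestamps have no obvious existing property and are omitted.
{
  "@context": "https://schema.org",
  "@type": "MediaReview",
  "url": "https://factcheck.example.org/satirical-voice-clip",
  "mediaAuthenticityCategory": "https://schema.org/SatireOrParodyContent",
  "itemReviewed": {
    "@type": "MediaReviewItem",
    "mediaItemAppearance": [
      { "@type": "AudioObject", "contentUrl": "https://cdn.example.com/satire-voice.mp3" }
    ]
  }
}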