Formal objection: Future accessibility and EME #376
Thank you for this, Philippe.
EFF raised accessibility issues repeatedly during the covenant process,
published an article enumerating use cases that were not covered by the
W3C's accessibility process over a year ago
(https://www.eff.org/deeplinks/2016/03/interoperability-and-w3c-defending-future-present)
and filed a still outstanding issue
(#376).
I confess to being frustrated by this discussion, because advocates for
DRM have repeatedly cited the EME accessibility process, EFF has
repeatedly pointed out the cases it does not cover, and there has
been no reply -- this is also what happened when I raised these issues
on this list and in the AC.
I would love to see an actual reply to these issues we raised. To
summarize them:
* every other W3C standard can be adapted to accommodate disabilities
not countenanced in the existing W3C process; thus that process
represents a floor on accessibility, not the ceiling
* EME is unique -- thanks to the existence of the world's
anticircumvention laws and the lack of a protective covenant -- in
giving firms a veto over this sort of adaptation. Because of this, the
W3C accessibility tests represent the *maximum* accessibility for EME,
not the *minimum*.
* It took us ten minutes to come up with four use-cases that the
existing W3C process doesn't cover:
* Lookahead to detect strobe effects and prevent potential seizures
* Realtime gamut shifts to accommodate idiosyncratic color-blindness
* Using machine-learning systems to add subtitles (that is, ingesting
video into a system faster than realtime and transforming it for
maximum accuracy in transcription)
* Using machine-learning systems to add descriptive tracks (see above)
The fact that a human can watch a video and annotate it is grossly
insufficient and decidedly not future-proof. UC Berkeley just took down
20,000 hours of high-quality instructional video because they couldn't
comply with an order to subtitle it -- for such a task to be tractable,
it must be possible to process video in batches, at speeds much faster
than realtime (UCB's corpus is just a crumb of the total potential video
that could benefit from such treatment).
Moreover, this is hardly a comprehensive list, nor is such a list
possible. There's a reason that the W3C's process doesn't attempt to
ensure that its standards come with every possible accessibility
accommodation baked in: such a process would be onerous and would have
no terminus, because no one can enumerate every possible idiosyncrasy of
disability, now and in the future. That's why we rely on people making
their own accommodations and do all we can to get out of their way.
EME does not permit this, for disabled users in every country in the
world except Israel, and, pending presidential signature, Portugal.
Thus we ask the W3C to adopt a covenant to protect people who adapt EME
for use by disabled people, to put EME on a level footing with all other
W3C work-product.
Thank you,
Cory
On 04/12/2017 01:09 PM, Philippe Le Hegaret wrote:
We looked into accessibility issues at
https://www.w3.org/2017/03/eme-accessibility.html
in particular into the possibility that there might be barriers to
accessing accessibility information such as captions, or that there
might be barriers to researching innovative approaches to
accessibility-related modifications of the video stream. We have not
found any such barriers in our investigation. Others have looked into
this as well, such as
https://lists.w3.org/Archives/Public/w3c-wai-gl/2017AprJun/0048.html
Additionally, note that the architecture of EME does not prevent
associating subtitles and captions independent of any rightsholders. One
can add a track element to an existing video element.
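For reference, a minimal sketch of what that quoted suggestion looks like in practice — attaching an independently produced caption file to an existing video element. The selector and the .vtt URL are hypothetical placeholders:

```ts
// Hypothetical example: side-load community-made captions onto a page's
// existing video element. The caption URL below is a placeholder.
const video = document.querySelector<HTMLVideoElement>('video');
if (video) {
  const track = document.createElement('track');
  track.kind = 'captions';
  track.label = 'Community captions (English)';
  track.srclang = 'en';
  track.src = 'https://example.org/community-captions.vtt'; // placeholder URL
  track.default = true;
  video.appendChild(track);
}
```

Note that a side-loaded track like this never touches the decrypted stream; the adaptations listed above — strobe lookahead, gamut shifts, machine transcription — all require reading or transforming the stream itself, which is the access EME withholds.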
EFF has repeatedly raised the issue of accessibility and EME, on-list, in calls and during the earlier covenant process. We reiterate these concerns, first published here (https://www.eff.org/deeplinks/2016/03/interoperability-and-w3c-defending-future-present) as a formal objection:
Media companies invest in accessible versions of their products — sometimes they're legally obliged to provide them. But people with disabilities have diverse access needs, and statutory requirements or centrally-provided dispensations barely cover the possible ways that content, including video, could be made available to a wider audience. That's one of the reasons why the W3C's other work on media standardization is so exciting. HTML5's unencrypted media extensions not only provide built-in accessibility features, they also offer the possibility of third-party programs that can transform, re-interpret, or mix original content in order to make it accessible to an audience that can't accept the default presentation methods.
To give a few examples of what the future of HTML accessibility might include:
YouTube attempts to create closed captions on the fly using speech recognition. It's not always perfect, but it's getting better every day. A smart web browser displaying video could hand over audio to be recognised locally, creating captioning for content that doesn't have it. Add to that auto-translate, and your movie gets a global audience, unlimited by language barriers.
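As a sketch of what that local-recognition path could look like: `recognizeSpeech` below is a hypothetical stand-in for the recognition model itself, while `captureStream()` (prefixed in some browsers) and the TextTrack/VTTCue APIs are standard. The capture step is exactly the kind of stream access a protected media path can refuse:

```ts
// Sketch: feed a video's audio to a hypothetical local recognizer and
// surface the result as a standard caption track.
interface Segment { start: number; end: number; text: string }

// Hypothetical recognizer -- stands in for the actual speech model.
declare function recognizeSpeech(audio: MediaStream): Promise<Segment[]>;

async function autoCaption(video: HTMLVideoElement): Promise<void> {
  // captureStream() exposes the element's media for local processing;
  // it is prefixed (mozCaptureStream) in some browsers, hence the cast.
  const stream = (video as any).captureStream() as MediaStream;
  const segments = await recognizeSpeech(stream);

  const track = video.addTextTrack('captions', 'Auto-generated', 'en');
  track.mode = 'showing';
  for (const { start, end, text } of segments) {
    track.addCue(new VTTCue(start, end, text));
  }
}
```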
While we wait for better algorithms to improve captioning, many viewers take advantage of large volunteer subbing communities that create subtitling and captioning independent of any rightsholders. Synchronizing such content with the original video is sometimes an exercise in frustration for the users of these subtitles. In the future, subbers could create webpages with JavaScript that seeks audio and video cues in existing media to correctly synchronize their unofficial subtitles on the fly (as dubbing companies like RiffTrax have had to do with their own synchronization workarounds).
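A sketch of the resynchronization step, assuming the hard part — detecting the drift from audio/video cues — is handled by a hypothetical `detectOffset` function; shifting the cue times themselves uses only standard TextTrack APIs:

```ts
// Hypothetical fingerprinting step: returns how many seconds the fan
// subtitles lag (or lead) the video actually being played.
declare function detectOffset(video: HTMLVideoElement): Promise<number>;

// Shift every cue in an unofficial subtitle track by the detected offset.
async function resyncFanSubs(video: HTMLVideoElement, track: TextTrack): Promise<void> {
  const offsetSeconds = await detectOffset(video);
  const cues = track.cues;
  if (!cues) return;
  for (let i = 0; i < cues.length; i++) {
    cues[i].startTime += offsetSeconds;
    cues[i].endTime += offsetSeconds;
  }
}
```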
Security researcher Dan Kaminsky has developed a method for transforming the color space of video, in real time, so that red-green colorblind viewers can see images with real reds and greens. The DanKam could be applied to HTML5 video to let the color blind see a fuller range of color.
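A sketch of that kind of client-side color adaptation for cleartext video — the per-pixel transform below is a crude illustrative stand-in, not Kaminsky's actual algorithm:

```ts
// Sketch: mirror each frame of a cleartext video onto a canvas and remap
// its colors for a red-green colorblind viewer. The transform is a crude
// stand-in for Kaminsky's algorithm; with an EME-protected (or cross-
// origin) source, the getImageData() call fails -- the barrier at issue.
function startColorRemap(video: HTMLVideoElement, canvas: HTMLCanvasElement): void {
  const ctx = canvas.getContext('2d');
  if (!ctx) return;

  const render = (): void => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    const px = frame.data; // RGBA bytes
    for (let i = 0; i < px.length; i += 4) {
      // Illustrative only: push red and green further apart so they
      // stay distinguishable to a deuteranomalous viewer.
      const shift = (px[i] - px[i + 1]) / 2;
      px[i] = Math.min(255, Math.max(0, px[i] + shift));
      px[i + 1] = Math.min(255, Math.max(0, px[i + 1] - shift));
    }
    ctx.putImageData(frame, 0, 0);
    requestAnimationFrame(render);
  };
  requestAnimationFrame(render);
}
```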
One in four thousand people relies on video passing "the Harding test," a method for determining whether movies contain flashing imagery that may cause harm to those suffering from photosensitive epilepsy. But the Harding test doesn't catch dangerous footage for every person with epilepsy, and not every video source is checked against it. In the future, we can envisage a website that could proactively run flash and pattern identification on incoming video and warn users or skip dangerous content.
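And a sketch of that proactive screening on a second, hidden copy of a cleartext video: seek ahead of the viewer, sample frame luminance, and flag rapid brightness swings. The thresholds are illustrative stand-ins, not the actual Harding criteria, and with an EME-protected source the pixel reads would fail:

```ts
// Sketch: screen the next stretch of a video for flashing before the
// viewer reaches it. Thresholds are illustrative, not the Harding test.
async function screenForFlashes(src: string, aheadSeconds: number): Promise<boolean> {
  const probe = document.createElement('video');
  probe.src = src;
  probe.muted = true;
  await new Promise<void>((resolve) =>
    probe.addEventListener('loadeddata', () => resolve(), { once: true }));

  const canvas = document.createElement('canvas');
  canvas.width = 64; // low resolution suffices for mean luminance
  canvas.height = 36;
  const ctx = canvas.getContext('2d');
  if (!ctx) return false;

  const samples: number[] = [];
  const step = 1 / 30; // sample ~30 frames per second of footage
  for (let t = 0; t < aheadSeconds; t += step) {
    probe.currentTime = t;
    await new Promise<void>((resolve) =>
      probe.addEventListener('seeked', () => resolve(), { once: true }));
    ctx.drawImage(probe, 0, 0, canvas.width, canvas.height);
    const px = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    let sum = 0;
    for (let i = 0; i < px.length; i += 4) {
      sum += 0.2126 * px[i] + 0.7152 * px[i + 1] + 0.0722 * px[i + 2];
    }
    samples.push(sum / (px.length / 4));
  }

  // Count large frame-to-frame brightness swings; flag if too frequent.
  let swings = 0;
  for (let i = 1; i < samples.length; i++) {
    if (Math.abs(samples[i] - samples[i - 1]) > 40) swings++;
  }
  return swings / aheadSeconds > 3;
}
```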