developersWG 03.27.2017

Charles LaPierre edited this page Apr 27, 2017 · 5 revisions

Developers WG Meeting 3/27/2017

Attendees

Present: Doug, Tara Courchaine, Glinda Hill, Derek Riemer, Marisa DeMeglio, John Gardner, Sina Bahram, Neil Soiffer, Charles LaPierre, Sue-Ann Ma, Jesse Greenberg, Amaya Webster, Bruce Walker, Kyle Keane

Discussion

Introductions: new members

Jesse Greenberg from the University of Colorado is a software developer at PhET, building physics and science simulations for K-12 students and beyond. He is very interested in accessibility and in making highly interactive content accessible through keyboard navigation and descriptions, but is also exploring sonification.

Marisa DeMeglio from DAISY is a software developer who has been working on reading systems and conversion tools.

Brief Recap of our Math in EPUB development 

Earlier this month Sina, Jason, Neil, George, Avneesh, and Charles got together to discuss how to put MathML into an EPUB. They came up with two approaches, one with and one without JavaScript. Without JavaScript, the content defaults to an image (PNG or SVG; potential issues with SVG still need to be worked out). With JavaScript and ARIA, the solution can either make the image and alt text primary and hide the MathML on systems that cannot handle MathML, or reverse this: make the MathML primary, hide the image from AT, and set the alt text to "" so the equation is not spoken twice on reading systems that were sometimes reading both.
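As a concrete sketch, the two orderings could be emitted like this. This is a minimal illustration, not the group's actual markup; the helper name and the offscreen CSS are assumptions.

```javascript
// Sketch: choose which representation of an equation is primary for AT.
// `renderAccessibleMath` is a hypothetical helper, not from the DIAGRAM code.
function renderAccessibleMath({ mathml, imgSrc, altText, mathmlPrimary }) {
  if (mathmlPrimary) {
    // MathML leads; the image is hidden from AT and its alt text is emptied
    // so reading systems that see both do not speak the equation twice.
    return mathml + `<img src="${imgSrc}" alt="" aria-hidden="true"/>`;
  }
  // Image and alt text lead; the MathML stays in the markup for capable
  // readers but is moved offscreen and hidden from AT elsewhere.
  return `<img src="${imgSrc}" alt="${altText}"/>` +
    '<span aria-hidden="true" style="position:absolute;left:-10000px;">' +
    mathml + '</span>';
}
```

Which branch to take could be decided at runtime by probing the reading system's MathML support.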

Derek: Will that scale to DAISY?

Sina: DAISY 3 has MathML incorporated, but there is a bit of a difference there, because EPUB is also sometimes consumed on systems that are pretty legacy. So the major thrust of this work isn't a best-in-class solution for MathML so much as not having a bad experience where HTML5, JavaScript, or MathML isn't supported. The major insight is that instead of leading with the MathML, we lead with the image and have the MathML as the fallback. There is a matrix that Charles can send the link to: we laid out all those possibilities (with ARIA, without JavaScript, with both, etc.) and annotated, for each, what the experience for the user would be.

Neil: My understanding is that there are very few DAISY 3 readers. There was one that supported MathML, but I can't recall which one. The whole push was to move away from DAISY toward EPUB, so I think people have given up doing development there.

Brief Recap of CSUN (sessions, office hours, meetings, etc.) and cool things we saw

CSUN was a big success this year, and all of our sessions were very well attended, with a lot of great questions. The DIAGRAM office hours were extremely well attended, and thanks to a large suite this year we were able to accommodate the ever-growing number of folks who dropped by to say hi and wanted to learn more about what we do and how they can get involved.

Charles: I was really excited to see a demo from SAS on data visualization using deep Data SVG and sonification with actual musical notes instead of just a computer-generated sound wave.

Neil: Is this Ed Summers' work? Do you know what's changed?

Charles: He's also going beyond normal graphs: he went into 3D heat maps, using chords to designate the z-axis, and also replaced the plain sound wave with actual notes. They did a lot of research and came up with that. Doug was with us as well.

Sina: I've been fascinated by that space for five or ten years now. Do you happen to know if SAS did any user studies that showed efficacy for anything past something as simple as a parabola or a basic graph? All the simple things, I think, have been done by Bruce Walker's team. I think they did take the sonification sandbox from Georgia Tech and that's what they incorporated, but was there any research presented on the efficacy of these techniques? And I'm sounding more cynical than I want to.

Charles: They did a lot of research, reached out to a lot of people, and worked on quickly being able to navigate the data and detect auditory differences, so users can have quicker access to immense amounts of data and be able to drill down. Other than that I don't know how much they've done with the research, but they said the work is evolving based on the research they found.

Bruce: I can't comment; I don't know what specifically Ed and his team have done in terms of usability testing, but we've continued to do evaluation in our lab and have shown that 5, 6, or 7 variable streams can be monitored, with plenty of user data. So that's a partial answer to Sina's question.

Sina: Bruce, if it's as simple as going to your publications page, would you mind sending out specific papers to the group?

Bruce: Sure.

John: My rule of thumb has always been that you can hear a graph playing and hear differences with lots of variables, but you can't tell what's going on; what you can tell is that something is different. Is that correct?

Bruce: That's correct, but we've also had plenty of success with people understanding and tracking what is going on with 3, 4, or 5 variables at a time.

Charles: Was there anything else at CSUN someone saw and wants to bring to this group's attention?

Derek: There were chemistry, physics, and circuit diagrams in SVG; Volker and John have been doing a lot of work on that. There was a session Friday morning.

Sina: The other thing I'd give a shout-out to was the affordable braille display from APH. When you combine that with other reading systems (I was talking to Peter Korn from Amazon), you can conceive of an accessible speech-based reading solution for around $500. I'm interested to see where that goes. I thought the APH display was 18 cells, but the device is called the Orbit Reader 20, so it's 20. We also talked about drag and drop, and probably will later in this call. Somewhat related to that is drawing in general, tactile input: Orbit and APH were demoing their Graffiti product, a full-page tactile display that can show braille or graphics and be animated by refreshing portions of the display. You can actually draw by touching the device and feeling the dots swell up under your finger, and you can raise your finger if you wish to affect the elevation of the dots, so it's possible to begin drawing tactilely. There is a lot of interface work that needs to go into this, but the basics and the technology are there.

John: I was surprised to hear you can show braille. I didn’t think you could do that.

Sina: When I was talking with APH they were saying the intended use isn't braille, but you can shoehorn braille onto it. I can get a definitive answer on that.

Charles: What is the size of the page?

Sina: It was a demo unit; the actual size will be bigger than what they were showing. I want to say it was like 40 dots by 40 dots; I can get you a more precise answer. It was a reasonable surface; we're not talking about a thin rectangle that is only half an inch high. The production unit will, I'd say, be bigger than an 8.5 by 11 sheet of paper. Even if you can't do braille on it, and John raises a good point about the four-millimeter spacing, the graphic properties were very interesting.

Neil: It's 40 by 60 dots.

Recap of the Pre-CSUN accessibility code sprint

Charles: We had a great time at the code sprint, with around 25 developers. It was one full day on Tuesday. We split into six different groups, and I'm going to ask each one to give a short recap of what was accomplished that day.

Data Visualization using Hover Box on SVG Map

Doug: It wasn't just data visualization. It was specifically what's called a hover box or tooltip: an info box that pops up when you mouse over a state or a bar in a bar chart. This is a well-known accessibility issue, because hover states aren't usable from the keyboard or on mobile devices. So I came in with a working hover box that works as expected with a mouse, and I made it keyboard accessible with the help of a lot of other people. We also made it accessible to people with low vision and started to make it accessible on mobile. It worked with JAWS and VoiceOver: you could tab through the different options, and using ARIA live regions it would announce information for whatever state you were on. We have work to do to make it work correctly on mobile, but hopefully it's a drop-in solution that someone can use to supplement their existing hover box.
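The keyboard layer described above can be reduced to a small pure function. The region data shape, key names, and announcement format below are illustrative assumptions, not the sprint code.

```javascript
// Given the focusable regions (states, bars), the current index, and a key,
// return the new index and the text an ARIA live region would announce.
// The announcement mirrors what the visual hover box shows on mouseover.
function hoverBoxKeydown(regions, index, key) {
  let next = index;
  if (key === 'ArrowRight') next = Math.min(index + 1, regions.length - 1);
  if (key === 'ArrowLeft') next = Math.max(index - 1, 0);
  const region = regions[next];
  return { index: next, announce: `${region.name}: ${region.value}` };
}
```

A real implementation would wire this to a `keydown` handler and write `announce` into an `aria-live="polite"` element.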

Charles: That is something we would like to grab the code for and get it into our repo once it’s in a nice polished state.

Doug: Wonderful

Sina: Is there any low-hanging fruit in jQuery UI or Bootstrap that this could augment as a proof of concept?

Doug: I don't know; I don't use those libraries, but once I finalize the code I could reach out to those folks and ask what the state is there. Another part of this is that it is meant to be generic, so it will extract data and display it. Maybe by leveraging jQuery or another library like that we can get those conventions worked out better.

Sina: The other thing that comes to mind is D3 and C3 as the graphing component.

Drag and Drop

Sina: We worked as a team on the functionality of drag and drop. Several categories of drag and drop were discussed in terms of the goal of using it. One is sorting: if you wish to sort a list of ten items, one potential approach is to visually manifest it as a drag and drop, where you drag from a source list to a destination, or do it in place, moving something from the bottom to the top. That raised the question of what the point of drag and drop is in that example. If you can see, the items are right in front of you, so how do you convey that same advantage to a blind or low-vision user? What we did was build an example with JavaScript and HTML: we took a list box but made it aware of keyboard input as well. We came up with a simple key mapping (like U, S, D) and assigned keys to move items up and down, so you could do in-place sorting, with audio feedback announcing the moves in real time and letting you know where you are, like "item five now before item four," which gives you local context. If you look in the code repository there's a little demo that generates ten random numbers and asks you to sort them, and you can do so with a mouse or a keyboard. The other example is one I've touched on: taking from one list and populating another. You can look at the code repository; using keystrokes and HTML it allows for rapid navigation. We talked about a few other cases but didn't have time to come up with solutions: cases where it's not only the drag and drop but the journey that matters as well, such as taking something from a source to a destination over an intermediate area. How do you deal with that situation?
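The in-place sorting idea can be sketched as a pure function. The key bindings and announcement wording here are assumptions; the sprint repository has the real demo.

```javascript
// Move an item up ('u') or down ('d') in a list and produce the kind of
// local-context announcement described above, e.g. "b now before a".
function moveItem(items, index, key) {
  const list = items.slice();
  const delta = key === 'u' ? -1 : key === 'd' ? 1 : 0;
  const target = index + delta;
  if (delta === 0 || target < 0 || target >= list.length) {
    return { items: list, index, announce: null }; // no move possible
  }
  // Swap the item with its neighbor in the chosen direction.
  [list[index], list[target]] = [list[target], list[index]];
  const relation = delta === -1 ? 'before' : 'after';
  return {
    items: list,
    index: target,
    announce: `${items[index]} now ${relation} ${items[target]}`,
  };
}
```

In a page, `announce` would be fed to a live region so the move is spoken as it happens.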

Charles: Any questions for Sina on drag and drop? One thing I’d like to mention is that we will continue that work in the drag and drop sub-committee, so thanks Sina for that.

Unicode / Nemeth Braille

Charles: Next I'll give an update on the Nemeth/UEB braille table that Murray Sargent was working on. Murray, from Microsoft, created a Unicode braille table, basically trying to map Nemeth to UEB.

Neil: He was trying to find out which Nemeth codes correspond to which Unicode code points, and where there are gaps that could be filled in.

Charles: He was working on the table during the code sprint, along with Neil a bit and Volker a bit; a lot of people moved around during the event. Volker took the code and used it for MathJax.

Neil: I've been in touch with the UEB folks about the mapping from Unicode to UEB since he started working on this table. It would be useful for software developers to have, because documentation is lacking. The table will have both Nemeth and UEB, so it's also useful as a comparison of where they differ, and UEB will probably add things that are in Nemeth that they hadn't thought about yet.

Glinda: This is a tool that would be very helpful for university teachers. Will this be available, and who should I talk to?

Neil: Yes, it will be available to everyone. Murray is part of the technical committee and is planning to publish an annex that would be this table, and in talking with people on the UEB side, they are interested in getting it out there too, because they realized they don't have a good set of resources.

Glinda: Are you talking about the North American braille group?

Neil: Yeah, they're either in the US or Canada. Murray's goal is to have it published by next spring; he's pretty busy this spring and summer because he's retiring and needs to finish things off, but he's planning to work on this once he's retired. And it will be a while, because the UEB folks need to provide details on how to get extra characters and think about the issues Murray found and the things I've pointed out about how to deal with certain characters.

John: This is just the characters?

Neil: Yes, just the characters: if you want to look up the double equals sign, what does that correspond to in Nemeth and in UEB? I know as a software developer this would be very useful, because it would be provided in a form from which you could extract your own table.

Charles: Some of this even comes up in MathML's representation of certain characters. Why is it saying "hyphen" when it's a minus sign? Because the content uses the Unicode for a hyphen, not a minus.

Neil: MathPlayer looks at these kinds of things and tries to determine what common mistakes people make. There are other things, like using an apostrophe for a prime, which is a common mistake.

Charles: So there are exceptions we should consider.

Neil: As MathJax matures, they will realize that the input you get from people isn't always perfect.

John: If you put a minus sign into MathType it crashes.

Sina: An update on the other side of the conversation: I've had conversations with Volker and Jamie Teh. I think it's on the right path, looking at making sure the active node is trackable on the braille display, so if a student is on an integral symbol and hits the routing key, they will be taken to the right thing, and we avoid a situation where the Unicode of the braille is embedded in the document. Resolving some of these tech issues isn't glamorous, but it's critical, and it's going on in parallel.

Neil: I’m confused, what’s the issue with the Unicode character and integral sign?

Sina: Remember when Murray had the Unicode showing on the screen? That's not what you want a screen reader to read or display, because that's not how you want the information to be presented for translation to the braille display. The idea is to provide it in a single form so you have a single source of representation, not multiple passes. So we're trying to resolve that.

Derek: How are we going to do this? Can we modify MathPlayer anymore?

Sina: That's MathJax.

Neil: There's a new version of MathPlayer that will come out after I get the latest Liblouis incorporated; at least they said that would happen.

Accessibility Live Updates For Simulations (Priority Utterance Queue)

Charles: Jesse, can you give an update?

Jesse: We were working at a table that was interested in accessibility alerts through ARIA live elements in an interactive context. We got interested in the fact that there is no way to specify the order of alerts, or what to do when there are a lot of alerts you want to provide at one time. Say you have an interactive with a capacitor that is charged: at that moment many things can happen, like the charge distribution changing, currents flowing, the bulb turning on and starting to dim, and so on, and it would be great to have a way to tell user agents that we want to say all those things, in order. So we worked on a queuing system that uses vanilla JavaScript to update ARIA live elements with that content in order. It's not a revolutionary concept; it's just a queue that updates text content at the right time, and it works really nicely with interactive content. It has some intelligence in that it takes things out of the queue based on type: if something animating across the screen generates 150 alerts as it changes position, only the last will be spoken, so the user isn't overwhelmed with similar or stale information. In the GitHub repository for the code sprint there is a toy example of how it can be used.
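The type-based replacement behavior described above can be modeled in a few lines of plain JavaScript. This is a stripped-down sketch, not PhET's implementation; the class and flag names are assumptions.

```javascript
// A minimal priority utterance queue: alerts of the same type replace each
// other so only the freshest one is spoken; a `persist` flag opts an alert
// out of that replacement.
class UtteranceQueue {
  constructor() {
    this.queue = [];
  }
  add(text, type, persist = false) {
    if (!persist) {
      // Drop stale alerts of the same type before queueing the new one.
      this.queue = this.queue.filter(u => u.type !== type || u.persist);
    }
    this.queue.push({ text, type, persist });
  }
  // The real system would write to an aria-live element on a timer; here we
  // just pop the next utterance, or null when the queue is empty.
  next() {
    const u = this.queue.shift();
    return u ? u.text : null;
  }
}
```

An animation emitting many position alerts would then surface only its latest one, while a one-off alert like "bulb on" survives alongside it.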

Charles: So will we continue the utterance queue and make it more generic so it can be included in the DIAGRAM code repository?

Jesse: Sure. What we were working on was tested in a PhET simulation, but it's all in its own context, so it can totally be taken and placed anywhere that would be helpful.

Derek: One thing I noticed might be a problem is that you don't know when the screen reader has spoken something, so you don't know whether to tell it to cancel or not. Would you be interested in working on specifying something like SpeechML, so you could have a marker based on the screen reader having read it already?

Jesse: That is a critical flaw; there is no way to know when a screen reader has started or finished. At the moment it adds things to the queue with a little bit of a delay, which is kind of crude, so if we could take a look at that it would be invaluable.

Charles: If you add the same item to the queue, does it tell you that it's already third in the list and replace it with the new update?

Jesse: It won't replace what's already there; it will remove it. If you add something of the same type (both alerts are about the same thing), it will remove what was previously on the list, because it assumes that's stale information. You can give it a different flag so it won't remove it.

Personalization of Websites

Charles: Marisa, do you want to give an update on the personalization work?

Marisa: I'll mention a bit about Lisa's project, which is a personalization project with COGA. Her project, which she introduced me to, adapts a website interface based on properties that the COGA specifications talk about. The website has properties hardcoded into it, and these combine with a user profile: the properties identify different areas of the site and say to present these types of things in this manner. She thought of a plugin that takes the properties and combines them with a user profile; it might simplify the site, or it might make terminology more consistent. A user might always want to see something called home, or front page, or main, and it would change the label to that so the user would feel more familiar with it. What I worked on was a Chrome extension that extends this model by making it possible to inject the properties into a site via JavaScript, so you can take the properties and add them on the fly through an extension rather than having them hardcoded. That makes it possible for the site to have a coded interface without the original web developers needing to know about it.
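The terminology-consistency part of this could look something like the function below. The property and profile shapes are invented for illustration and are not from Lisa's project or the COGA drafts.

```javascript
// Given elements tagged with an injected `purpose` property and a user
// profile mapping purposes to preferred terms, rewrite labels so that,
// e.g., "front page" or "main" always reads as the user's chosen "Home".
function personalizeLabels(elements, profile) {
  return elements.map(el => {
    const preferred = profile[el.purpose]; // e.g. profile.home === 'Home'
    return preferred ? { ...el, label: preferred } : el;
  });
}
```

In the extension, the `purpose` tags would be injected into the live page via script rather than hardcoded by the site's developers.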

Charles: Could you add in CSS styling potentially for different elements?

Marisa: I think her project covers that. Her project deals more with the scope of the user's experience based on what the properties are.

Charles: We had a couple of UI and UX experts who helped the different groups consider options for making an accessible user interface. The last project we worked on was Volker adding partial braille support to MathJax.

Neil: There was also the one you and I were on, about the EPUB standards.

Sina: The latest on Volker is the conversation with Jamie about embedding the braille, which I talked about earlier.

Math in EPUB (Neil Soiffer Lead)

We added a PNG image of math with alt text from MathML Cloud, then added the MathML after the image and used CSS to hide the MathML offscreen. This seemed to work when we extracted the XHTML from the EPUB and loaded it in a web browser, but the EPUB in various readers usually spoke both the alt text and then the MathML. We did find Lucifox, a Firefox plugin that can read EPUB books in Firefox; it would highlight the image and speak the alt text (even though it was supposedly hidden), and then if you moved to the next element it would visually put a small box on the top right of the image, where it would speak the MathML, and then you could enter the MathML if you wanted to navigate it.

Charles: The one Neil and I worked on was showing MathML in an EPUB and some of the inconsistencies with double speaking of alt text. And we found a plugin, Lucifox, which can open up EPUBs and has some MathML support.

Neil: There are others, like Calibre, that support MathML. So there are some, just not that many.

Charles: Lucifox showed the image and spoke the alt text, and if you navigated to the next element it put a box around it visually and started speaking the MathML; you could then dive into it and navigate it with your screen reader. That was interesting, but we want to come up with a better solution. So that's the recap of what we did at our code sprint at CSUN.

So in our last six minutes I want to discuss ideas that came out of the code sprint that we didn't get to. Some of the things we didn't work on were:

  • Parallel DOM support for Macmillan simulations – having two DOMs, one regular and one for AT
  • Natural language interface – PhET simulations like John Travoltage and the static balloon sims
  • Graphics manipulation and sonification
  • Chemistry equations
  • Electronic circuits
  • Physics
  • Atom builders
  • Generate accessible SVGs
  • STEM interactives
  • Augmented reality work
  • Personalized learning

So those were some of the areas people were interested in that we didn't work on. Is there anything else anyone can remember that was a hot topic we didn't get to?

Doug: I wanted to note, regarding the accessibility DOM, that Google has announced an intent to implement the new Accessibility Object Model. This would be an API for dynamically modifying the accessibility tree using JavaScript, so it's not exactly what you were talking about, but it's very relevant. I can send out a link to that.

Sina: That's the same thing, and it's as scary as it sounds, because you get write access to the accessibility tree. We've seen what happens with ARIA gone awry; this has the potential to enable some incredible work, but also some interesting and not-so-great results as well. The other one is SVG 2: taking a look at SVG 2 in terms of what's available there for labeling components. I just want to give a standards-based shout-out to that.

Neil: On the accessibility API, I was looking at the Chrome page; there are mixed public signals from Firefox, none from Edge…

Sina: This is brand new. Doug, do you have more information?

Doug: Frankly, knowing how standards work, I wouldn't expect anything else; until somebody ships something you're going to have mixed feedback.

Neil: My concern is that Chrome has announced many standards and very few have gone anywhere.

Doug: I can tell you from being in the session on this at TPAC that there was a lot of interest from a lot of different people, both in accessibility and in the API space, and James Craig at Apple is very committed to it. If Google and Apple can work something out, I think you'll pretty quickly see Edge follow. The reason you get mixed signals is that any time you introduce something new to a browser, the browser vendors say it's more work; that's the fundamental underlying block. But if Google and Apple do this, you'll quickly see Microsoft fall into line, because there's a big focus on accessibility there, and once they do, Firefox will have no choice but to do so as well.