On Accessibility #196
Hi! I am building support for 'navigation trees' into makepad, which would be the backbone to add accessibility on. Currently I use it for keyboard navigation of the ironfish UI. The idea, however, is that it should also be able to spawn a shadow DOM tree for ARIA, or an NSView tree on macOS, for traversal by accessibility tools. I do still think that AI-based image decomposition/structure-extraction tooling is going to sweep the stage soon enough, though.
Hey, Rustybuzz support was just merged in, so Unicode support is already taking its first steps, albeit with some performance issues. We are working on ticking the boxes. I also have an accessibility backbone in the system; we just have to work out the details on all the platforms. This is a long-haul project. If it doesn't fit your requirements right now then you shouldn't use it, which seems very likely at this point in time, because it is only just starting to fit our own requirements for building an IDE and applications. We are building out in the open, so you can see everything we are doing. That does not mean we pretend our stack is complete yet.

And yes, I think AI will sweep accessibility soon enough. Why? Have you seen how fast AI is already moving in understanding our world? Image segmentation, image-to-text, understanding intent, and so on. I'm building the infrastructure to integrate with the system screen readers (as on Apple and the web), but otherwise I don't really believe in the standardisation and categorisation insanity that is, for instance, ARIA on the web. Have you seen the category lists on that thing? Nobody uses a list that long; 5-10 types, tops. Beyond that, it is the wrong API model for a UI stack: I need a query model, not a document generator.

By the way, I'm not disinterested; I fixed the readme link immediately. However, we are not yet focused on scaling use of the application framework. It is like launching Visual Basic without an IDE/editor. Once we are there we'll ramp up communication/documentation efforts to fit, but from a central onboarding application.
You seem somehow convinced I won't do accessibility the old way. We are going to do that. I just think the model is aging and will be replaced soon, at least for people on fast compute.
What kind of AI do you think is going to do this?
What are the input(s) and output of this hypothetical AI?
Since you are someone who talks about memory size and dependencies a lot, how many GB will it take and how much accuracy and latency do you think such an AI will have? Do you expect this to be implemented magically on all platforms "soon enough"?
Accessibility occurs at a pretty low level and has very complex interactions with different components. Further it is a many-to-many problem, which is why it requires app or content authors to do the categorization and the GUI platforms to surface the categories. What indication is there in the literature that AI is going to "sweep the stage" of UI accessibility?
My guess is Apple will be the first to offer image-based accessibility built into the OS, instead of looking at the DOM tree's often very messy structural representation of what you see on a website. Such accessibility might even run on something like the Apple AR headset, making it 'generally' applicable to the entire world and not just a screen: reading a microwave or a road sign as easily as a UI.
The primary problem of accessibility, as I understand it, is having a tree structure of a UI available to 'traverse' via screen reader, and then performing actions on a selected node: click/focus or enter text, roughly.
However, the rendering structure of an application and an efficient structure for accessibility are also not the same.
Anyway, I think you are right; I too easily flipped the accessibility question to AI. We'll put more work into integrating current accessibility tech into the system.
By the way, the latency of Apple's image recognition is extremely low. Anyhow, you don't need to convince me of the importance of accessibility tech for a UI framework, and I'm sorry if my response made everyone think 'I don't care'. I do, even just from a humane point of view. I'll try to allocate more resources to it soon; I'm just juggling my priorities so we can get this insane project to some kind of value as soon as we can.
See, as an engineer my mind goes: "low" under what conditions? We're talking 10-20 MB per image at 60-120 fps. That is how much other memory may need to be evicted from the caches. You've also not mentioned what the power draw would be. This cherry-picking of traits really doesn't do much for me, especially when we're talking about a toolkit that has to be used on various architectures (even just on Apple systems; take the watch, for example). Really lost me on this one, my friend. Guess I'm quite influenced by the John Carmack school of analytical software engineering. Agree on the rest of your message. Good luck. BTW, is there a guide for writing an app outside of the makepad repo and getting it to compile and run for WebAssembly? I haven't tried yet but am curious about any possible footguns doing this.
Ahahaha - every user of Harfbuzz runs into this at some point. The solution is to cache the shapes at the word level. Even Behdad, the creator of Harfbuzz, has acknowledged this. It works, and how!
Yeah, I just merged it in because performance really doesn't matter at this point, as nobody relies on the stack yet: merge it in, fix the paper cuts, and move on. Caching the word shapes was indeed up next.
I meant: is there anything special about writing an app using makepad-widgets while keeping the code in a separate repo? I asked because the webserver looks like it may have an opinion about where the compiled code actually lives.
Better if I open a separate issue - don't want to pollute OP's issue.
Ah yes, we haven't been pushing to crates.io much because we haven't really moved to the external-use stage yet. Right now, pulling the repo into a subdirectory of your application and setting the include paths would be the way to go.
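That subdirectory setup would presumably look like a plain Cargo path dependency. A sketch, assuming the repo is cloned into `./makepad` and the widgets crate sits at `makepad/widgets` (adjust the path to wherever the crate actually lives in the tree):

```toml
# Your app's Cargo.toml, with makepad cloned into ./makepad (hypothetical layout).
[dependencies]
makepad-widgets = { path = "./makepad/widgets" }
```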
As for WASM builds, we really need to complete the cargo-makepad extension for wasm. Right now it's a shell script.
So to answer your question about building wasm 'outside of our repo': it's a bit janky, yeah, for sure.
The problem with WASM builds is that they need huge custom command lines, use nightly, and often need to compile the standard library, as you can see in the shell script.
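For readers unfamiliar with this class of build, a nightly wasm build that recompiles the standard library typically looks something like the following. The flags are standard cargo/rustc ones (`-Z build-std` requires nightly); this is an illustrative sketch, not makepad's actual script:

```shell
# Illustrative nightly wasm build; not makepad's actual build script.
cargo +nightly build \
    --release \
    --target wasm32-unknown-unknown \
    -Z build-std=std,panic_abort
```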
I wouldn't worry about the OP (me), who is both fascinated and delighted by the exchange, and by seeing people with much more expertise entering possible collaborations. Thank you both!
Hi there 👋 I watched Rik Arends' presentation at RustNL. Fabulous project (I immediately posted about it on the Fediverse in excitement). IMHO, together with "local-first", this shows a paradigm beyond Web 2.0 where dynamic web apps leave the browser to be what it was meant to be: serving hypermedia. And it breaks the stranglehold of Google on the web.
BUT..
In the presentation, Rik mentioned that for accessibility, AI would most likely soon do the heavy lifting for us. Please don't let a11y fall by the wayside like that. It should be a first-class citizen of any GUI framework, IMHO. On the Fediverse there are many visually impaired people, and I notice their huge frustration and continuous advocacy to see their needs addressed in so many pieces of software.
In my toot there's a link to this HN submission, where Ian Hickson, long-time HTML spec editor (and, ironically, at Google Flutter), defends a vision "Towards a modern web stack", listing ARIA as a standard to support alongside WebGL and WebAssembly. See his Google Doc.
PS. A heads-up about this HN thread mentioning Makepad, where commenters also express similar concerns.