
accessibility-booster

AI-powered accessibility layer for any webpage — six toggles, zero redesign required.


Live demo →


The problem

94.8% of the top one million websites fail basic accessibility checks, averaging 51 errors per page. Those failures lock out the 1.3 billion people worldwide who live with a disability.

Retrofitting is expensive. Redesigns take quarters. Most teams ship inaccessible experiences not from indifference but because the tooling makes accessibility a design-phase concern, not a runtime one.


The approach

Multimodal AI models treat format as a parameter, not a constraint. accessibility-booster injects six AI-powered transforms at inference time — no redesign, no CMS changes, no new build pipeline.
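As a concrete illustration of one such transform, here is a sketch of the request body the alt-text toggle might send to the Anthropic Messages API. The function name, model name, and prompt wording are assumptions for illustration, not this project's actual code.

```typescript
// Hypothetical sketch: the payload one toggle could send to the Anthropic
// Messages API to caption an image that shipped with alt="".
interface AltTextRequest {
  model: string;
  max_tokens: number;
  messages: {
    role: "user";
    content: (
      | { type: "image"; source: { type: "base64"; media_type: string; data: string } }
      | { type: "text"; text: string }
    )[];
  }[];
}

function buildAltTextRequest(base64Png: string): AltTextRequest {
  return {
    model: "claude-sonnet-4-5", // assumed; any vision-capable Claude model works
    max_tokens: 100,
    messages: [
      {
        role: "user",
        content: [
          // Image first, then the instruction, per the Messages API's
          // multimodal content-block format.
          { type: "image", source: { type: "base64", media_type: "image/png", data: base64Png } },
          { type: "text", text: "Write concise alt text for this image (one sentence)." },
        ],
      },
    ],
  };
}
```

The same request shape, with the image block dropped and the prompt swapped, would cover the plain-language, captions, and translation toggles.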


Six features

| Toggle | What it fixes | WCAG criterion | Tech |
|---|---|---|---|
| Alt text | Images with `alt=""` get AI-generated contextual descriptions | 1.1.1 Non-text Content (A) | Claude Vision |
| Plain language | Grade 16 jargon rewritten to a Grade 6 reading level | 3.1.5 Reading Level (AAA) | Claude |
| Audio | Spoken article summary generated on demand | 1.1.1 Non-text Content (A) | Web Speech API |
| Captions | Flat transcript → expressive captions with tone markers | 1.2.2 Captions (A) | Claude |
| Translation | Dutch, Spanish, or French on demand | 3.1.1 Language of Page (A) | Claude |
| Contrast | Text contrast raised from 2.8:1 (failing) to 12.6:1 (AAA) | 1.4.6 Contrast (Enhanced) (AAA) | CSS transform |

Each toggle updates a live accessibility score and shows the before/after impact.
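The contrast figures above (2.8:1 failing, 12.6:1 passing AAA) come from the WCAG 2.x contrast-ratio formula, which can be computed in a few lines. Helper names here are illustrative, not this project's API:

```typescript
// WCAG 2.x relative luminance of one sRGB channel (0-255).
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of a 6-digit hex color like "#1a2b3c".
function luminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const [r, g, b] = [(n >> 16) & 255, (n >> 8) & 255, n & 255];
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), from 1:1 to 21:1.
function contrastRatio(fg: string, bg: string): number {
  const [lighter, darker] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}
```

Black on white is the ceiling at 21:1; AAA body text (1.4.6) requires at least 7:1.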


Screen reader simulation

The Screen Reader View panel shows the accessibility tree as VoiceOver or NVDA would traverse it:

  • Readable nodes — landmarks, headings, interactive controls (via aria-label), content, live regions, sr-only warnings
  • aria-hidden nodes — decorative elements that screen readers skip
  • Read aloud — chains SpeechSynthesisUtterance calls node by node, highlighting each row as it is spoken
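The read-aloud chaining described above can be sketched as follows. The node shape and function names are my assumptions; the `onend`-driven chaining of `SpeechSynthesisUtterance` calls is the pattern the panel describes:

```typescript
// Minimal sketch of the read-aloud loop. Nodes marked aria-hidden are
// skipped, mirroring what a real screen reader does; the rest are spoken
// in document order by chaining each utterance's `onend` callback.
interface A11yNode {
  label: string;        // e.g. "banner landmark" or "heading level 1: The problem"
  ariaHidden: boolean;  // decorative nodes that screen readers skip
}

function speakableLabels(tree: A11yNode[]): string[] {
  return tree.filter((n) => !n.ariaHidden).map((n) => n.label);
}

function readAloud(tree: A11yNode[], highlight: (label: string) => void): void {
  const queue = speakableLabels(tree);
  const next = (i: number) => {
    if (i >= queue.length) return;
    highlight(queue[i]); // e.g. add a CSS class to the matching panel row
    const u = new SpeechSynthesisUtterance(queue[i]);
    u.onend = () => next(i + 1); // chain: speak the next node when this one ends
    speechSynthesis.speak(u);
  };
  next(0);
}
```

Chaining on `onend` rather than queueing everything at once is what lets the panel highlight exactly one row at a time.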

Quick start

npm install
npm run dev

Create a .env file in the project root:

VITE_ANTHROPIC_API_KEY=your_key_here

Without the API key, the demo runs on pre-generated responses so the toggles still work.
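That fallback path could look something like this sketch (function names and the canned-response table are assumptions; in the app the key would come from `import.meta.env.VITE_ANTHROPIC_API_KEY`):

```typescript
// Hypothetical offline fallback: with no API key, serve a pre-generated
// response for the feature, or echo the input unchanged.
const cannedResponses: Record<string, string> = {
  "plain-language": "This article explains the idea in short, simple sentences.",
};

async function transform(
  feature: string,
  input: string,
  apiKey: string | undefined,
  callClaude: (feature: string, input: string) => Promise<string>,
): Promise<string> {
  if (!apiKey) {
    // Demo mode: toggles still do something visible without a key.
    return cannedResponses[feature] ?? input;
  }
  return callClaude(feature, input);
}
```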


Tech stack

Vite, the Claude API (vision and text), the Web Speech API, and CSS transforms.

Why architecture, not retrofit

Traditional software has one output channel. A website is pixels. If you cannot access that channel — because you are blind, have low literacy, use a slow connection, or are in a loud environment — the content is not for you.

Multimodal AI models accept any input and produce any output. Format becomes a response parameter, not a design phase. When accessibility is an architectural property rather than a feature, it is free at inference time.

The 94.8% failure rate is a failure of intent, not technology. The tools already exist.


License

MIT — see LICENSE

Built by Oleksander Derechei
