
johnsonfarmsus/ouridentify

OurIdentify

Community-Powered AI for Blind Accessibility

A fully autonomous, community-driven object identification system using GitHub Actions, Bluesky, and Grove Vision AI V2.

License: AGPL v3 · Status: Community-driven


What is OurIdentify?

OurIdentify is an open-source, portable device that helps blind and low-vision users identify objects in their daily lives. What makes it unique is that the community directly improves the AI model through natural conversation on Bluesky.

How It Works

  1. You use the device - Point at an object, press the power button, and hear what it is
  2. Community helps improve it - When the device is uncertain, press a button to ask the community on Bluesky
  3. AI learns automatically - Every day, the model trains on community feedback and updates your device
  4. Everyone benefits - As the community helps, all devices get smarter together

Key Features

  • ✅ Portable & Battery-Powered - Take it anywhere (kitchen, bathroom, office, travel)
  • ✅ 3-Minute Sessions - Continuous identification mode with auto-power-off
  • ✅ Privacy-First - You control what gets posted; images only shared when you press a button
  • ✅ Community-Powered - Blind users and allies directly improve the model through Bluesky
  • ✅ Fully Autonomous - GitHub Actions automatically trains and deploys new models
  • ✅ Zero Cost Forever - Runs entirely on free GitHub infrastructure ($0/month)
  • ✅ Transparent & Auditable - Everything is open source, no black boxes
  • ✅ Built to Last - Survives individual developers leaving; community-owned forever

Quick Start

For Users

Status: Currently in Phase 0 (Hardware Validation)

We're validating the hardware and building the initial prototype. If you're interested in being an early tester:

  1. ⭐ Star this repository to stay updated
  2. 📖 Read the Complete Guide to understand the system
  3. 💬 Join the conversation on Bluesky: @ouridentify.bsky.social
  4. 📝 Open a GitHub Issue to express interest in testing

For Developers & Contributors

We need your help with:

  • 🔧 Hardware validation (ESP32-S3 + Grove Vision AI V2)
  • 🤖 YOLOv8 training pipeline development
  • 📱 ESP32 firmware development
  • 🔄 GitHub Actions workflow creation
  • 🌐 Bluesky API integration
  • 📚 Documentation and accessibility guides
  • 🎨 3D-printed enclosure design

See CONTRIBUTING.md for how to get started.


Project Status

| Phase | Status | Description |
|-------|--------|-------------|
| Phase 0 | 🟡 In Progress | Hardware validation & initial dataset |
| Phase 1 | 🔴 Not Started | Hardware assembly & firmware MVP |
| Phase 2 | 🔴 Not Started | Full automation (GitHub Actions + Bluesky) |
| Phase 3 | 🔴 Not Started | Optimization & production-ready |
| Phase 4 | 🔴 Not Started | Community growth & expansion |

Current Milestone: Validating ESP32-S3 ↔ Grove Vision AI V2 communication at 921,600 baud
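As a rough illustration of this validation step, the sketch below decodes one inference event from the Grove Vision AI V2 on the host side. The JSON event shape and the `AT+INVOKE` command follow the SSCMA firmware conventions used by Seeed's modules, but the exact field names, the 0-100 score scale, and the serial device path are assumptions, not verified against this repo's firmware.

```python
import json

def parse_invoke_event(line: str, labels: list[str]) -> list[tuple[str, float]]:
    """Turn one JSON event line from the Grove module into (label, confidence)
    detections. Boxes are assumed to be [x, y, w, h, score(0-100), class_id]."""
    event = json.loads(line)
    detections = []
    for x, y, w, h, score, cls in event.get("data", {}).get("boxes", []):
        if 0 <= cls < len(labels):
            detections.append((labels[cls], score / 100.0))
    return detections

# Host-side usage with pyserial (shown for context only):
# with serial.Serial("/dev/ttyUSB0", 921_600, timeout=2) as port:
#     port.write(b"AT+INVOKE=1,0,0\r\n")  # request one inference
#     print(parse_invoke_event(port.readline().decode(), LABELS))
```

On the finished device the same framing would be handled by the ESP32-S3 over its UART link to the module; this host-side version is only a convenient way to confirm the 921,600-baud protocol before writing firmware.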


How Does It Work?

The Device

  • ESP32-S3 - Main processor with WiFi
  • Grove Vision AI V2 - Camera with on-device AI inference
  • 4 Buttons - Power, "Too Generic", "Incorrect", Spare
  • Speaker - Audio feedback via text-to-speech
  • Portable - Battery-powered, ~3-hour runtime per charge

The Workflow

1. User presses Power button
   ↓
2. Device runs for 3 minutes, identifying objects every 5 seconds
   ↓
3. If device is wrong or too generic, user presses a button
   ↓
4. Device posts image to Bluesky asking community for help
   ↓
5. Community replies with correct identification
   ↓
6. GitHub Actions collects replies daily (2 AM Pacific)
   ↓
7. When 10+ images across 3+ categories collected, model retrains
   ↓
8. Devices auto-update at midnight (local time)
   ↓
9. Everyone's device is now smarter!
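The retrain gate in step 7 can be sketched as a small check run by the daily job. The 10-image and 3-category thresholds come from the workflow above; the function name and data shape are illustrative.

```python
from collections import Counter

MIN_IMAGES = 10      # total labeled images required before retraining
MIN_CATEGORIES = 3   # distinct object categories required

def should_retrain(labeled_images: list[tuple[str, str]]) -> bool:
    """labeled_images: (image_path, category_label) pairs collected from
    community replies since the last training run."""
    counts = Counter(label for _, label in labeled_images)
    return sum(counts.values()) >= MIN_IMAGES and len(counts) >= MIN_CATEGORIES
```

Requiring multiple categories, not just an image count, keeps a burst of feedback about a single object from triggering a retrain that would skew the model.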

The Infrastructure

Everything runs on free, public infrastructure:

  • GitHub Actions - Automated training (2,000 min/month free)
  • GitHub Releases - Model distribution (free CDN)
  • Bluesky - Community feedback (free, decentralized protocol)
  • ESP32 - Local inference (no cloud costs)

Total Cost: $0/month. Forever.


Why This Matters

For Blind Users

  • Independence - Identify items in your kitchen, bathroom, office without assistance
  • Privacy - On-device processing; you control what gets shared
  • Trust - Open source means no exploitation, no hidden agendas
  • Ownership - Community-driven means it's built for you, by you

For the Community

  • Sustainable - Designed to outlive any single developer or organization
  • Accessible - Built specifically with accessibility in mind, not retrofitted
  • Transparent - Every decision, every line of code, every model update is public
  • Empowering - Blind users directly improve the AI through natural interaction

For Developers

  • Novel Architecture - Reference implementation for sustainable, community-driven AI
  • Zero-Cost ML Ops - Demonstrates how to run ML training/deployment on free infrastructure
  • Ethical AI - Model where users have agency and control over their data
  • Extensible - Easy to fork for other use cases (education, specialized workplaces, etc.)

Documentation

| Document | Description |
|----------|-------------|
| ouridentify_guide.md | Complete technical guide (architecture, hardware, workflows) |
| docs/SETUP.md | Hardware assembly and firmware flashing guide |
| docs/CONTRIBUTING.md | How to contribute to the project |
| docs/FIRMWARE.md | ESP32 firmware development guide |
| docs/ARCHITECTURE.md | System design details |
| docs/BLUESKY_WORKFLOW.md | How Bluesky integration works |

Hardware Requirements

Total Cost Per Device: ~$65

| Component | Purpose | Cost |
|-----------|---------|------|
| ESP32-S3-DevKitC-1 | Main processor | $5 |
| Grove Vision AI V2 | Camera + ML accelerator | $30 |
| 4 Buttons | User input | $5 |
| Speaker with Amp | Audio feedback | $5 |
| USB-C Cable | Power + data | $5 |
| Power Bank | Portable power | $15 |

See docs/SETUP.md for detailed assembly instructions.


Technology Stack

Hardware

  • ESP32-S3 - Main MCU with WiFi/Bluetooth
  • Grove Vision AI V2 - Himax WE2 chip with on-device inference
  • UART @ 921,600 baud - Communication between ESP32 and Grove

Software

  • YOLOv8 Nano - Object detection model
  • TensorFlow Lite - Inference on embedded devices
  • Ethos-U Vela - Model optimization for Grove's NPU
  • Arduino/ESP-IDF - ESP32 firmware framework
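The export leg of this stack might look like the sketch below: Ultralytics exports the YOLOv8 Nano model to int8 TensorFlow Lite, then the Vela compiler optimizes it for the module's Ethos-U NPU. The `ethos-u55-64` accelerator config, the 192-pixel input size, and all file paths are assumptions for illustration, not values taken from this repo.

```python
def vela_command(tflite_path: str) -> list[str]:
    """Build the Vela CLI invocation that compiles an int8 TFLite model
    for the Ethos-U NPU on the Grove Vision AI V2 (Himax WE2)."""
    return [
        "vela", tflite_path,
        "--accelerator-config", "ethos-u55-64",  # assumed NPU variant
        "--output-dir", "build",
    ]

def export_model() -> None:
    # Deferred import so the helper above stays usable without ultralytics.
    from ultralytics import YOLO
    import subprocess

    model = YOLO("yolov8n.pt")  # nano variant, small enough for on-device use
    model.export(format="tflite", int8=True, imgsz=192)  # quantize for the NPU
    subprocess.run(vela_command("yolov8n_saved_model/yolov8n_int8.tflite"),
                   check=True)
```

Int8 quantization is what makes the model runnable on the NPU at all; the float model would neither fit nor be accelerated.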

Infrastructure

  • GitHub Actions - Automated training and deployment
  • GitHub Releases - Model distribution CDN
  • Bluesky ATProto - Community feedback protocol
  • Python 3.11+ - Training scripts

Community & Support

Get Involved

Code of Conduct

We are committed to providing a welcoming and inclusive environment. Please read our Code of Conduct before contributing.


Roadmap

Phase 1: Kitchen Module (Weeks 1-6)

  • ✅ Hardware validated
  • ✅ UART communication working
  • ✅ YOLOv8 exports to UF2
  • ✅ ESP32 can flash models
  • 🟡 50+ kitchen items trained
  • 🟡 End-to-end automation working

Phase 2: Bathroom Module (Months 3-6)

  • Medications and toiletries
  • Safety-critical items (allergens, expiration dates)
  • Builds on kitchen success

Phase 3: Office Module (Months 7-12)

  • Office supplies, documents, equipment
  • Workplace accessibility

Beyond

  • Community-maintained forks for specialized use cases
  • User-trained personal collections
  • Multi-language support

Sustainability Model

This project is designed to run forever, even if all original developers leave.

How?

  1. No Personal Servers - Everything runs on GitHub (free tier) and Bluesky (free protocol)
  2. Open Source - Anyone can fork, maintain, and improve
  3. Community-Owned - Blind users directly control model improvements
  4. Graceful Degradation - Devices work offline; updates optional
  5. Fork-Friendly - Modular architecture makes extending/adapting easy

If GitHub Changes Pricing?

  • Move to GitLab (similar free tier)
  • All code/models are portable
  • No vendor lock-in
  • Community maintains independence

The goal: Build something that outlives us.


License

This project is open source under the GNU Affero General Public License v3.0 (AGPL-3.0).

What this means:

  • ✅ Use it for any purpose
  • ✅ Study and modify the source code
  • ✅ Distribute copies and modifications
  • ✅ Strong copyleft: Modifications must also be AGPL-3.0
  • ✅ Network use = distribution: If you run a modified version on a server, you must make the source available
  • ✅ Protects community ownership forever

Acknowledgments

  • Built for and with the blind and low-vision community
  • Thanks to Seeed Studio for Grove modules and documentation
  • Thanks to Ultralytics for YOLOv8 and making ML accessible
  • Thanks to Bluesky for the open ATProto protocol
  • Thanks to the open source community for TensorFlow, Ethos-U, and all supporting tools

Contact


"Our community, our tool, our independence"

⭐ Star this repo to support accessible AI 🔗 Share with anyone who might benefit 💬 Join the conversation on Bluesky
