Fable

A voice-first iOS AI assistant that lets you talk to Claude through natural speech. Hold to talk, release to send — Fable transcribes your voice, routes it to an AI gateway, and speaks the response back using ElevenLabs TTS.

Features

  • Push-to-talk voice input via Apple's on-device Speech framework
  • Text input as an alternative to voice
  • AI responses via Claude (routed through an OpenClaw gateway over WebSocket)
  • Text-to-speech playback powered by ElevenLabs
  • Rich response cards for calendars, lists, maps, and code blocks
  • Conversation history within a session

Architecture

The app follows MVVM with a three-layer service architecture:

FableApp
└── ContentView
    └── FableViewModel (@MainActor)
        ├── GatewayService   — WebSocket connection to OpenClaw
        ├── SpeechService    — Microphone + on-device speech-to-text
        └── TTSService       — ElevenLabs HTTP + AVAudioPlayer
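As an illustration of the GatewayService layer, here is a minimal sketch of a WebSocket client built on URLSession. The message shape, bearer-token header, and method names are assumptions for illustration, not the repo's exact protocol:

```swift
import Foundation

// Sketch of a WebSocket gateway client. The Authorization header and
// plain-text message format are assumptions about the OpenClaw protocol.
final class GatewayService {
    private var task: URLSessionWebSocketTask?

    func connect(url: URL, token: String) {
        var request = URLRequest(url: url)
        request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
        task = URLSession.shared.webSocketTask(with: request)
        task?.resume()
        receiveLoop()
    }

    func send(_ text: String) async throws {
        try await task?.send(.string(text))
    }

    private func receiveLoop() {
        task?.receive { [weak self] result in
            switch result {
            case .success(.string(let message)):
                print("Gateway:", message)
                self?.receiveLoop()
            case .success:
                self?.receiveLoop() // ignore binary frames in this sketch
            case .failure(let error):
                print("WebSocket closed:", error)
            }
        }
    }
}
```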

Conversation state machine:

idle → listening → thinking → responding → idle
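The state machine above could be modeled as a simple enum driven by the view model. The case names follow the diagram; the `next` helper is an illustrative assumption about how transitions are sequenced:

```swift
// Conversation phases from the diagram. Transitions are linear,
// with responding looping back to idle.
enum ConversationState {
    case idle, listening, thinking, responding

    // Next state in the idle → listening → thinking → responding → idle loop.
    var next: ConversationState {
        switch self {
        case .idle: return .listening
        case .listening: return .thinking
        case .thinking: return .responding
        case .responding: return .idle
        }
    }
}
```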

See ARCHITECTURE.md for a deeper dive.

Setup

Prerequisites

  • Xcode 15+
  • An OpenClaw gateway running (WebSocket endpoint)
  • An ElevenLabs API key and voice ID

Configuration

Create fable/Secrets.xcconfig (already excluded from git):

GATEWAY_URL=wss://your-gateway-host/ws
GATEWAY_TOKEN=your-bearer-token
ELEVENLABS_API_KEY=your-elevenlabs-key
ELEVENLABS_VOICE_ID=your-voice-id

These values are injected into Info.plist at build time and read by Config.swift. Note that "//" begins a comment in xcconfig files, so if the gateway URL comes through truncated, write it with an escape: wss:/$()/your-gateway-host/ws.
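Config.swift presumably reads those keys back out of Info.plist at runtime. A hedged sketch of that pattern follows; the key names mirror Secrets.xcconfig, but the exact API of the repo's Config.swift is an assumption:

```swift
import Foundation

// Sketch: read build-time-injected secrets from Info.plist.
// Property names and the fatalError policy are illustrative.
enum Config {
    static func value(for key: String) -> String {
        guard let value = Bundle.main.object(forInfoDictionaryKey: key) as? String,
              !value.isEmpty else {
            fatalError("Missing \(key): did you create fable/Secrets.xcconfig?")
        }
        return value
    }

    static var gatewayURL: URL { URL(string: value(for: "GATEWAY_URL"))! }
    static var gatewayToken: String { value(for: "GATEWAY_TOKEN") }
    static var elevenLabsAPIKey: String { value(for: "ELEVENLABS_API_KEY") }
    static var elevenLabsVoiceID: String { value(for: "ELEVENLABS_VOICE_ID") }
}
```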

Build & Run

  1. Clone the repo
  2. Create fable/Secrets.xcconfig as above
  3. Open fable.xcodeproj in Xcode
  4. Select your target device or simulator
  5. Build and run (⌘R)

Microphone permission is required. Accept the system prompt on first launch.

Project Structure

fable/
├── fableApp.swift              # App entry point
├── ContentView.swift           # Root view + ViewModel ownership
├── Config.swift                # Secrets loader from Info.plist
├── Models/
│   └── FableResponse.swift     # Message, DisplayPayload, card types
├── ViewModels/
│   └── FableViewModel.swift    # State machine + service coordination
├── Services/
│   ├── GatewayService.swift    # WebSocket client
│   ├── SpeechService.swift     # Speech recognition
│   └── TTSService.swift        # ElevenLabs TTS
└── Views/
    ├── ConversationView.swift  # Main chat UI
    ├── MessageRow.swift        # Message bubble + card router
    ├── OrbView.swift           # Animated orb (unused)
    └── Cards/
        ├── CalendarCard.swift
        ├── ListCard.swift
        ├── CodeCard.swift
        └── MapCard.swift

Tech Stack

Concern          Technology
Language         Swift / SwiftUI
Speech-to-text   Apple Speech framework
Audio            AVFoundation
Networking       URLSession WebSocket
Concurrency      Swift async/await + Combine
AI backend       Claude via OpenClaw gateway
Text-to-speech   ElevenLabs API
