Voice-first AI agent platform powered by DigitalOcean Gradient™ AI
Speak naturally. Get real-time spoken responses. Text always available.
Liquid is a voice-first AI agent platform built on the DigitalOcean Gradient™ AI Platform. Instead of typing commands and reading text, you speak — and Liquid speaks back in real time. A full text interface is always available alongside voice.
Under the hood, Liquid runs a session-based multi-agent framework with a real-time graph executor, encrypted credential management, and live observability via Server-Sent Events — all powered by DigitalOcean's Gradient Serverless Inference and Agent Inference APIs.
Liquid is deeply integrated with DigitalOcean's Gradient AI Platform across the full stack:
| Integration Point | How Liquid Uses It |
|---|---|
| Serverless Inference | All LLM calls route through Gradient's serverless endpoint (inference.do-ai.run/v1) using models like llama3.3-70b-instruct, deepseek-r1, and others — no GPU provisioning needed |
| Agent Inference | Managed agents with knowledge bases, guardrails, and multi-agent routing via the Gradient Agent API |
| Gradient Python SDK | Native gradient SDK integration (pip install gradient) for both sync and async inference with streaming SSE support |
| App Platform Deployment | One-click deploy to DigitalOcean App Platform with auto-build from GitHub, health checks, and secret management |
| Single API Key | One GRADIENT_MODEL_ACCESS_KEY accesses all supported models (OpenAI, Anthropic, Meta, Mistral, DeepSeek) through a unified endpoint |
| Data Privacy | When using open-source models, data stays within DigitalOcean infrastructure — never used for training |
```
User speaks → Browser captures audio → WebSocket streams to backend
  → Backend calls Gradient Serverless Inference API
  → Model returns response via streaming SSE
  → Audio plays back in real time + transcript in chat
  → Queen agent orchestrates workers via Gradient Agent Inference
```
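The streaming leg of this flow hinges on decoding SSE frames into tokens as they arrive. A minimal sketch of that decoding step — illustrative only, not Liquid's actual implementation, and assuming OpenAI-style chat-completion chunks with a `[DONE]` terminator (a common convention):

```python
import json

def iter_sse_tokens(lines):
    """Yield content tokens from OpenAI-style SSE 'data:' frames.

    Assumes each frame carries a chat-completion chunk and that
    'data: [DONE]' terminates the stream.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives, comments, event names
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Example frames as they might arrive over the wire:
frames = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_sse_tokens(frames)))  # -> Hello
```

In the real pipeline the decoded tokens would be forwarded over the WebSocket to the browser for playback and transcript rendering.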
The Gradient Python SDK (`from gradient import Gradient`) powers all LLM interactions:

- Serverless Inference for direct model calls with `GRADIENT_MODEL_ACCESS_KEY`
- Agent Inference for managed agent workflows with `GRADIENT_AGENT_ACCESS_KEY`
- Streaming via SSE for real-time token delivery to the frontend
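Since the serverless endpoint is OpenAI-compatible, a chat request reduces to a small JSON payload. A hedged sketch of building one — the field names follow the OpenAI chat-completions convention, which this endpoint is assumed to accept; the actual call is shown only as a comment:

```python
import os

GRADIENT_BASE_URL = "https://inference.do-ai.run/v1"  # endpoint from the table above

def build_chat_request(prompt, model="llama3.3-70b-instruct", stream=True):
    """Build an OpenAI-compatible chat payload for Gradient
    Serverless Inference (field names assumed, not confirmed)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # True -> tokens arrive as SSE chunks
    }

payload = build_chat_request("Say hello in one word.")
# A real call would attach the key and POST the payload, e.g.:
# requests.post(f"{GRADIENT_BASE_URL}/chat/completions",
#               headers={"Authorization": f"Bearer {os.environ['GRADIENT_MODEL_ACCESS_KEY']}"},
#               json=payload, stream=True)
print(payload["model"])  # -> llama3.3-70b-instruct
```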
| Feature | Description |
|---|---|
| Voice-First Interaction | Click the mic, speak, and hear real-time spoken responses powered by Gradient AI inference |
| Text Always Available | Type in the chat input at any time — voice and text work side by side |
| Low-Latency Streaming | Bidirectional audio via WebSocket with Gradient SSE streaming for token delivery |
| Multi-Agent Graphs | Define goal-driven agents as node graphs; a Queen agent orchestrates workers via Gradient Agent Inference |
| Self-Improving Agents | On failure, the framework captures data, evolves the graph, and redeploys |
| Real-Time Observability | Live SSE streaming of agent execution, node states, and decisions |
| Human-in-the-Loop | Intervention nodes pause execution for human input with configurable timeouts |
| Credential Management | Encrypted API key storage — add once, available everywhere |
| DigitalOcean Native | Deploy to App Platform, inference via Gradient, secrets managed by DO — fully integrated |
| Layer | Technology |
|---|---|
| AI Inference | DigitalOcean Gradient™ AI — Serverless Inference + Agent Inference |
| Gradient SDK | pip install gradient / npm install @digitalocean/gradient |
| Agent Runtime | Python 3.11 · aiohttp · async graph executor |
| Frontend | React 18 · TypeScript · Tailwind CSS · Vite |
| Streaming | WebSocket (voice) · Server-Sent Events (agent events) |
| LLM Models | Llama 3.3, DeepSeek, Mistral, GPT-4o, Claude — all via single Gradient endpoint |
| Deployment | DigitalOcean App Platform · Docker |
| Package Manager | uv |
- Python 3.11+
- Node.js 20+
- A DigitalOcean account with Gradient AI access — sign up here
- A Gradient Model Access Key — create one here
```bash
git clone https://github.com/Agentscreator/Liquid-AI.git
cd Liquid-AI
./quickstart.sh
```

The quickstart script sets up:
- Agent runtime and graph executor (`core/.venv`)
- MCP tools for agent capabilities (`tools/.venv`)
- Encrypted credential store (`~/.hive/credentials`)
- All Python dependencies via `uv`
Create a `.env` file in the project root:

```bash
# Required: Gradient Serverless Inference key
echo "GRADIENT_MODEL_ACCESS_KEY=your_key_here" > .env

# Optional: Gradient Agent Inference (for managed agents)
echo "GRADIENT_AGENT_ACCESS_KEY=your_agent_key" >> .env
echo "GRADIENT_AGENT_ENDPOINT=your_agent_endpoint" >> .env
```

Or add it through the UI after starting: Settings → Credentials → Add `GRADIENT_MODEL_ACCESS_KEY`.
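The `.env` format written by those commands is plain `KEY=value` lines. If you ever need to read it without a helper library, an illustrative parser (not part of Liquid) is a few lines:

```python
def parse_env(text):
    """Parse simple KEY=value lines as written by the echo commands above.

    Illustrative only: ignores blank lines and '#' comments, strips
    surrounding double quotes from values.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = "GRADIENT_MODEL_ACCESS_KEY=your_key_here\n# a comment\n"
print(parse_env(sample))  # -> {'GRADIENT_MODEL_ACCESS_KEY': 'your_key_here'}
```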
Create or edit `~/.hive/configuration.json`:

```json
{
  "llm": {
    "provider": "gradient",
    "model": "llama3.3-70b-instruct",
    "api_key_env_var": "GRADIENT_MODEL_ACCESS_KEY"
  }
}
```

Available Gradient models include: `llama3.3-70b-instruct`, `deepseek-r1`, `mistral-small-3.1-24b-instruct`, and many more.
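Note that the config names an environment variable rather than embedding the key, so the key itself never lands in a config file. A sketch of how a consumer might resolve that indirection (function name and return shape are illustrative, not Liquid's actual code):

```python
import json
import os

def resolve_llm_config(config_text):
    """Read the 'llm' block and look up the API key from the
    environment variable it names (mirrors the layout above)."""
    cfg = json.loads(config_text)["llm"]
    return {
        "provider": cfg["provider"],
        "model": cfg["model"],
        "api_key": os.environ.get(cfg["api_key_env_var"]),
    }

config_text = """
{
  "llm": {
    "provider": "gradient",
    "model": "llama3.3-70b-instruct",
    "api_key_env_var": "GRADIENT_MODEL_ACCESS_KEY"
  }
}
"""
os.environ["GRADIENT_MODEL_ACCESS_KEY"] = "example-key"  # demo value only
print(resolve_llm_config(config_text)["model"])  # -> llama3.3-70b-instruct
```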
```bash
cd core
uv sync
uv run hive server
```

Open http://localhost:8000 and you're ready to go.
- Open a session in the workspace
- Click the mic button next to the text input
- Speak — the mic pulses red while listening
- Liquid responds with audio; the speaker icon shows while it speaks
- Click the mic again (or the stop button) to end the voice session
Voice transcripts appear in the chat alongside text messages, so you always have a full written record.
```bash
# Install doctl CLI
brew install doctl   # macOS
doctl auth init      # authenticate

# Deploy with the included app spec
doctl apps create --spec do-app-spec.yaml
```

Or use the deploy script:

```bash
GRADIENT_MODEL_ACCESS_KEY=your_key ./deploy-digitalocean.sh
```

Or run with Docker:

```bash
docker build -t liquid .
docker run -p 8787:8787 \
  -e GRADIENT_MODEL_ACCESS_KEY=your_key \
  liquid
```

Liquid uses a Queen + Worker agent pattern, with all inference routed through DigitalOcean Gradient:
| Component | Role |
|---|---|
| Queen | Orchestrates conversation, delegates tasks, monitors worker output via Gradient Agent Inference |
| Workers | Execute specific goals as node graphs with tools, memory, and LLM access via Gradient Serverless Inference |
| Judge | Evaluates worker output against defined criteria and escalates failures |
| Event Bus | Pub/sub system streaming 25+ event types to the frontend in real time |
| Gradient Provider | Custom LLM provider (framework/llm/gradient.py) wrapping the official Gradient Python SDK |
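The Event Bus row describes a pub/sub fan-out of typed events to subscribers. A minimal async sketch of that pattern — illustrative only, not Liquid's actual `EventBus`:

```python
import asyncio
from collections import defaultdict

class EventBus:
    """Tiny pub/sub sketch: handlers subscribe by event type and
    are awaited in order when an event of that type is published."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    async def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            await handler(payload)

async def main():
    bus = EventBus()
    seen = []

    async def on_node_state(payload):
        seen.append(payload)  # in Liquid this would feed the SSE stream

    bus.subscribe("node.state", on_node_state)
    await bus.publish("node.state", {"node": "my_node", "state": "running"})
    return seen

print(asyncio.run(main()))  # -> [{'node': 'my_node', 'state': 'running'}]
```

The real system streams such events to the frontend over SSE, which is how the live graph view stays in sync with execution.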
```
Liquid-AI/
├── core/
│   ├── framework/              # Agent runtime, graph executor, API server
│   │   ├── server/             # aiohttp routes (REST + SSE + WebSocket)
│   │   ├── llm/                # LLM providers (Gradient, LiteLLM, Anthropic)
│   │   │   └── gradient.py     # DigitalOcean Gradient™ AI provider
│   │   └── runtime/            # Graph executor, event bus, session management
│   └── frontend/               # React + TypeScript UI
│       └── src/
│           ├── components/     # ChatPanel, VoiceButton, AgentGraph, TopBar…
│           ├── hooks/          # useVoice, useSSE, useMultiSSE
│           └── pages/          # Home, Workspace, My Agents
├── tools/                      # MCP tool server
├── exports/                    # Your saved agents
├── examples/                   # Template agents
├── deploy-digitalocean.sh      # DigitalOcean App Platform deploy script
├── do-app-spec.yaml            # App Platform specification
├── docs/                       # Architecture docs and guides
└── .env                        # Your API keys (gitignored)
```
| Variable | Required | Description |
|---|---|---|
| `GRADIENT_MODEL_ACCESS_KEY` | Yes | DigitalOcean Gradient Serverless Inference key |
| `GRADIENT_AGENT_ACCESS_KEY` | No | Gradient Agent Inference key (for managed agents) |
| `GRADIENT_AGENT_ENDPOINT` | No | Gradient Agent endpoint URL |
| `GOOGLE_API_KEY` | No | Gemini Live API access for voice features |
| `ANTHROPIC_API_KEY` | No | Enables Claude models (also available via Gradient) |
| `OPENAI_API_KEY` | No | Enables GPT models (also available via Gradient) |
| `HIVE_CREDENTIAL_KEY` | Auto | Auto-generated key that encrypts the credential store |
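Since only one variable is strictly required, a startup check is short. A hypothetical helper (not part of Liquid) that reports which required variables are missing from an environment mapping:

```python
# Required vs. optional variables, per the table above.
REQUIRED = ["GRADIENT_MODEL_ACCESS_KEY"]
OPTIONAL = [
    "GRADIENT_AGENT_ACCESS_KEY", "GRADIENT_AGENT_ENDPOINT",
    "GOOGLE_API_KEY", "ANTHROPIC_API_KEY", "OPENAI_API_KEY",
]

def missing_required(env):
    """Return the required variable names absent or empty in `env`.

    Pass os.environ in real use; a plain dict works for testing.
    """
    return [name for name in REQUIRED if not env.get(name)]

print(missing_required({}))                                  # -> ['GRADIENT_MODEL_ACCESS_KEY']
print(missing_required({"GRADIENT_MODEL_ACCESS_KEY": "k"}))  # -> []
```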
Agents live in `exports/` as Python packages. Each agent defines a node graph in `graph_spec.py`:

```python
graph = GraphSpec(
    nodes=[
        Node(id="my_node", system_prompt="You are a helpful assistant."),
    ]
)
```

Or describe the agent you want in the home input — the Queen agent generates the graph and wiring automatically, using Gradient inference to power every step.
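To make the snippet above self-contained for experimentation, here is one plausible shape for `GraphSpec` and `Node` as plain dataclasses — hypothetical, inferred only from the fields used above; the real classes in Liquid's framework may differ:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Fields inferred from the graph_spec.py example; real Node
    # likely carries more (tools, edges, timeouts, ...).
    id: str
    system_prompt: str

@dataclass
class GraphSpec:
    nodes: list = field(default_factory=list)

    def node_ids(self):
        """Convenience accessor (illustrative, not a documented API)."""
        return [n.id for n in self.nodes]

graph = GraphSpec(
    nodes=[Node(id="my_node", system_prompt="You are a helpful assistant.")]
)
print(graph.node_ids())  # -> ['my_node']
```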
- Fork the repository
- Create a feature branch (`git checkout -b feature/my-feature`)
- Commit your changes
- Push and open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
For security concerns, see SECURITY.md.
> Never commit your `.env` file or API keys. The `.env` file is gitignored by default.
Apache License 2.0 — see LICENSE for details.