Liquid

Voice-first AI agent platform powered by DigitalOcean Gradient™ AI
Speak naturally. Get real-time spoken responses. Text always available.

License · Python 3.11+ · Node 20+ · DigitalOcean Gradient AI · React 18


Overview

Liquid is a voice-first AI agent platform built on the DigitalOcean Gradient™ AI Platform. Instead of typing commands and reading text, you speak, and Liquid speaks back in real time. A full text interface is always available alongside voice.

Under the hood, Liquid runs a session-based multi-agent framework with a real-time graph executor, encrypted credential management, and live observability via Server-Sent Events, all powered by DigitalOcean's Gradient Serverless Inference and Agent Inference APIs.


🌊 DigitalOcean Gradient™ AI Integration

Liquid is deeply integrated with DigitalOcean's Gradient AI Platform across the full stack:

  • Serverless Inference: All LLM calls route through Gradient's serverless endpoint (inference.do-ai.run/v1) using models like llama3.3-70b-instruct, deepseek-r1, and others; no GPU provisioning needed
  • Agent Inference: Managed agents with knowledge bases, guardrails, and multi-agent routing via the Gradient Agent API
  • Gradient Python SDK: Native gradient SDK integration (pip install gradient) for both sync and async inference with streaming SSE support
  • App Platform Deployment: One-click deploy to DigitalOcean App Platform with auto-build from GitHub, health checks, and secret management
  • Single API Key: One GRADIENT_MODEL_ACCESS_KEY accesses all supported models (OpenAI, Anthropic, Meta, Mistral, DeepSeek) through a unified endpoint
  • Data Privacy: When using open-source models, data stays within DigitalOcean infrastructure and is never used for training

How It Works with Gradient

User speaks → Browser captures audio → WebSocket streams to backend
           → Backend calls Gradient Serverless Inference API
           → Model returns response via streaming SSE
           → Audio plays back in real time + transcript in chat
           → Queen agent orchestrates workers via Gradient Agent Inference

The Gradient Python SDK (from gradient import Gradient) powers all LLM interactions:

  • Serverless Inference for direct model calls with GRADIENT_MODEL_ACCESS_KEY
  • Agent Inference for managed agent workflows with GRADIENT_AGENT_ACCESS_KEY
  • Streaming via SSE for real-time token delivery to the frontend

✨ Features

🎙️  Voice-First Interaction: Click the mic, speak, and hear real-time spoken responses powered by Gradient AI inference
⌨️  Text Always Available: Type in the chat input at any time; voice and text work side by side
⚡  Low-Latency Streaming: Bidirectional audio via WebSocket with Gradient SSE streaming for token delivery
🤖  Multi-Agent Graphs: Define goal-driven agents as node graphs; a Queen agent orchestrates workers via Gradient Agent Inference
🔄  Self-Improving Agents: On failure, the framework captures data, evolves the graph, and redeploys
📡  Real-Time Observability: Live SSE streaming of agent execution, node states, and decisions
🧑‍💻  Human-in-the-Loop: Intervention nodes pause execution for human input with configurable timeouts
🔐  Credential Management: Encrypted API key storage; add once, available everywhere
🌊  DigitalOcean Native: Deploy to App Platform, inference via Gradient, secrets managed by DO; fully integrated

🛠 Tech Stack

AI Inference: DigitalOcean Gradient™ AI (Serverless Inference + Agent Inference)
Gradient SDK: pip install gradient / npm install @digitalocean/gradient
Agent Runtime: Python 3.11 · aiohttp · async graph executor
Frontend: React 18 · TypeScript · Tailwind CSS · Vite
Streaming: WebSocket (voice) · Server-Sent Events (agent events)
LLM Models: Llama 3.3, DeepSeek, Mistral, GPT-4o, Claude, all via a single Gradient endpoint
Deployment: DigitalOcean App Platform · Docker
Package Manager: uv

🚀 Quick Start

Prerequisites

  • Python 3.11+
  • Node.js 20+
  • A DigitalOcean account with Gradient AI access β€” sign up here
  • A Gradient Model Access Key β€” create one here

1. Clone and install

git clone https://github.com/Agentscreator/Liquid-AI.git
cd Liquid-AI
./quickstart.sh

The quickstart script sets up:

  • Agent runtime and graph executor (core/.venv)
  • MCP tools for agent capabilities (tools/.venv)
  • Encrypted credential store (~/.hive/credentials)
  • All Python dependencies via uv

2. Add your Gradient API key

Create a .env file in the project root:

# Required: Gradient Serverless Inference key
echo "GRADIENT_MODEL_ACCESS_KEY=your_key_here" > .env

# Optional: Gradient Agent Inference (for managed agents)
echo "GRADIENT_AGENT_ACCESS_KEY=your_agent_key" >> .env
echo "GRADIENT_AGENT_ENDPOINT=your_agent_endpoint" >> .env

Or add it through the UI after starting: Settings → Credentials → Add GRADIENT_MODEL_ACCESS_KEY

3. Configure Gradient as your LLM provider

Create or edit ~/.hive/configuration.json:

{
  "llm": {
    "provider": "gradient",
    "model": "llama3.3-70b-instruct",
    "api_key_env_var": "GRADIENT_MODEL_ACCESS_KEY"
  }
}

Available Gradient models include: llama3.3-70b-instruct, deepseek-r1, mistral-small-3.1-24b-instruct, and many more.
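If you prefer to script this step, the configuration file shown above can be written programmatically. The helper below is a hypothetical convenience, not part of Liquid; the JSON structure is taken directly from the example.

```python
import json
from pathlib import Path

def write_llm_config(path: Path, model: str = "llama3.3-70b-instruct") -> dict:
    """Write a minimal configuration.json selecting Gradient as the LLM
    provider, mirroring the structure shown above. Creates parent
    directories (e.g. ~/.hive/) if they do not exist."""
    config = {
        "llm": {
            "provider": "gradient",
            "model": model,
            "api_key_env_var": "GRADIENT_MODEL_ACCESS_KEY",
        }
    }
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2))
    return config

# Usage: write_llm_config(Path.home() / ".hive" / "configuration.json")
```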

4. Start the server

cd core
uv sync
uv run hive server

Open http://localhost:8000 and you're ready to go.


πŸŽ™οΈ Voice

  1. Open a session in the workspace
  2. Click the mic button next to the text input
  3. Speak; the mic pulses red while listening
  4. Liquid responds with audio; the speaker icon shows while it speaks
  5. Click the mic again (or the stop button) to end the voice session

Voice transcripts appear in the chat alongside text messages, so you always have a full written record.


🌊 Deploy to DigitalOcean

Option A: App Platform (recommended)

# Install doctl CLI
brew install doctl    # macOS
doctl auth init       # authenticate

# Deploy with the included app spec
doctl apps create --spec do-app-spec.yaml

Or use the deploy script:

GRADIENT_MODEL_ACCESS_KEY=your_key ./deploy-digitalocean.sh

Option B: Docker on a Droplet

docker build -t liquid .
docker run -p 8787:8787 \
  -e GRADIENT_MODEL_ACCESS_KEY=your_key \
  liquid

πŸ— Architecture

Liquid uses a Queen + Worker agent pattern, with all inference routed through DigitalOcean Gradient:

  • Queen: Orchestrates conversation, delegates tasks, monitors worker output via Gradient Agent Inference
  • Workers: Execute specific goals as node graphs with tools, memory, and LLM access via Gradient Serverless Inference
  • Judge: Evaluates worker output against defined criteria and escalates failures
  • Event Bus: Pub/sub system streaming 25+ event types to the frontend in real time
  • Gradient Provider: Custom LLM provider (framework/llm/gradient.py) wrapping the official Gradient Python SDK
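To make the Event Bus role concrete, here is a minimal in-process pub/sub sketch. It is illustrative only, not Liquid's actual implementation: handlers subscribe to an event type, and each publish fans out to every matching handler, the same shape a runtime would use to push node-state and decision events to SSE subscribers.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy pub/sub bus: subscribers register per event type,
    publish() fans the event out to every matching handler."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Tag each event with its type so downstream consumers (e.g. an
        # SSE endpoint) can route it without extra bookkeeping.
        for handler in self._handlers[event_type]:
            handler({"type": event_type, **payload})

bus = EventBus()
received: list[dict] = []
bus.subscribe("node.state_changed", received.append)
bus.publish("node.state_changed", {"node": "worker_1", "state": "running"})
```

The event type name `node.state_changed` is made up for the example; the real framework streams 25+ event types whose names are defined in the runtime.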

πŸ“ Project Structure

Liquid-AI/
├── core/
│   ├── framework/            # Agent runtime, graph executor, API server
│   │   ├── server/           # aiohttp routes (REST + SSE + WebSocket)
│   │   ├── llm/              # LLM providers (Gradient, LiteLLM, Anthropic)
│   │   │   └── gradient.py   # DigitalOcean Gradient™ AI provider
│   │   └── runtime/          # Graph executor, event bus, session management
│   └── frontend/             # React + TypeScript UI
│       └── src/
│           ├── components/   # ChatPanel, VoiceButton, AgentGraph, TopBar…
│           ├── hooks/        # useVoice, useSSE, useMultiSSE
│           └── pages/        # Home, Workspace, My Agents
├── tools/                    # MCP tool server
├── exports/                  # Your saved agents
├── examples/                 # Template agents
├── deploy-digitalocean.sh    # DigitalOcean App Platform deploy script
├── do-app-spec.yaml          # App Platform specification
├── docs/                     # Architecture docs and guides
└── .env                      # Your API keys (gitignored)

βš™οΈ Configuration

GRADIENT_MODEL_ACCESS_KEY (required): DigitalOcean Gradient Serverless Inference key
GRADIENT_AGENT_ACCESS_KEY (optional): Gradient Agent Inference key (for managed agents)
GRADIENT_AGENT_ENDPOINT (optional): Gradient Agent endpoint URL
GOOGLE_API_KEY (optional): Gemini Live API access for voice features
ANTHROPIC_API_KEY (optional): Enables Claude models (also available via Gradient)
OPENAI_API_KEY (optional): Enables GPT models (also available via Gradient)
HIVE_CREDENTIAL_KEY (auto-generated): Encrypts the credential store
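A startup routine might validate these variables before launching the server. The helper below is a hypothetical sketch based on the table above, not Liquid code; it fails fast when the one required key is missing and collects whichever optional keys are set.

```python
REQUIRED = ["GRADIENT_MODEL_ACCESS_KEY"]
OPTIONAL = [
    "GRADIENT_AGENT_ACCESS_KEY",
    "GRADIENT_AGENT_ENDPOINT",
    "GOOGLE_API_KEY",
    "ANTHROPIC_API_KEY",
    "OPENAI_API_KEY",
]

def load_settings(env: dict[str, str]) -> dict[str, str]:
    """Validate the environment table above: raise early if a required
    variable is absent, then return only the variables that are set.
    Pass os.environ in real use; a plain dict keeps this testable."""
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED + OPTIONAL if env.get(name)}
```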

🧩 Building Agents

Agents live in exports/ as Python packages. Each agent defines a node graph in graph_spec.py:

graph = GraphSpec(
    nodes=[
        Node(id="my_node", system_prompt="You are a helpful assistant."),
    ]
)

Or describe the agent you want in the home input; the Queen agent generates the graph and wiring automatically, using Gradient inference to power every step.
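For orientation, the `GraphSpec` and `Node` used in the snippet above could be modeled roughly like this. These dataclasses are illustrative stand-ins with field names guessed from the example, not the framework's real classes:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Stand-in for a graph node: an id plus the system prompt
    driving its LLM calls (field names taken from the snippet above)."""
    id: str
    system_prompt: str

@dataclass
class GraphSpec:
    """Stand-in for an agent's node graph specification."""
    nodes: list[Node] = field(default_factory=list)

    def node_ids(self) -> list[str]:
        return [n.id for n in self.nodes]

graph = GraphSpec(
    nodes=[Node(id="my_node", system_prompt="You are a helpful assistant.")]
)
```

The real classes live in the framework and carry more wiring (tools, edges, memory); consult the template agents in examples/ for authoritative usage.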


🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/my-feature)
  3. Commit your changes
  4. Push and open a Pull Request

See CONTRIBUTING.md for detailed guidelines.


🔒 Security

For security concerns, see SECURITY.md.

Never commit your .env file or API keys. The .env file is gitignored by default.


📄 License

Apache License 2.0; see LICENSE for details.

About

Liquid AI transforms natural language goals into autonomous multi-agent workflows that run on DigitalOcean Gradient AI infrastructure.
