# Sequencer: Visual AI Agent Workflow Orchestrator

*(macOS only for now; cross-platform support coming soon)*
Sequencer is a local-first workflow orchestrator for chaining prompts, agents, and OpenAI-compatible LLM calls into repeatable pipelines.
Most developers use AI agents (like Cline or Aider) in a linear chat. Sequencer moves you to an assembly line model:
- Design: Create a sequence of tasks (prompts).
- Assign: Choose the best agent for each specific task.
- Start Sequence: Execute the entire pipeline in one click, with real-time status tracking for every step.
- Visual Pipeline Editor: Drag-and-drop interface to reorder tasks and refine your workflow.
- Multi-Agent Coordination: Assign different agents to different steps within a single project sequence.
- Hybrid LLM Support: Route requests through local servers (LM Studio) for privacy and cost, or connect to enterprise APIs for maximum intelligence.
- Real-time Orchestration: Monitor the state of your pipeline (`Pending`, `Running`, `Done`/`Failed`) as it executes.
- Project-Based Management: Organize different sequences into dedicated projects.
- Transparent Logging: Every exchange is captured in JSON format for audit and optimization.
I found myself manually coordinating workflows between LM Studio, coding agents, local models, and scripts. Repeating the same multi-step AI tasks became tedious.
Sequencer is my attempt to turn those workflows into autonomous pipelines that work across both local and cloud-based models.
## Table of Contents

- [Installation](#installation)
- [Quick Start](#quick-start)
- [Docker Deployment](#docker-deployment)
- [Configuration](#configuration)
- [Using the Sequencer](#using-the-sequencer)
- [OpenClaw Integration](#openclaw-integration)
- [API Reference](#api-reference)
- [Project Structure](#project-structure)
- [Development](#development)
- [Future Roadmap](#future-roadmap)
## Installation

### Prerequisites

- Node.js 18+ and npm
- (Optional) Cline CLI for Cline agent tasks
- (Optional) Aider CLI for Aider agent tasks
## Quick Start

1. Clone the repository:

   ```bash
   git clone https://github.com/mdoty4/sequencer.git
   cd sequencer
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Configure environment variables by creating a `.env` file in the root directory:

   ```bash
   PORT=4321
   LM_STUDIO_URL=http://localhost:1234/v1
   ```

4. Start the server:

   ```bash
   npm start
   ```

The server will start on `http://localhost:4321`.
## Docker Deployment

Run Sequencer in a container with a single command:

```bash
docker compose up --build -d
```

The server will be available at `http://localhost:4321`.

Notes:

- Logs are persisted in the `./logs` directory on the host
- The `.env` file is mounted read-only into the container
- A health check is configured at `/health`
- Agent CLI tools (`cline`, `aider`) must be available inside the container for agent tasks to execute; for UI-only usage the container works as-is
Common commands:

```bash
# Start in background
docker compose up -d

# Stop
docker compose down

# Rebuild and start
docker compose up --build -d

# View logs
docker compose logs -f

# Remove container and volumes
docker compose down -v
```

## Configuration

| Variable | Default | Description |
|---|---|---|
| `PORT` | `4321` | Port the Sequencer server listens on |
| `LM_STUDIO_URL` | `http://localhost:1234/v1` | Base URL for the LM Studio API |
Configure your agents through the web UI under Settings:
- LLM Settings: API base URL, API key, model selection
- Telegram: Bot token and chat ID for Telegram agent
- Per-project overrides: Each project can have its own LLM configuration
To route an agent's requests through the Sequencer:
- Set the API Provider to `OpenAI Compatible`
- Set the Base URL to `http://localhost:4321/`
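As a concrete sketch, Aider can target an OpenAI-compatible endpoint via environment variables. The model name below is a placeholder for whatever model your proxy serves, and whether the base URL needs the `/v1` suffix depends on how your agent builds request paths; consult your agent's docs:

```bash
# Route Aider's OpenAI-compatible traffic through Sequencer.
export OPENAI_API_BASE=http://localhost:4321/v1   # assumed /v1 prefix for OpenAI-style clients
export OPENAI_API_KEY=sk-local                    # placeholder; local proxies often ignore it
aider --model openai/my-local-model               # "my-local-model" is a placeholder
```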
## Using the Sequencer

- Navigate to the Projects tab and activate a project
- In the Pipeline editor, add prompt rows
- Assign an agent (Aider or Cline) to each row
- Toggle the "Orchestrate" switch for the tasks you want to include in the sequence

Click ▶ Start Sequence. Sequencer will execute the selected prompts in order, managing the hand-off between agents and tracking progress in real time.
## OpenClaw Integration

Sequencer includes a `skill.md` file that allows OpenClaw (and other AI agents) to discover and interact with Sequencer automatically. With the skill file, OpenClaw can:
- Start, stop, and restart the Sequencer server
- Create and manage projects and pipelines
- Assign agents and execute orchestration workflows
- Troubleshoot failed pipelines and review session logs
To use it with OpenClaw, point the agent at the `skill.md` file in the project root. OpenClaw will use the defined workflows and API endpoints to control Sequencer programmatically.
## API Reference

Base URL: `http://localhost:4321`
### Health

| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/health` | Server health status with uptime and version |
Response:

```json
{
  "status": "ok",
  "uptime": 1234.56,
  "timestamp": "2025-04-27T20:00:00.000Z",
  "version": "1.0.0"
}
```
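A quick way to confirm the server is up (assuming the default port):

```bash
# Query the health endpoint; expects the JSON response shown above.
curl http://localhost:4321/health
```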
### Projects

| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/projects` | List all projects and the active project |
| `POST` | `/api/projects` | Create a new project |
| `PUT` | `/api/projects/:id` | Update an existing project |
| `DELETE` | `/api/projects/:id` | Delete a project |
| `POST` | `/api/projects/active` | Set the active project |
Create Project Request:

```json
{
  "name": "My Project",
  "workingDirectory": "../my-project",
  "aiderConfig": { }
}
```
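For example, creating a project from the command line using the documented request body (values are placeholders):

```bash
# Create a project; adjust name and workingDirectory to your setup.
curl -X POST http://localhost:4321/api/projects \
  -H "Content-Type: application/json" \
  -d '{"name": "My Project", "workingDirectory": "../my-project", "aiderConfig": {}}'
```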
### Tasks

| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/project/:id/tasks` | Get tasks for a project |
| `POST` | `/api/project/:id/tasks` | Update tasks for a project |
| `POST` | `/api/project/:id/tasks/orchestrate` | Start orchestration with selected tasks |
| `POST` | `/api/project/:id/tasks/reset` | Reset all task states to pending |
| `POST` | `/api/project/:id/tasks/cancel` | Cancel running orchestration |
| `GET` | `/api/project/:id/tasks/stream` | SSE stream for real-time orchestration events |
Orchestrate Request:

```json
{
  "taskIndices": [0, 1, 2]
}
```
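A sketch of driving orchestration from the command line (`PROJECT_ID` is a placeholder; use an id returned by `/api/projects`):

```bash
# Start orchestration for the first three tasks of a project...
curl -X POST http://localhost:4321/api/project/PROJECT_ID/tasks/orchestrate \
  -H "Content-Type: application/json" \
  -d '{"taskIndices": [0, 1, 2]}'

# ...then follow the real-time SSE event stream (-N disables buffering).
curl -N http://localhost:4321/api/project/PROJECT_ID/tasks/stream
```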
### Task Actions

| Method | Endpoint | Description |
|---|---|---|
| `POST` | `/api/project/:id/tasks/:taskIndex/send` | Send task to the configured agent |
| `POST` | `/api/project/:id/tasks/:taskIndex/aider` | Send task specifically to Aider |
| `POST` | `/api/project/:id/tasks/:taskIndex/init` | Initialize git in the working directory |
### Chat & Agent Execution

| Method | Endpoint | Description |
|---|---|---|
| `POST` | `/api/chat` | Stream a response from the configured LLM (SSE) |
| `POST` | `/api/cline/headless` | Run the Cline CLI in headless mode with streaming (SSE) |
### Config

| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/config` | Get current Aider and Telegram config |
| `POST` | `/api/config` | Save Aider and Telegram config |
| `POST` | `/api/telegram/test` | Send a test message via Telegram |
### Logs

| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/logs` | List all log sessions |
| `GET` | `/api/logs/:id` | Get events for a specific log session |
| `DELETE` | `/api/logs/:id` | Delete a specific log |
| `POST` | `/api/logs/bulk-delete` | Delete multiple logs at once |
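For example (`SESSION_ID` is a placeholder returned by the list call):

```bash
# List all log sessions, then fetch the events for one of them.
curl http://localhost:4321/api/logs
curl http://localhost:4321/api/logs/SESSION_ID
```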
### Status

| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/proxy/status` | Check if LM Studio is reachable |
| `GET` | `/api/status` | Check proxy/LM Studio status |
### OpenAI-Compatible Proxy

| Method | Endpoint | Description |
|---|---|---|
| `POST` | `/v1/chat/completions` | Proxy for OpenAI-compatible chat completions |
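A minimal smoke test of the proxy, assuming LM Studio is running with a model loaded (`local-model` is a placeholder for that model's identifier):

```bash
# Send an OpenAI-compatible chat completion through Sequencer's proxy.
curl http://localhost:4321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Say hello"}]}'
```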
## Project Structure

```
sequencer/
├── sequencer.js           # Main server: Express routes, agent orchestration, execution engine
├── app.js                 # Additional app logic
├── skill.md               # OpenClaw skill file for agent integration
├── index.html             # Frontend entry point
├── styles.css             # Application styles
├── prompts.json           # Project state, tasks, and configuration storage
├── .env                   # Environment variables (PORT, LM_STUDIO_URL)
├── Dockerfile             # Docker image definition
├── docker-compose.yml     # Docker Compose configuration
├── modules/               # Frontend JavaScript modules
│   ├── chat.js            # Chat interface logic
│   ├── core.js            # Core system operations
│   ├── dom-helpers.js     # DOM manipulation utilities
│   ├── json-viewer.js     # JSON log viewer
│   ├── pipeline.js        # Visual pipeline editor
│   ├── project-editor.js  # Project editing UI
│   ├── projects.js        # Project management
│   ├── search.js          # Search functionality
│   ├── sessions.js        # Session management
│   ├── settings.js        # Settings panel
│   └── terminal.js        # Terminal output display
└── logs/                  # Agent exchange logs (JSONL format)
```
## Development

Auto-restart on code changes with nodemon:

```bash
npm run dev
```

Data storage:

- `prompts.json`: Stores all projects, tasks, agent config, and execution state
- `logs/*.json`: Agent session logs in JSONL format (one JSON object per line)

Session logs follow the pattern `{projectTitle}_{agentName}_task_{taskIndex}_{timestamp}.json`.

Example: `testbench_cline_task_0_2026-04-27T02-22-13.json`
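Since each log is JSONL, standard line-oriented tools work on it; for example, with `jq` (filename taken from the example above):

```bash
# Pretty-print every event in a session log, one JSON object per line.
jq . logs/testbench_cline_task_0_2026-04-27T02-22-13.json
```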
## Future Roadmap

- Enterprise Gateway: Native support for OpenAI, Anthropic, and Azure API keys.
- Template Library: Shareable "Golden Pipelines" for common development tasks.
- Conditional Logic: Branching pipelines based on the output of a previous step.
- MCP Server: Expose Sequencer capabilities as MCP tools for AI agents.
## License

MIT - see `LICENSE` for details.
## Author

Michael Doty

- Email: michaeldoty.pro@gmail.com
- GitHub: [mdoty4](https://github.com/mdoty4)