
bunqueue


High-performance job queue for Bun. Built for AI agents and automation.
Zero external dependencies. MCP-native. TypeScript-first.

Documentation · Benchmarks · npm


Why bunqueue?

| Library | Requires |
| --- | --- |
| BullMQ | Redis |
| Agenda | MongoDB |
| pg-boss | PostgreSQL |
| **bunqueue** | **Nothing** |
  • BullMQ-compatible API — Same Queue, Worker, QueueEvents
  • Zero dependencies — No Redis, no MongoDB
  • SQLite persistence — Survives restarts, WAL mode for concurrent access
  • Up to 286K ops/sec — verified benchmarks
  • MCP server included — AI agents get full control: scheduling, monitoring, DLQ, cron, rate limits

When to use bunqueue

Great for:

  • AI agents that need a scheduler — cron jobs, delayed tasks, retries, all via MCP
  • Agentic workflows — agents push jobs, workers process, agents monitor results
  • Single-server deployments
  • Prototypes and MVPs
  • Moderate to high workloads (up to 286K ops/sec)
  • Teams that want to avoid Redis operational overhead
  • Embedded use cases (CLI tools, edge functions, serverless)

Not ideal for:

  • Multi-region distributed systems requiring HA
  • Workloads that need automatic failover today
  • Systems already running Redis with existing infrastructure

Why not just use BullMQ?

If you're already running Redis, BullMQ is great — battle-tested and feature-rich.

bunqueue is for when you don't want to run Redis. SQLite with WAL mode handles surprisingly high throughput for single-node deployments (tested up to 286K ops/sec). You get persistence, priorities, delays, retries, cron jobs, and DLQ — without the operational overhead of another service.
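As a taste of that feature set, a job can carry BullMQ-style options when it is added. The option names below (`priority`, `delay`, `attempts`, `backoff`) follow BullMQ conventions and are assumptions here — check bunqueue's docs for the exact supported set.

```typescript
// Hypothetical job options in the BullMQ-compatible shape bunqueue advertises.
// Names follow BullMQ conventions; verify against bunqueue's documentation.
const opts = {
  priority: 10,                                   // relative priority (check docs for ordering convention)
  delay: 5_000,                                   // hold the job for 5 seconds before it becomes available
  attempts: 3,                                    // retry up to 3 times, then land in the DLQ
  backoff: { type: 'exponential', delay: 1_000 }, // spacing between retries
};

// With a Queue instance in scope (see the examples below):
// await queue.add('send-email', { to: 'user@example.com' }, opts);
console.log(opts);
```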

Install

```bash
bun add bunqueue
```

Requires the Bun runtime. Node.js is not supported.

Two Modes

bunqueue runs in two modes depending on your architecture:

| | Embedded | Server (TCP) |
| --- | --- | --- |
| How it works | Queue runs inside your process | Standalone server, clients connect via TCP |
| Setup | `bun add bunqueue` | `docker run` or `bunqueue start` |
| Performance | 286K ops/sec | 149K ops/sec |
| Best for | Single-process apps, CLIs, serverless | Multiple workers, separate producer/consumer |
| Scaling | Same process only | Multiple clients across machines |

Embedded Mode

Everything runs in your process. No server, no network, no setup.

```typescript
import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('emails', { embedded: true });

const worker = new Worker(
  'emails',
  async (job) => {
    console.log('Processing:', job.data);
    return { sent: true };
  },
  { embedded: true }
);

await queue.add('welcome', { to: 'user@example.com' });
```

Server Mode (TCP)

Run bunqueue as a standalone server. Multiple workers and producers connect via TCP.

```bash
# Start with persistent data
docker run -d -p 6789:6789 -p 6790:6790 \
  -v bunqueue-data:/app/data \
  ghcr.io/egeominotti/bunqueue:latest
```

Connect from your app:

```typescript
import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('tasks', { connection: { host: 'localhost', port: 6789 } });

const worker = new Worker(
  'tasks',
  async (job) => {
    return { done: true };
  },
  { connection: { host: 'localhost', port: 6789 } }
);

await queue.add('process', { data: 'hello' });
```

Performance

SQLite handles surprisingly high throughput for single-node deployments:

| Mode | Peak Throughput | Use Case |
| --- | --- | --- |
| Embedded | 286K ops/sec | Same process |
| TCP | 149K ops/sec | Distributed workers |

Run bun run bench to verify on your hardware. Full benchmark methodology →

Built for AI Agents (MCP Server)

HTTP Handler Flow: Cron/Add Job → Queue → Embedded Worker → HTTP API → Job Result

bunqueue is the first job queue with native MCP support. AI agents get a full-featured scheduler, task queue, and monitoring system — no glue code needed.

HTTP Handlers solve a fundamental problem: an AI agent can schedule jobs and manage queues, but it cannot run a persistent worker. When the agent registers an HTTP handler, bunqueue spawns an embedded Worker that continuously pulls jobs and calls your HTTP endpoint. Responses are saved as results. Failed calls retry automatically via DLQ.
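A minimal sketch of such an endpoint, assuming the handler receives the job as JSON (`{ id, name, data }`) and that whatever JSON it returns is stored as the job result — the real payload contract is defined in bunqueue's HTTP-handler docs:

```typescript
// Pure handler logic, kept separate from the server so it is easy to test.
// The job shape below is an assumption, not bunqueue's documented contract.
function handleJob(job: { id: string; name: string; data: unknown }) {
  // Do the actual work here; the returned JSON becomes the stored job result.
  return { handled: job.name, ok: true };
}

// Serve it with Bun's built-in HTTP server (guarded so the snippet also
// runs outside the Bun runtime without crashing).
declare const Bun: any;
if (typeof Bun !== 'undefined') {
  Bun.serve({
    port: 8080,
    async fetch(req: Request) {
      const job = await req.json();
      return Response.json(handleJob(job));
    },
  });
}
```

An agent would then register `http://localhost:8080` (hypothetical URL) as the handler, and bunqueue's embedded Worker would POST each job to it.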

What AI agents can do with bunqueue:

  • Schedule tasks — cron jobs, delayed execution, recurring workflows
  • Manage job pipelines — push jobs, monitor progress, retry failures
  • Full pull/ack/fail cycle — agents can consume and process jobs directly
  • Monitor everything — stats, memory, Prometheus metrics, logs, DLQ
  • Control flow — pause/resume queues, set rate limits, manage concurrency
  • 73 MCP tools + 5 resources + 3 prompts — complete control over every feature
  • HTTP handlers — register a URL, bunqueue auto-processes jobs via HTTP calls
```bash
# One command to connect Claude Code
claude mcp add bunqueue -- bunx bunqueue-mcp
```

For Claude Desktop, Cursor, or Windsurf, add to your MCP config:

```json
{
  "mcpServers": {
    "bunqueue": {
      "command": "bunx",
      "args": ["bunqueue-mcp"]
    }
  }
}
```

Example agent interactions:

  • "Schedule a cleanup job every day at 3 AM"
  • "Add 500 email jobs to the queue with priority 10"
  • "Show me all failed jobs and retry them"
  • "Set rate limit to 50/sec on the api-calls queue"
  • "What's the memory usage and queue throughput?"
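The first interaction above ("every day at 3 AM") corresponds to a repeatable job. A hedged sketch using BullMQ's `repeat` option shape, which bunqueue's cron support is assumed to mirror — verify the exact field names in the docs:

```typescript
// Hypothetical repeatable-job options; the `repeat.pattern` cron field
// follows BullMQ conventions and is an assumption here.
const repeatOpts = {
  repeat: { pattern: '0 3 * * *' }, // standard cron syntax: every day at 03:00
};

// With a Queue instance in scope:
// await queue.add('cleanup', {}, repeatOpts);
console.log(repeatOpts.repeat.pattern);
```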

Supports embedded (local SQLite) and TCP (remote server) modes. Full MCP documentation →

Monitoring

```bash
# Start with Prometheus + Grafana
docker compose --profile monitoring up -d
```

Documentation

Read the full documentation →

License

MIT