
# Code-First n8n


96% fewer tokens. 80% faster. 86% fewer nodes. Make n8n fully code-first -- from authoring through production.

This repo shows that two tools together cover the entire n8n workflow lifecycle without clicking through the visual editor:

- **n8nac** — code-first development (write, deploy, test, debug from the terminal)
- **code-mode** — code-first runtime (collapse N LLM calls into one sandboxed execution)

## Evolution

This project didn't start here. It evolved through three phases:

```text
n8n UI only (2025)            n8nac adoption (2025-2026)     code-mode integration (2026)
─────────────────────    →    ─────────────────────     →    ──────────────────────────────
Built workflows by clicking   Adopted n8n-as-code (n8nac).   Added code-mode runtime.
in the n8n editor. 16-node    TypeScript workflows, CLI      Sandboxed execution collapses
WFs, manual testing, no       push/pull/verify. Our POC      N LLM calls into one. Full
version control. Fragile.     repo: n8n-autopilot.           lifecycle proven here.
```

Phase 1 proved the pain — complex workflows are unmaintainable in a visual UI. Phase 2 solved dev-time by adopting n8nac for code-first authoring. Phase 3 solved runtime by adding code-mode — collapsing N tool calls into one sandboxed execution. This proving ground brings both together and measures the results.

## The Lifecycle

| Layer | Tool | Status |
|---|---|---|
| Write workflows | n8nac TypeScript (`.workflow.ts`) | Code-first today |
| Deploy workflows | `n8nac push` CLI | Code-first today |
| Test workflows | code-mode test harness | Benchmarked (POC-01) |
| Debug workflows | code-mode trace + replay | Built into engine |
| Runtime execution | code-mode sandbox | 96% token savings (benchmarks) |
| Visual UI | Verification only | Still there when you need it |
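
To make the "Write workflows" row concrete, here is a hypothetical `.workflow.ts` sketch. The object shape follows n8n's standard workflow JSON (`nodes`, `connections`, `typeVersion`); the code-mode node type string and all node parameters below are illustrative, and n8nac's real authoring API may expose a richer builder:

```typescript
// Hypothetical .workflow.ts sketch — a workflow as a typed object.
// Field names follow n8n's workflow JSON; node parameters are illustrative.
interface N8nNode {
  name: string;
  type: string;
  typeVersion: number;
  position: [number, number];
  parameters: Record<string, unknown>;
}

interface N8nWorkflow {
  name: string;
  nodes: N8nNode[];
  connections: Record<
    string,
    { main: Array<Array<{ node: string; type: string; index: number }>> }
  >;
}

export const workflow: N8nWorkflow = {
  name: "Customer Onboarding (code-mode)",
  nodes: [
    {
      name: "Webhook",
      type: "n8n-nodes-base.webhook", // standard n8n trigger node
      typeVersion: 1,
      position: [0, 0],
      parameters: { path: "onboarding" },
    },
    {
      name: "Code Mode",
      type: "n8n-nodes-utcp-codemode.codeMode", // illustrative node type name
      typeVersion: 1,
      position: [200, 0],
      parameters: {
        // One code chain replaces a row of single-purpose nodes.
        codeChain: "return tools.validate_email({ email: $json.email });",
      },
    },
  ],
  connections: {
    Webhook: { main: [[{ node: "Code Mode", type: "main", index: 0 }]] },
  },
};
```

From here, `n8nac push` deploys the definition to the n8n instance, and the visual UI is only needed to verify the result.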

## Benchmark Results

5-tool customer onboarding pipeline — validate email → classify company → score tier → generate message → format report:

| Metric | Traditional | Code-Mode | Savings |
|---|---|---|---|
| LLM API calls | 11 | 1 | 91% |
| Total tokens | ~18,000 | ~700 | 96% |
| Execution time | 12.5s | 2.5s | 80% |
| n8n nodes | 22 | 3 | 86% |
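
The Savings column follows directly from the raw numbers. A quick sanity check, using the table's own figures:

```typescript
// Recompute the Savings column from the Traditional vs Code-Mode figures.
const rows = [
  { metric: "LLM API calls", traditional: 11, codeMode: 1 },
  { metric: "Total tokens", traditional: 18000, codeMode: 700 },
  { metric: "Execution time (s)", traditional: 12.5, codeMode: 2.5 },
  { metric: "n8n nodes", traditional: 22, codeMode: 3 },
];

const savings = rows.map((r) => ({
  metric: r.metric,
  savingsPct: Math.round((1 - r.codeMode / r.traditional) * 100),
}));

console.log(savings);
// LLM API calls: 91%, tokens: 96%, time: 80%, nodes: 86%
```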

## POC Dashboard

Each POC proves one layer of the thesis with real data:

| POC | What It Proves | Status |
|---|---|---|
| 01 — Customer Onboarding | Runtime: 96% token savings | Complete (benchmarked) |
| 02 — MCP Filesystem | Real file operations through MCP in sandbox | Complete (verified) |
| 03 — Multi-Agent Dispatch | 16-node workflow → 3 nodes (81% reduction) | Complete (implemented) |
| 04 — Dev Loop | Full lifecycle in one prompt (11.5s, $0.05) | Complete (E2E proven) |
| 05 — E2E Sibling Tools | Zero-config tool discovery + execution | Complete (8/8 pass) |

## Repo Map

| Directory | What |
|---|---|
| `workflows/` | POC workflow directories (the proving ground) |
| `n8n-nodes-utcp-codemode/` | npm monorepo: @code-mode/core SDK + n8n community node |
| `code-mode-mcp-server/` | Standalone MCP server wrapping CodeModeEngine (separate npm package) |
| `repo/` | Cloned upstream UTCP code-mode library (read-only reference) |
| `n8n-autopilot/` | Cloned n8nac (read-only reference) |
| `playbook/` | Portable knowledge: lifecycle framing, benchmarks, architecture |
| `docs/` | ADRs and design documents |
| `template/` | Scaffold source for new workflow directories |
| `scripts/` | Tooling (new-workflow, check-secrets) |
| `archive/` | Original research artifacts from exploration phase |
| `null/` | Empty placeholder (artifact from early scaffolding) |

## Install

### n8n Community Node

```shell
# In n8n: Settings → Community Nodes → Install
n8n-nodes-utcp-codemode
```

Or via npm for self-hosted:

```shell
cd ~/.n8n
npm install n8n-nodes-utcp-codemode
# Restart n8n
```

### MCP Server (Claude Desktop, Cursor, any MCP client)

```shell
npm install -g code-mode-tools
```

Add to your MCP config:

```json
{
  "mcpServers": {
    "code-mode": {
      "command": "code-mode-tools",
      "args": ["--config", "tools.json"]
    }
  }
}
```

The same binary works as a CLI (see ADR-0001):

```shell
# Execute a code chain directly from terminal or any AI agent's Bash tool
code-mode-tools exec "const files = fs.filesystem_list_directory({ path: '.' }); return files;" --config tools.json

# Discover available tools (human-readable or --json for machines)
code-mode-tools list-tools --config tools.json
```

See code-mode-tools for full setup.

## How It Works

```text
Traditional AI Agent:
  LLM → tool call → LLM → tool call → LLM → tool call → ... → answer
  (11 LLM calls, ~18K tokens, O(n²) context growth)

Code-Mode:
  LLM → writes TypeScript → sandbox executes all tools → answer
  (1 LLM call, ~700 tokens, O(1) constant)
```

The LLM sees one tool: `execute_code_chain`. It writes a TypeScript block that calls all available tools directly inside an `isolated-vm` sandbox. Tool results flow back as return values, not as LLM context.
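
The O(n²) vs O(1) claim comes from context accumulation: in the traditional loop, every tool result is appended to the conversation and re-read on the next LLM call, so total tokens processed grow quadratically with the number of tools. A toy model (the token counts here are illustrative, not measured):

```typescript
// Toy model of context growth. Assumptions (illustrative only):
// each tool result adds ~resultTokens to the conversation, and every
// LLM call re-reads the entire context accumulated so far.
const resultTokens = 300; // illustrative size of one tool result
const baseTokens = 500;   // illustrative system prompt + task description

function traditionalTokens(nTools: number): number {
  // One LLM call per tool, plus a final answer call; call i re-reads
  // the base prompt plus all i previously appended tool results.
  let total = 0;
  for (let call = 0; call <= nTools; call++) {
    total += baseTokens + call * resultTokens;
  }
  return total; // grows ~O(n²) in nTools
}

function codeModeTokens(): number {
  // One LLM call emits a code chain; tool results stay in the sandbox.
  return baseTokens; // ~O(1), independent of tool count
}

console.log(traditionalTokens(5), codeModeTokens()); // 7500 500
```

Doubling the number of tools more than doubles the traditional total, while the code-mode cost stays flat.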

### With MCP Tools

```typescript
// LLM writes this TypeScript, sandbox executes it:
const files = fs.filesystem_list_directory({ path: "/data" });
const content = fs.filesystem_read_file({ path: files[0] });
return { files: files.length, firstFile: content };
```

### With Sibling Tools (auto-registered)

```typescript
// Calculator Tool connected as sibling — zero config needed:
const sum = sibling.calculator({ a: 100, b: 200 });
return { result: sum }; // { result: 300 }
```

## Published Packages

| Package | Version | What |
|---|---|---|
| `n8n-nodes-utcp-codemode` | 2.1.0 | n8n community node |
| `code-mode-tools` | 0.2.0 | Standalone MCP server |

Built on top of @utcp/code-mode (upstream library by UTCP).

## Create a New Workflow

```shell
./scripts/new-workflow.sh agents/06-slack-triage "Slack Message Triage"
```

This scaffolds a complete workflow directory from `template/` with a README, a `workflow.ts` skeleton, a `test.json` stub, and all sections pre-filled. See `docs/TEMPLATE.md` for the workflow template requirements.

Workflow lifecycle: Develop in this monorepo → prove with benchmarks → distribute via n8n community templates and blog posts.

## Deep Dives

| Document | What's Inside |
|---|---|
| `docs/TEMPLATE.md` | Workflow template requirements and lifecycle |
| Playbook: Lifecycle | Portable framing of the code-first n8n story |
| Playbook: Benchmarks | Token savings data, methodology, cost projections |
| Playbook: Architecture | How @code-mode/core, n8n node, and MCP server fit together |

## Contributing

```shell
# Build n8n community node (monorepo)
cd n8n-nodes-utcp-codemode && npm run build && npm test

# Scaffold a new workflow
./scripts/new-workflow.sh agents/06-slack-triage "Slack Message Triage"

# Check for secrets before committing
npm run check-secrets

# Check n8n execution results
npm run check-exec -- <workflowId>
```

AI agents: see AGENTS.md for the n8nac workflow protocol.

## LLM Compatibility

| Provider | Status |
|---|---|
| Claude (Anthropic) | Works — reliable code generation + tool calling |
| GPT-4o (OpenAI) | Works — excellent TypeScript generation |
| Gemini 2.0 Flash (Google) | Partial — code generation works, MCP tool calling broken |

## License

MIT


Built by @mj-deving
