LLM Behavioral Enforcement Framework
7 mechanically enforced rules that prevent the most common failure modes when LLMs have access to real infrastructure: context loss, silent failures, file damage, goal drift, and overreach.
Built by Arthur Palyan at Palyan AI. 11 tools. Battle-tested on a 12-member AI family running 22 processes 24/7 on a single VPS. 55+ violations logged, 0 bypassed.
When you give an LLM access to your file system, bash, and production infrastructure, it will eventually:
- Edit a file it shouldn't touch
- Lose context between sessions and start over
- Drift from the original objective during long tasks
- Fail silently when a session times out
- Make logic changes without asking
- Disappear into debug loops
The Nervous System solves all of these with rules enforced by external mechanisms the LLM cannot override.
Add to your claude_desktop_config.json:

```json
{
  "mcpServers": {
    "nervous-system": {
      "command": "npx",
      "args": ["-y", "mcp-nervous-system"]
    }
  }
}
```

Or register it with the Claude CLI:

```
claude mcp add nervous-system npx mcp-nervous-system
```

Or run the server standalone:

```
npx mcp-nervous-system
```

The server starts on port 3475 with SSE, HTTP, and health endpoints.
The server is live and ready to use:

- URL: https://api.100levelup.com/mcp-ns/
- Protocol: MCP 2024-11-05 (Streamable HTTP + SSE)
- Authentication: none required
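Because the hosted endpoint speaks plain Streamable HTTP, a raw handshake can be sketched with curl. This is an illustration of an MCP 2024-11-05 initialize call, not a documented example from the project; the exact response shape is up to the server.

```shell
# An MCP "initialize" request as raw JSON-RPC 2.0 (protocol 2024-11-05).
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"curl-client","version":"0.0.1"}}}'

# POST it to the hosted endpoint; no authentication is required.
curl -sS "https://api.100levelup.com/mcp-ns/" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d "$PAYLOAD" || echo "request failed (no network?)"
```

Any MCP-capable client can perform the same handshake for you; the curl form is only useful for poking at the transport directly.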
| # | Rule | What It Prevents |
|---|---|---|
| 1 | Dispatch Don't Do | Debug loops, rabbit holes. Tasks > 2 messages get dispatched. |
| 2 | Untouchable | File damage. Protected files mechanically blocked from editing. |
| 3 | Write Progress | Silent failures. Progress noted before each action. |
| 4 | Step Back Every 4 | Goal drift. Forced reflection every 4 messages. |
| 5 | Delegate and Return | Invisible work. Background tasks reported immediately. |
| 6 | Ask Before Touching | Unauthorized changes. Logic changes need human approval. |
| 7 | Hand Off | Context loss. Written handoffs every 3-4 exchanges. |
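Each rule is backed by a mechanism outside the model. As one concrete illustration, Rule 4's forced reflection could be driven by an external message counter. The sketch below is hypothetical (the function name, counter file, and wording are invented here), but it shows the shape of the idea: the tooling, not the LLM, decides when a step-back happens.

```shell
# Hypothetical external counter for Rule 4 ("Step Back Every 4"):
# a wrapper increments a counter file on every exchange and demands
# a reflection note on every 4th message.
COUNTER_FILE="${COUNTER_FILE:-.msg_count}"

note_exchange() {
  n=$(cat "$COUNTER_FILE" 2>/dev/null || echo 0)
  n=$((n + 1))
  echo "$n" > "$COUNTER_FILE"
  if [ $((n % 4)) -eq 0 ]; then
    echo "STEP BACK: reflect on goal before continuing (message $n)"
  else
    echo "message $n"
  fi
}

# Demo: five exchanges, with the forced step-back landing on the 4th.
i=1
while [ $i -le 5 ]; do note_exchange; i=$((i + 1)); done
rm -f "$COUNTER_FILE"
```

The point is that the reflection trigger lives in a file and a wrapper script, so skipping it is not an option the model gets to exercise.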
| Tool | Description |
|---|---|
| get_framework | Complete framework: all rules, permission protocol, enforcement patterns |
| guardrail_rules | The 7 core rules with triggers, enforcement, and failure modes |
| preflight_check | File protection system: shell script blocks edits to protected files |
| session_handoff | Context preservation: templates for handoff documents |
| worklog | Progress documentation pattern |
| violation_logging | Audit trail: timestamp, type, and context for every violation |
| step_back_check | Forced reflection system |
| get_nervous_system_info | System overview and operational stats |
| emergency_kill_switch | Emergency shutdown of all PM2 processes; requires the kill switch secret; logs to the tamper-evident audit trail |
| verify_audit_chain | Walks the SHA-256 hash-chained audit log and verifies every entry; returns chain integrity status |
| dispatch_to_llm | Spawns a background LLM agent to handle a task; checks RAM and enforces a max of 2 concurrent dispatches |
The emergency_kill_switch tool provides an emergency shutdown capability. Send a POST request to /kill with the kill switch secret to immediately stop all PM2 processes. Every activation is logged to the tamper-evident audit trail with SHA-256 hash chaining, so kill switch events cannot be hidden or altered after the fact.
- Requires authentication (kill switch secret)
- Logs to hash-chained audit trail
- Returns confirmation with affected process count
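As a hedged sketch, an activation might look like the following. The /kill route is documented above, but how the secret is carried is not, so the JSON body field named "secret" is an assumption here, as is hitting a local server on its default port.

```shell
# Assumptions: a local server on its default port 3475, and the secret
# carried as a JSON body field named "secret" (hypothetical field name).
KILL_SECRET="change-me"   # the real kill switch secret, kept out of source control

curl -sS -X POST "http://localhost:3475/kill" \
  -H "Content-Type: application/json" \
  -d "{\"secret\": \"$KILL_SECRET\"}" || echo "request failed (is the server running?)"
```

Because every activation lands in the hash-chained audit trail, a successful call is permanently visible, which is the property you want from a kill switch.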
Every guardrail violation, kill switch activation, and dispatch event is recorded in a SHA-256 hash-chained audit log. Each entry includes the hash of the previous entry, so any alteration or deletion of a past record breaks the chain and is immediately detectable.
- Use verify_audit_chain to walk the entire chain and verify integrity
- Returns: valid/invalid status, entry count, and the break point if tampered
- 55+ violations logged, 0 bypassed, 0 chain breaks
The dispatch_to_llm tool enables a brain + agents architecture. Instead of one LLM session doing everything, complex tasks get dispatched to background agents that run independently under the same 7 rules.
- Checks available RAM (requires 500MB+)
- Enforces max 2 concurrent dispatches
- Returns PID and log file path for monitoring
- Every dispatched agent runs under the same nervous system guardrails
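The preconditions are simple enough to sketch. The helper below mirrors the documented thresholds (500MB free RAM, max 2 concurrent dispatches) but is only an illustration: the real checks live inside the dispatch_to_llm tool, and the function name here is hypothetical.

```shell
# Hypothetical precondition check mirroring the documented dispatch rules.
# Arguments: available RAM in MB, number of agents already running.
can_dispatch() {
  avail_mb="$1"; running="$2"
  if [ "$avail_mb" -lt 500 ]; then
    echo "refused: only ${avail_mb}MB free (need 500MB+)"
    return 1
  fi
  if [ "$running" -ge 2 ]; then
    echo "refused: $running dispatches already running (max 2)"
    return 1
  fi
  echo "dispatch allowed"
}

# On Linux, real inputs could come from /proc/meminfo, e.g.:
#   awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo
can_dispatch 300 0   # refused: only 300MB free (need 500MB+)
can_dispatch 900 2   # refused: 2 dispatches already running (max 2)
can_dispatch 812 1   # dispatch allowed
```

Refusing up front, before spawning anything, is what keeps a single VPS stable under 22 always-on processes.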
The Nervous System provides practical compliance tools for the EU AI Act. See the full compliance page at:
https://api.100levelup.com/family/eu-ai-act.html
- nervous-system://framework - The complete framework
- nervous-system://quick-start - Quick start guide
- nervous-system://rules - The 7 core rules
- nervous-system://templates - Templates for handoffs, worklogs, preflight
From the live Palyan AI deployment (Feb 28 - Mar 5, 2026):
- 55+ violations caught
- 29 edits blocked by preflight
- 13 unique files protected
- 0 rules bypassed
- 22 processes monitored
- 7 days continuous operation
Try it yourself (no login required):
- Interactive Demo - Talk to a governed LLM and try to break the rules
- Audit Dashboard - See real violation history with timeline
- System Status - Live health checks
- API Documentation - Full tool and resource reference
- Case Study - Production deployment data
- Plain English Rules - For non-technical stakeholders
- Incident Response - Detection, containment, resolution
- EU AI Act Compliance - Practical EU AI Act compliance tools
"LLMs can't reliably self-enforce promises. Guardrails work via preflight.sh, violation logs, and catching drift. Build enforcement systems, don't make promises."
If a guardrail can be violated by the thing it guards, it is not a guardrail. It is a suggestion.
Every rule in the Nervous System is enforced by an external mechanism: a shell script, a timer, a separate monitoring process. The LLM cannot override, circumvent, or ignore them.
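The file-protection rule is a good example of what "external mechanism" means in practice. The sketch below captures the idea behind a preflight script: the edit tooling runs it before every write, and a nonzero exit blocks the edit. The file names and list format are assumptions; the real preflight.sh is not reproduced here.

```shell
# Hypothetical preflight check: refuse writes to any path listed in a
# protected-files list. Run before every edit; nonzero exit = blocked.
PROTECTED_LIST="${PROTECTED_LIST:-protected_files.txt}"   # assumed list location

preflight() {
  target="$1"
  if grep -qxF "$target" "$PROTECTED_LIST" 2>/dev/null; then
    echo "BLOCKED: $target is protected" >&2
    return 1
  fi
  echo "OK: $target may be edited"
}

# Demo with a temporary list:
PROTECTED_LIST=$(mktemp)
printf '%s\n' "config/prod.env" "ecosystem.config.js" > "$PROTECTED_LIST"
preflight "src/new-feature.js"     # OK: src/new-feature.js may be edited
preflight "config/prod.env"        # BLOCKED: config/prod.env is protected (exit 1)
rm -f "$PROTECTED_LIST"
```

Because the check runs outside the model and gates the actual write call, the LLM's opinion about whether the edit is safe never enters into it.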
License: MIT