
MCP tool for automatically/deterministically maintaining your codebase like a forensics lab run by an ML data engineer while you sip coffee. Just make this the only option your agent has to make changes to the codebase/FS, and a deterministic pipeline (with a simple HitL step) takes care of the rest. A simple tool call (2 fields) is all it takes.


domin8

High-level goal: Provide a deterministic, auditable tool that constrains AI co-developing agents and requires them to record meaningful, auditable intent and justification before making destructive or intrusive changes to a codebase.


🚀 Quick Start

Installation

git clone <repository-url>
cd domin8
uv sync
domin8 --version

Using with Continue (VS Code)

domin8 integrates seamlessly with the Continue AI assistant:

Run the MCP server locally using the repository entrypoint:

# StdIO mode (default)
uv run python main.py

# Or start a TCP server bound to localhost only
uv run python main.py --tcp-port 5689

# Configure Continue to use the domin8 MCP server
cp .continue/mcp-config.json ~/.continue/config.json
# Edit the file to set your domin8 path

Approvals

All changes require your approval before execution — approvals are performed interactively via Continue (MCP elicitation) or via chat-based workflows with the agent. There is no separate web UI or artifact sidebar for approvals in this distribution; approvals and any optional post-decline feedback are collected only in the elicitation/chat flow.

📖 Full Continue Integration Guide →


Interactions and approvals are handled via MCP elicitation and chat-based flows. There is no CLI or Web UI in this distribution.

Project overview 🔍

domin8 is a Model Context Protocol (MCP) server that provides highly-structured pipelines for AI co-developing agents to request, justify, and execute potentially destructive actions (create, edit, delete, rename, move files, etc.) while ensuring auditability and capturing intent data for future training and optimization.

Key design goals:

  • Force agents to provide the WHAT and WHY of what they want to do before any destructive action is performed.
  • Persist intent, context, and the decision alongside a full trace of the action for later review and training.
  • Be deterministic, meticulously organized, and transparent so that human reviewers can reproduce, review, and validate agent decisions.
  • Provide chat-based human-in-the-loop approval workflows (no CLI or Web UI).

Features ✨

🔐 Cryptographic Signatures

  • HMAC-SHA256 signatures for non-repudiation
  • Secure key management
  • Verifiable approvals
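
The HMAC-SHA256 approach can be sketched as follows. This is a minimal illustration using only the Python standard library; the function names and record fields are hypothetical, not domin8's actual API, and a real deployment would load the key from secure storage rather than hard-coding it.

```python
import hashlib
import hmac
import json

def sign_approval(record: dict, key: bytes) -> str:
    """Return an HMAC-SHA256 signature over a canonical JSON encoding."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_approval(record: dict, key: bytes, signature: str) -> bool:
    """compare_digest gives a constant-time comparison against timing attacks."""
    return hmac.compare_digest(sign_approval(record, key), signature)

key = b"example-secret-key"  # illustrative only; load from a secure key store
record = {"request_id": "abc123", "decision": "approved"}
sig = sign_approval(record, key)
```

Because the JSON encoding is canonical (sorted keys, fixed separators), any tampering with the approved record invalidates the signature, which is what makes the approval non-repudiable.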

⚡ Optimization & Resource Detection

  • Automatic CPU resource detection and process-parallel indexing
  • Optional GPU acceleration for ML-based heuristics when a suitable GPU and drivers are present
  • Repository-level tuning via config/optimization.json or config/optimization.toml (see docs/OPTIMIZATION.md)
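
A repository-level tuning file might look like the sketch below. The key names here are illustrative assumptions, not the actual schema; see docs/OPTIMIZATION.md for the authoritative options.

```json
{
  "cpu_workers": "auto",
  "gpu": { "enabled": false },
  "indexing": { "batch_size": 512 }
}
```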

🗄️ SQLite Indexing

  • Fast artifact searches
  • Automatic indexing on operations
  • No filesystem scanning needed
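
The idea behind the SQLite index can be sketched with the standard-library `sqlite3` module. The table layout below is hypothetical, but it shows why artifact lookups need no filesystem scan: an indexed query by repo path returns matching artifacts directly.

```python
import sqlite3

# In-memory sketch; the real index would live alongside the artifact store.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE artifacts (
        id TEXT PRIMARY KEY,
        repo_path TEXT NOT NULL,
        created_at TEXT NOT NULL,
        kind TEXT NOT NULL
    )"""
)
# Index on repo_path so per-file lookups avoid a full table scan.
conn.execute("CREATE INDEX idx_artifacts_path ON artifacts(repo_path)")
conn.execute(
    "INSERT INTO artifacts VALUES (?, ?, ?, ?)",
    ("uuid-1", "README.md", "2025-01-01T12:00:00", "change_request"),
)
rows = conn.execute(
    "SELECT id FROM artifacts WHERE repo_path = ?", ("README.md",)
).fetchall()
```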

🪝 Pre-commit Hooks & CI/CD

  • Validates programmatic commits
  • GitHub Actions workflow included
  • Prevents invalid artifact commits

🛠️ MCP Tool: request_change

  • request_change - Generic change request that accepts a machine-validated summary and a unified diff. This single-tool contract replaces prior specialized tools and centralizes validation, explainability, and artifact creation in a single deterministic pipeline.
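
A call to request_change might carry a payload shaped like the following. The two field names (`summary`, `diff`) and their contents are illustrative assumptions about the two-field contract, not the verified schema.

```python
# Hypothetical shape of a request_change payload; field names are illustrative.
request = {
    "summary": (
        "WHAT: rename the config loader to load_settings(). "
        "WHY: the old name collided with a similarly named helper."
    ),
    "diff": """\
--- a/app/config.py
+++ b/app/config.py
@@ -10,7 +10,7 @@
-def load():
+def load_settings():
     return parse(CONFIG_PATH)
""",
}
```

The summary captures intent (the WHAT and WHY), while the unified diff is the machine-checkable description of the change itself.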

Usage & core concepts 🔧

The conceptual workflow this project aims to provide is:

  1. Agent calls a tool provided by this MCP server to make a potentially destructive change of some kind to the codebase, localhost file system, or remote file system. These tools are the ONLY methods by which the agent can perform potentially destructive tasks (by removing all other options), and they all MUST treat the agent as an unwilling participant.
  2. Agent must provide a structured payload along with the tool call which contains information such as:
    • the precise change(s) requested;
    • the reason(s) for the requested change(s);
    • any other information that is relevant to the task at hand AND cannot be generated or obtained deterministically with code. This payload takes the form of a Pydantic-validated schema that is heavily constrained in order to force the agent's compliance with the objectives of this tool. It MUST be assumed that the agent is NOT willing to provide the information being sought from it. The schemas MUST take every measure possible to extract meaningful responses from the agent, whether the agent wants to provide that data or not. This should be thought of as a HOSTILE INTERROGATION of the agent by a meticulous, by-the-book bureaucrat.
  3. A deterministic server-side pipeline validates the payload with Pydantic and rejects failures back to the agent for immediate correction before the request can proceed.
  4. A mandatory human-in-the-loop call presents the agent's response for human approval. Again - this pipeline assumes that the agent is going to try to cheat its way out, and the human should be the final arbiter of whether or not the agent provided meaningful responses. The human should be able to provide feedback to the agent if the attempt is not approved so that the agent can try again, taking the user's feedback into consideration. Subsequent attempts will still require human approval.
  5. Once approved, the pipeline releases the agent, executes the requested action(s), and persists all documents generated throughout the process to a meticulously-organized, tamper-evident, .gitignored store in the local repo, then stages and commits the changes with a data- and context-rich commit message synthesized programmatically from the data generated and collected throughout the process.
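
The server-side validation step (step 3) can be sketched as below. The real pipeline uses Pydantic; this stand-in uses plain Python to show the same idea, and every field name and constraint here is a hypothetical illustration.

```python
# Stand-in for the Pydantic schema: field names and thresholds are illustrative.
MIN_JUSTIFICATION_WORDS = 10

def validate_request(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the payload passes."""
    errors = []
    if not payload.get("summary", "").strip():
        errors.append("summary: must not be empty")
    if len(payload.get("justification", "").split()) < MIN_JUSTIFICATION_WORDS:
        errors.append("justification: too short to be meaningful")
    if not payload.get("diff", "").startswith("--- "):
        errors.append("diff: must be a unified diff")
    return errors

errors = validate_request({"summary": "", "justification": "too short", "diff": "x"})
# errors now lists each failed constraint for the agent to correct
```

Rejections are returned to the agent with the full error list, so each retry (step 4's feedback loop) is grounded in specific, machine-generated complaints rather than vague refusals.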

All artifacts are:

  • dynamically/automatically assigned semantically-meaningful UUIDs
  • timestamped automatically using the localhost system's timezone
  • populated with any/all other relevant metadata it is possible to generate/collect programmatically
  • stored in per-file sub-directories of a repo "mirror directory" (~/.domin8/agent_data/), which reproduces the repo's directory tree except that every repo file is represented by its own sub-directory holding all data about that file. For example, ~/.domin8/agent_data/README.md/ contains information about every change ever made to README.md, in a logically-structured, meticulously-organized layout populated automatically by deterministic code. (Obviously, the mirror directory itself MUST be excluded from mirroring, to avoid infinite looping.)
  • versioned and retained for training, QA, post-hoc analysis, optimization, etc.
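
The mirror-directory mapping described above can be sketched in a few lines. The function name is hypothetical, and the layout inside each per-file directory is an assumption; only the root path (~/.domin8/agent_data/) comes from the text.

```python
from pathlib import Path

# Root of the mirror directory described above.
MIRROR_ROOT = Path.home() / ".domin8" / "agent_data"

def mirror_dir_for(repo_relative_path: str) -> Path:
    """Each repo file gets its own directory under the mirror root,
    at the same relative path as the file itself."""
    return MIRROR_ROOT / repo_relative_path
```

So artifacts for src/utils.py would land under ~/.domin8/agent_data/src/utils.py/, keeping the mirror tree navigable by the same paths as the repo.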
