AI skills that find the design decisions you didn't make.
mindit is an open-source skills pack for AI design tools. It walks any design through eight forces every decision has to balance and produces a scored, structured artifact for each one.
Layers gives you a thinking framework. mindit gives you eight numbers, a CLI, and an MCP server.
```sh
npm install -g usemindit
usemindit add claude-code
```

Works with Claude Code, Claude Desktop (via MCP), Cursor, Codex, Cline, and any tool that reads SKILL.md.
- CLI: `usemindit add`, `init`, `list`, `lint`, `mcp`
- MCP server: `usemindit mcp` exposes all eleven skills as MCP tools over stdio
- Decision-graph linter: `usemindit lint` validates artifacts in `.mindit/decisions/` against the JSON schema, surfaces broken dependencies, and catches orphan files
- Multi-tool installer: one command installs into Claude Code, Cursor, Codex, or Cline
Every design decision balances eight competing pressures. mindit names them, scores them, and tells you which one is the weakest in any design.
| Force | The question it asks |
|---|---|
| Clarity | Can a stranger read the intent in five seconds? |
| Continuity | Does this fit the system that already exists? |
| Constraint | What hard limits is this honoring? |
| Consequence | What does this foreclose? What does it open? |
| Context | Who, when, where, on what device, in what state? |
| Cost | Build time, complexity, support burden, debt |
| Confidence | What evidence stands behind this? |
| Correctability | If we're wrong, how reversible is it? |
A finished design has a force profile. Lopsided profiles fail. A design at 90 Clarity and 20 Correctability hasn't been thought through; it's been polished.
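The bottleneck in a force profile is simply the lowest-scoring force. A minimal Node sketch with an illustrative profile (the object shape here is for demonstration, not mindit's artifact format):

```javascript
// Find the weakest force in a hypothetical force profile.
const profile = {
  clarity: 90, continuity: 75, constraint: 80, consequence: 60,
  context: 70, cost: 65, confidence: 55, correctability: 20,
};

// Sort forces ascending by score; the first entry is the bottleneck.
const [bottleneck] = Object.entries(profile).sort((a, b) => a[1] - b[1]);
console.log(bottleneck); // [ 'correctability', 20 ]
```

A lopsided profile like this one is exactly what `/mindit-audit` is meant to surface.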
Eight force skills, three meta skills:
| Skill | What it produces |
|---|---|
| /mindit-clarity | Hierarchy, labeling, affordance, density, distinguishability score |
| /mindit-continuity | Convention, component, naming, state, visual alignment score |
| /mindit-constraint | Accessibility, technical, legal, brand, platform, time-budget score |
| /mindit-consequence | Reversibility, foreclosure, data, lock-in, second-order score |
| /mindit-context | Audience, device, state, journey, environment score |
| /mindit-cost | Build, support, maintenance, debt, TCO score |
| /mindit-confidence | Evidence type, freshness, breadth, mechanism, triangulation score |
| /mindit-correctability | User undo, system rollback, detection, blast radius, gating score |
| /mindit-audit | All eight forces. Bottleneck identified. Top three findings. |
| /mindit-premortem | Imagine it failed. Surface the failure modes. Rank by severity. |
| /mindit-trace | Walk the decision graph backward and forward through .mindit/decisions/. |
```
$ usemindit --help

mindit — AI skills that find the design decisions you didn't make

Usage
  usemindit <command> [options]

Commands
  init         Create a .mindit/decisions directory in the current project
  add <tool>   Install skills into an AI tool (claude-code, cursor, codex, cline, claude-desktop)
  list         List the eleven skills and their descriptions
  lint [path]  Validate decision artifacts in .mindit/decisions/
  mcp          Start the mindit MCP server on stdio
```

`usemindit init` creates `.mindit/decisions/` in your project with a README. This is where skill artifacts get written. Commit the directory; it's your decision graph.
Installs the eleven skills into wherever your AI tool reads them.
```sh
usemindit add claude-code            # project-local: ./.claude/skills/
usemindit add claude-code --global   # user-global: ~/.claude/skills/
usemindit add cursor                 # writes to ./.cursor/rules/
usemindit add codex                  # writes to ./.codex/skills/
usemindit add cline                  # writes to ./.clinerules/
usemindit add claude-desktop         # prints the MCP config snippet
usemindit add --path ~/Downloads/x   # copy to any directory
```

Flags: `--global`, `--dry-run`, `--force`, `--path <dir>`.
Prints the eleven skills with a short description of each. Use `--verbose` for full descriptions.
Validates `.mindit/decisions/` against the schema. Checks performed:

- Every `.json` parses
- Every `.json` validates against `decision.schema.json`
- Every `.json` has a `.md` sibling (and vice versa)
- Every `id` field matches its filename
- Every `depends_on` reference resolves to an existing artifact
Exit codes: 0 clean · 1 errors · 2 warnings only.
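Two of these checks are easy to picture in code. A simplified Node sketch over in-memory artifacts (the real `usemindit lint` reads the files and also runs full schema validation; the artifact objects here are illustrative):

```javascript
// Simplified versions of two lint checks, run against in-memory artifacts
// instead of .mindit/decisions/. Real lint also validates each file
// against decision.schema.json and checks for .md siblings.
const artifacts = [
  { file: 'clarity-001.json', id: 'clarity-001', depends_on: [] },
  { file: 'audit-002.json', id: 'audit-002', depends_on: ['clarity-001', 'cost-009'] },
];

const ids = new Set(artifacts.map((a) => a.id));
const errors = [];
for (const a of artifacts) {
  // Check: the `id` field must match the filename.
  if (a.file !== `${a.id}.json`) errors.push(`${a.file}: id mismatch`);
  // Check: every depends_on reference must resolve to an existing artifact.
  for (const dep of a.depends_on) {
    if (!ids.has(dep)) errors.push(`${a.file}: unresolved dependency ${dep}`);
  }
}
console.log(errors); // [ 'audit-002.json: unresolved dependency cost-009' ]
```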
Starts the MCP server on stdio. Twelve tools available: the eleven skills as mindit_clarity, mindit_continuity, etc., plus mindit_schema which returns the JSON Schema.
For Claude Desktop, add this to claude_desktop_config.json:
```json
{
  "mcpServers": {
    "mindit": {
      "command": "npx",
      "args": ["usemindit", "mcp"]
    }
  }
}
```

Every skill writes two files to `.mindit/decisions/`:
- A markdown artifact for humans to read.
- A JSON artifact that conforms to `schemas/decision.schema.json` — queryable by tools, diff-able in git, traceable through the graph.
Example output from /mindit-clarity on a pricing page:
```json
{
  "id": "clarity-20260511-pricing-hero",
  "skill": "mindit-clarity",
  "subject": { "name": "Pricing page hero", "type": "screen", "location": "example.com/pricing" },
  "force": "clarity",
  "score": 51,
  "criteria": [
    { "name": "Hierarchy", "score": 32, "weight": 0.25 },
    { "name": "Labeling", "score": 71, "weight": 0.20 },
    { "name": "Affordance", "score": 60, "weight": 0.20 },
    { "name": "Density", "score": 80, "weight": 0.15 },
    { "name": "Distinguishability", "score": 55, "weight": 0.10 },
    { "name": "First-glance test", "score": 25, "weight": 0.10 }
  ],
  "findings": [
    {
      "severity": "high",
      "anti_pattern": "buried-primary-action",
      "summary": "Pricing tiers sit below the fold; hero is a stock photo.",
      "fix": "Move the three-tier comparison above the fold."
    }
  ],
  "open_questions": [
    "What audience segment is this hero designed for — first-time visitors or returning evaluators?"
  ],
  "created_at": "2026-05-11T10:14:00Z",
  "version": "2.0.0"
}
```

```sh
# Global install (recommended for the CLI)
npm install -g usemindit
usemindit add claude-code

# Or local install
npm install usemindit
npx usemindit add claude-code
```

Then in any AI tool that supports the skills convention:

```
/mindit-clarity Review the hero on https://example.com/pricing
/mindit-audit Review my checkout flow. Audience is repeat customers on mobile.
/mindit-trace Why are we using three tiers instead of four?
```
Each force ships with a registry of named failure modes — anti-patterns.yaml in the force's directory. 64 named patterns total, each with a symptom, an example, a fix, and a severity. When a skill matches a pattern, the finding includes the slug. You can grep your decision graph for any pattern to see how often it appears.
A sample: buried-primary-action (Clarity), one-off-component (Continuity), accessibility-afterthought (Constraint), one-way-door (Consequence), desktop-prototype (Context), snowflake-feature (Cost), n-of-one (Confidence), big-bang-launch (Correctability).
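The tally can also be done programmatically in a few lines of Node. A sketch over in-memory findings with illustrative slugs (a real script would first read every `.json` under `.mindit/decisions/`):

```javascript
// Count anti-pattern slugs across decision artifacts (illustrative data).
const decisions = [
  { findings: [{ anti_pattern: 'buried-primary-action' }] },
  { findings: [{ anti_pattern: 'buried-primary-action' },
               { anti_pattern: 'one-way-door' }] },
];

const counts = {};
for (const d of decisions) {
  for (const f of d.findings) {
    counts[f.anti_pattern] = (counts[f.anti_pattern] ?? 0) + 1;
  }
}
console.log(counts); // { 'buried-primary-action': 2, 'one-way-door': 1 }
```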
```
usemindit/
├── README.md, DISCLAIMER.md, LICENSE
├── package.json
├── bin/
│   └── usemindit.js              CLI entry point
├── src/
│   ├── cli/                      init, add, list, lint, help
│   ├── mcp/server.js             MCP server (stdio)
│   └── lib/                      skills loader, targets registry, colors
├── schemas/
│   └── decision.schema.json
├── skills/
│   ├── mindit-clarity/           SKILL.md + anti-patterns.yaml
│   ├── mindit-continuity/        SKILL.md + anti-patterns.yaml
│   ├── mindit-constraint/        SKILL.md + anti-patterns.yaml
│   ├── mindit-consequence/       SKILL.md + anti-patterns.yaml
│   ├── mindit-context/           SKILL.md + anti-patterns.yaml
│   ├── mindit-cost/              SKILL.md + anti-patterns.yaml
│   ├── mindit-confidence/        SKILL.md + anti-patterns.yaml
│   ├── mindit-correctability/    SKILL.md + anti-patterns.yaml
│   ├── mindit-audit/             SKILL.md
│   ├── mindit-premortem/         SKILL.md
│   └── mindit-trace/             SKILL.md
├── test/
│   ├── schema-validation.test.js
│   ├── anti-pattern-fixtures.test.js
│   ├── cli.test.js
│   ├── lint.test.js
│   └── mcp.test.js
└── docs/
    └── index.html                landing page
```
```sh
npm install
npm test
```

44 tests across schema validation, anti-pattern fixtures, CLI integration, lint behavior, and MCP server roundtrips.
Most design-thinking frameworks give you a stack of layers, a set of questions, or a vocabulary. They are valuable, and they are also stateless: you run them, get a markdown file, and lose the context.
mindit makes four concrete additions:
- Scoring. Every force has a 0–100 score per artifact. Not for ranking designs, but for surfacing which force is weakest in a given design.
- Anti-pattern registry. Named failure modes per force. Findings carry slugs. Greppable.
- Decision graph. Artifacts reference each other via `depends_on`; `mindit-trace` walks the graph backward and forward.
- Tooling. A CLI for installation and validation. An MCP server. A schema-based linter. Tests on every commit.
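Backward traversal of the decision graph is a reachability walk over `depends_on` edges. A minimal Node sketch with a hypothetical three-node graph (forward traversal works the same way after inverting the edges):

```javascript
// Walk depends_on edges backward: everything a decision rests on.
// The graph here is hypothetical, keyed by artifact id.
const graph = {
  'pricing-tiers': ['audience-research'],
  'audience-research': [],
  'checkout-flow': ['pricing-tiers'],
};

function ancestors(id, seen = new Set()) {
  for (const dep of graph[id] ?? []) {
    if (!seen.has(dep)) {
      seen.add(dep);
      ancestors(dep, seen); // recurse into transitive dependencies
    }
  }
  return [...seen];
}

console.log(ancestors('checkout-flow')); // [ 'pricing-tiers', 'audience-research' ]
```

The `seen` set doubles as cycle protection, so a malformed graph with circular `depends_on` references still terminates.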
The skill files are plain markdown. The output schema is plain JSON. You can use mindit without buying into any platform.
MIT. See LICENSE.
Fork it. Adapt it. Use it inside your team. Build your own forces on top of it. No attribution required.
mindit is one lens, not the only lens. Scores are heuristics, not verdicts. A high score does not mean a design is good. A low score does not mean a design is bad. Always have a human read the output before acting on it. See DISCLAIMER.md.
Built by Dragoon0x.