Teach AI assistants your codebase conventions — automatically.
One command. Your conventions. Every AI tool on your team.
Every time an AI assistant touches your code, it guesses at your conventions. Wrong quote style. Wrong test patterns. Wrong import order. You fix it, it forgets, you fix it again.
skillgen reads your actual codebase and generates convention files that Claude Code, Cursor, and other AI tools understand natively. No hand-writing rules. No guessing. Every line is backed by evidence from your code.
The output format was redesigned for maximum AI effectiveness, informed by expert panel review:
- Imperative rules — "Use snake_case for functions" instead of "87% use snake_case (13/15 files)"
- ALWAYS/NEVER/PREFERRED tiers — prioritized summary at top of every output file
- Anti-patterns — "Do NOT use camelCase" derived automatically from minority conventions
- Code snippets — one idiomatic example per category (naming, testing, imports, error handling, docs, logging)
- Language-aware snippets — generates Python, TypeScript, Go, Rust, Java, or C++ examples matching your stack
```
/plugin marketplace add mmoselhy/skillgen
/plugin install skillgen@skillgen-marketplace
```
That's it. Now you have two slash commands in every project:
- `/skillgen:skillgen` — analyze the codebase, generate `.claude/skills/*.md`
- `/skillgen:skillgen enrich` — find community skills for your stack
No Python or pip required. The plugin runs entirely inside Claude Code — Claude reads your code and generates skill files directly. For even better results, also install the CLI (`pip install skillgen-ai`) to enable hybrid mode (CLI stats + Claude semantics).
```
pip install skillgen-ai
skillgen .
```

Generates `.claude/skills/`, `.cursor/rules/`, and `AGENTS.md`. Deterministic output, works offline, runs in CI.
Optional extras:

```
pip install "skillgen-ai[tree-sitter]"  # AST-based analysis (more accurate)
pip install "skillgen-ai[llm]"          # LLM-enhanced output (requires API key)
```

Requires Python 3.11+.
skillgen scans your code and generates 8 convention categories:
| Category | What It Captures |
|---|---|
| Naming | snake_case vs camelCase, class suffixes, file naming patterns |
| Error Handling | Exception hierarchy, try/except patterns, error propagation style |
| Testing | Framework, fixtures, assertion style, mocking patterns |
| Imports | Absolute vs relative, grouping order, key dependencies |
| Documentation | Docstring format, coverage %, type annotation usage |
| Architecture | Directory layout, layering, where new code goes |
| Code Style | Line length, quotes, trailing commas, formatter config |
| Logging | Library, logger init pattern, structured vs unstructured |
Every convention is stated as an imperative rule ("Use snake_case for all functions") with real examples and code snippets from your codebase. No percentages, no generic advice — just clear instructions AI tools actually follow.
```
$ skillgen .
```

````markdown
## ALWAYS
- Use snake_case for Functions
- Use PascalCase for Classes/types
- Use try/except with specific exception types
- Use absolute imports

## NEVER
- Do NOT use camelCase for function names

## Category Details

### Naming Conventions
- **Use snake_case for Functions**
  - Examples: `analyze_project`, `detect_project`, `validate_input`
- **Use PascalCase for Classes/types**
  - Examples: `Language`, `PatternCategory`, `OutputFormat`

### Example
```python
class Language:
    """A Language instance."""

def analyze_project(data: dict) -> None:
    """Process data."""
    ...
```
````

```
Done! 10 file(s) generated.
```
## CLI vs Plugin
| | CLI | `/skillgen` Plugin |
|---|---|---|
| **Install** | `pip install skillgen-ai` | `/plugin install skillgen@skillgen-marketplace` |
| **How it works** | Regex + tree-sitter AST | Claude reads your code |
| **Output style** | Imperative rules + code snippets | Semantic ("verb_noun pattern: `get_user`") |
| **Formats** | `.claude/` + `.cursor/` + `AGENTS.md` | `.claude/` only |
| **Speed** | < 1 second | 5-15 seconds |
| **Deterministic** | Yes | No (richer, but varies) |
| **Context cost** | Zero | ~1,700 lines (hybrid) / ~10K (standalone) |
| **Best for** | Teams, CI, multi-format | Individual devs, quick setup |
**Hybrid mode**: When both are installed, `/skillgen` uses CLI stats as the backbone and Claude adds semantic enrichment — best of both worlds.
## Output Formats
**Claude Code** `.claude/skills/project-conventions/SKILL.md` — single combined file with ALWAYS/NEVER/PREFERRED tiers and per-category details:
```markdown
<!-- Generated by skillgen v0.4.0. Regenerate with: skillgen . -->
## ALWAYS
- Use snake_case for Functions
- Use PascalCase for Classes/types
## NEVER
- Do NOT use camelCase for function names
## Category Details
### Code Style
- **Use double quotes**
### Formatters & Linters
- **ruff** -- line-length: 100, select: E, F, W, I, N, UP, B, SIM, RUF
- **mypy** -- python_version: 3.11, strict: true
```

**Cursor** `.cursor/rules/*.mdc` — same conventions, Cursor-native frontmatter.

**AGENTS.md** — single Markdown file at repo root. Uses `<!-- skillgen:start -->` / `<!-- skillgen:end -->` markers so your handwritten sections are preserved.
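The managed region looks roughly like this (file contents illustrative):

```markdown
# AGENTS.md

Hand-written notes outside the markers survive regeneration.

<!-- skillgen:start -->
## ALWAYS
- Use snake_case for Functions
<!-- skillgen:end -->
```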
After generating local conventions, pull in community-curated skills for your stack:
```
$ skillgen . --enrich
Found 3 community skills for Python + FastAPI:

  #  Skill                   Description
  1  FastAPI Conventions     Router patterns, DI, HTTPException
  2  Pytest Best Practices   Fixtures, parametrize, conftest
  3  SQLAlchemy Patterns     Session management, eager loading

Skipped (already covered locally): naming, code-style, imports

$ skillgen . --enrich --apply --pick 1,2
```

207 community skills across Python, TypeScript, JavaScript, Go, Rust, and Java — sourced from Anthropic, GitHub Copilot, and awesome-cursorrules, with trust tiers (official, community, contributed).
| Language | Frameworks Auto-Detected |
|---|---|
| Python | Django, FastAPI, Flask |
| TypeScript | Next.js, React, Angular, Vue |
| JavaScript | Express, React, Vue |
| Go | Gin, Cobra |
| Rust | Actix, Tokio |
| Java | Spring |
| C++ | — |
skillgen reads your tool configs (ruff, prettier, eslint, mypy, golangci-lint) and embeds the actual settings in generated skills.
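For example, a `pyproject.toml` section like this (values illustrative) surfaces directly in the generated Code Style rules:

```toml
[tool.ruff]
line-length = 100
select = ["E", "F", "I"]
```

The generated skill then states the limit as an imperative rule rather than asking the AI tool to infer it from the config.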
```
path ──> DETECT ──> ANALYZE ──> SYNTHESIZE ──> GENERATE ──> WRITE
            │           │            │             │           │
        languages    patterns    conventions     skills      files
        frameworks   evidence    prevalence    confidence   .claude/
                     per-file   config values    meters     .cursor/
```
- **Detect** — scan the file tree, read manifests, identify languages + frameworks
- **Analyze** — sample up to 50 files per language, extract patterns across 8 categories
- **Synthesize** — deduplicate, compute prevalence, parse config files
- **Generate** — render imperative rules, ALWAYS/NEVER tiers, code snippets, anti-patterns
- **Write** — atomic writes, orphan cleanup, `--dry-run` support
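The first four stages can be sketched in miniature. This is a hypothetical illustration of the flow — the function names and heuristics are not skillgen's real internals:

```python
# Toy sketch of the detect → analyze → synthesize → generate flow
# (Write omitted). Heuristics here are deliberately simplistic.
from collections import Counter

def detect(files):
    """Detect: map file extensions to languages."""
    ext_lang = {".py": "python", ".ts": "typescript", ".go": "go"}
    return sorted({lang for f in files
                   for ext, lang in ext_lang.items() if f.endswith(ext)})

def analyze(files):
    """Analyze: tally naming patterns across sampled file stems."""
    stems = (f.rsplit("/", 1)[-1].split(".", 1)[0] for f in files)
    return Counter("snake" if "_" in s or s.islower() else "camel"
                   for s in stems)

def synthesize(patterns):
    """Synthesize: the majority pattern becomes the convention."""
    style, count = patterns.most_common(1)[0]
    return {"naming": style, "prevalence": count / sum(patterns.values())}

def generate(convention):
    """Generate: render the convention as an imperative ALWAYS rule."""
    rule = {"snake": "Use snake_case",
            "camel": "Use camelCase"}[convention["naming"]]
    return f"## ALWAYS\n- {rule} for functions"

files = ["src/user_service.py", "src/auth_token.py", "src/parseJwt.py"]
print(detect(files))                          # detected languages
print(generate(synthesize(analyze(files))))   # rendered rule
```

The real pipeline adds evidence collection, config parsing, and confidence scoring, but the data flow is the same shape.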
```
skillgen <path> [flags]

Output:
  --format, -f <claude|cursor|all>   Target format (default: all)
  --diff                             Show what AI learns vs blank slate
  --dry-run                          Preview without writing files
  --json                             Export analysis as JSON
  --verbose, -v                      Detailed analysis output
  --quiet, -q                        Errors only

Analysis:
  --no-tree-sitter                   Disable AST parsing, use regex
  --llm                              Enhance with Claude/GPT-4o
  --llm-provider <anthropic|openai>  Choose LLM provider

Community:
  --enrich                           Find matching community skills
  --enrich --apply                   Download and install them
  --enrich --apply --pick 1,3        Cherry-pick by number
  --trust <official|community|all>   Filter by trust tier
  --no-cache                         Force re-fetch of skill index
```
**Should I commit the generated files?** Yes. Commit `.claude/skills/`, `.cursor/rules/`, and `AGENTS.md` so every team member and CI job uses the same conventions. Regenerate with `skillgen .` whenever your codebase conventions evolve.
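In CI, a drift check can fail the build when the committed convention files fall behind the code. A minimal sketch (hypothetical GitHub Actions step; job scaffolding omitted — adapt to your pipeline):

```yaml
- name: Verify convention files are current
  run: |
    pip install skillgen-ai
    skillgen . --quiet
    git diff --exit-code .claude/skills .cursor/rules AGENTS.md
```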
See `docs/CONTRIBUTING.md` for development setup, testing, and PR guidelines.