Open-source repository with a governance layer and CLI for responsible AI tooling
A developer-friendly toolkit for exploring governance of AI prompts, agents, context packs, and evaluators with standardization, quality control, and visibility.
AI productivity tools are transforming how we build software, but without centralized governance, we risk inconsistency, duplication, and quality drift. Human in the Loop solves this by creating a single source of truth for all AI tooling.
The problem:
- Prompts and agents scattered across repos, Slack threads, and local files
- No standardization means teams reinvent the wheel with different patterns
- Zero visibility into what works, what's adopted, or what needs improvement
- Quality concerns with no review process for AI tools
The solution:
- Single repository where all AI productivity tools are discoverable, versioned, and governed
- Standardized contribution workflow that ensures quality and consistency
- Clear metrics on tool adoption and effectiveness
- Developer-friendly CLI that makes it easy to find, install, and use AI tools
AI tools should enhance developers, not replace them. Every tool in this repository is evaluated through our Developer-First AI Accountability Framework to ensure we're building technology that makes everyone better off - developers, teams, and organizations.
We believe AI productivity tools should:
- ✨ Enhance developer happiness and creativity, not replace judgment
- 📚 Support learning and growth, not create dependency
- 🤝 Strengthen collaboration and trust, not erode human connection
- 🔍 Maintain transparency and control, not obscure decision-making
Every prompt, agent, and workflow is designed with these principles in mind. When you use tools from this library, you're not just getting automation - you're getting carefully considered solutions that preserve what makes software development fulfilling while removing tedious friction.
Read the full framework: ACCOUNTABILITY.md
Install the CLI:

```shell
npm install -g @human-in-the-loop/cli
```

Or use npx:

```shell
npx hit --version
```
Search for tools:

```shell
hit search "code review"
```

Output:

```text
🔍 Searching for: "code review"

Found 2 tools:

1. prompt/code-review-ts
   TypeScript code review with best practices
   Version: 1.2.0
   Tags: typescript, code-review

2. prompt/code-review-security
   Security-focused code review
   Version: 2.0.0
   Tags: security, code-review

💡 Tip: Use hit install <type>/<id> to install a tool
```
Install a prompt:

```shell
hit install prompt/code-review-ts
```

The CLI prompts for an installation location (or use `--path` for non-interactive installs):

```text
📦 Installing prompt/code-review-ts...
✓ Looking up tool...
✓ Copying tool files...
✓ Registering installation...
✓ Successfully installed Code Review TypeScript (v1.2.0)
✓ Installed to: ~/.claude/tools/prompt/code-review-ts
```
Production-ready prompts organized by use case, versioned and quality-assured. Each prompt includes metadata, usage examples, and expected outputs.
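As an illustration, a prompt's metadata might look like the sketch below (field names are hypothetical, drawn from the `hit search` output above, not from the project's actual schema):

```yaml
# Hypothetical metadata sketch for a prompt entry.
# Actual field names may differ from the project's schema.
id: code-review-ts
type: prompt
version: 1.2.0
description: TypeScript code review with best practices
tags:
  - typescript
  - code-review
```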
Catalog of AI agents with their configurations, capabilities, and integration guides. (Coming soon - framework in place)
Framework-specific knowledge bases that provide agents with deep technical context:
- Angular: Component patterns, routing, state management, testing ✅
- NestJS: Module structure, dependency injection, middleware (coming soon)
- CI/CD: Pipeline patterns, deployment strategies (coming soon)
Quality assurance tools that validate AI outputs against defined criteria. (Coming soon - framework in place)
Safety mechanisms that enforce responsible AI usage. (Coming soon - framework in place)
Developer-friendly command-line interface for discovering, installing, and managing AI tools.
Contribution validation and quality assurance tooling that ensures all contributions meet project standards.
Developer-first principles and practices that guide how we build, evaluate, and deploy AI tools responsibly - ensuring they enhance developers instead of replacing them.
```text
human-in-the-loop/
├── src/
│   ├── cli/                      # TypeScript CLI tool
│   └── governance/               # Quality validation and checks
│       └── checks/               # Validation scripts
├── lib/
│   ├── prompts/                  # Shared prompt library
│   ├── agents/                   # Agent definitions and configs
│   ├── evaluators/               # Quality evaluation tools
│   ├── guardrails/               # Safety and governance rules
│   └── context-packs/            # Framework-specific context
│       ├── angular/              # Angular-specific context
│       ├── nestjs/               # NestJS-specific context (coming soon)
│       └── ci-cd/                # CI/CD patterns (coming soon)
├── scripts/
│   ├── build/                    # Build-time automation
│   └── setup/                    # One-time setup scripts
├── planning/                     # Project planning and roadmap
└── docs/
    ├── getting-started.md        # Installation and first steps
    ├── ai-best-practices.md      # Responsible AI usage guidelines
    ├── toolkit-usage.md          # Using prompts, agents, evaluators
    ├── contributing-guidelines.md # Detailed contribution workflow
    ├── governance-model.md       # Quality and review process
    └── architecture.md           # System design overview
```
```shell
# Search for tools (prompts, agents, etc.)
hit search [query]

# Install a tool (interactive or with --path)
hit install <tool> [--path <path>]

# List all installed tools
hit list

# Validate local setup
hit doctor

# Validate and submit a new tool (creates GitHub issue)
hit contribute <type> <path>

# View usage analytics
hit stats
```
New in v1.0.11: The `contribute` command now automatically validates your contribution and creates a GitHub issue with detailed feedback!
For complete CLI documentation, see CLI Reference.
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
```shell
# 1. Make changes with conventional commits
pnpm commit
# Interactive prompt enforces: feat:, fix:, docs:, etc.

# 2. Submit contribution (validates and creates GitHub issue)
hit contribute <type> <path>

# 3. Create PR once validation passes
```
Conventional Commits:

- `feat:` New features (minor version bump)
- `fix:` Bug fixes (patch version bump)
- `docs:` Documentation changes
- `refactor:` Code refactoring
- `test:` Test updates
All code must include TypeDoc comments above functions (no inline comments) and follow TypeScript strict mode.
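For example, a small helper documented in the required TypeDoc style (the function itself is illustrative, not part of the codebase):

```typescript
/**
 * Builds the `<type>/<id>` identifier the CLI uses to look up a tool.
 *
 * @param type - Tool category, e.g. "prompt" or "agent"
 * @param id - Unique tool identifier within that category
 * @returns The combined tool identifier, e.g. "prompt/code-review-ts"
 */
function formatToolId(type: string, id: string): string {
  return `${type}/${id}`;
}
```

Note that all documentation lives in the TypeDoc block above the function; inline comments inside the body are disallowed by the project's style rules.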
- Getting Started - Installation, setup, and your first prompt
- Accountability Framework - Developer-first AI principles and responsible usage
- AI Best Practices - Responsible AI usage and prompt engineering
- Toolkit Usage - Using prompts, agents, evaluators, and guardrails
- Contributing Guidelines - Detailed contribution workflow
- Governance Model - Quality review and release process
- Architecture - System design and technical overview
- Build System: Nx monorepo
- Language: TypeScript (strict mode)
- Package Manager: pnpm
- CLI Framework: Commander.js
- Prompts: Inquirer.js
- Styling: Chalk
- YAML Parsing: yaml
- Testing: Jest
MIT License - see LICENSE for details
Human-in-the-Loop, made with ♥ by codewizwit. Build with care. Ship with purpose.
For issues and feature requests, please use GitHub Issues.