My playground for exploring responsible AI. This repo is a centralized AI toolkit exploring ideas like governance, open source, accountability, dev experience, standardization and best practices.


🌭 Human in the Loop


An open-source repository with a governance layer and CLI for responsible AI tools.

A developer-friendly toolkit for exploring governance of AI prompts, agents, context packs, and evaluators with standardization, quality control, and visibility.


Why This Exists

AI productivity tools are transforming how we build software, but without centralized governance, we risk inconsistency, duplication, and quality drift. Human in the Loop solves this by creating a single source of truth for all AI tooling.

The problem:

  • Prompts and agents scattered across repos, Slack threads, and local files
  • No standardization means teams reinvent the wheel with different patterns
  • Zero visibility into what works, what's adopted, or what needs improvement
  • Quality concerns with no review process for AI tools

The solution:

  • Single repository where all AI productivity tools are discoverable, versioned, and governed
  • Standardized contribution workflow that ensures quality and consistency
  • Clear metrics on tool adoption and effectiveness
  • Developer-friendly CLI that makes it easy to find, install, and use AI tools

Developer-First AI

AI tools should enhance developers, not replace them. Every tool in this repository is evaluated through our Developer-First AI Accountability Framework to ensure we're building technology that makes everyone better off - developers, teams, and organizations.

Core Principles

We believe AI productivity tools should:

  • ✨ Enhance developer happiness and creativity, not replace judgment
  • 📚 Support learning and growth, not create dependency
  • 🤝 Strengthen collaboration and trust, not erode human connection
  • 🔍 Maintain transparency and control, not obscure decision-making

Every prompt, agent, and workflow is designed with these principles in mind. When you use tools from this library, you're not just getting automation - you're getting carefully considered solutions that preserve what makes software development fulfilling while removing tedious friction.

Read the full framework: ACCOUNTABILITY.md


Quick Start

Install the CLI:

npm install -g @human-in-the-loop/cli

Or use npx:

npx hit --version

Search for tools:

hit search "code review"

Output:

πŸ” Searching for: "code review"

Found 2 tools:

1. prompt/code-review-ts
   TypeScript code review with best practices
   Version: 1.2.0
   Tags: typescript, code-review

2. prompt/code-review-security
   Security-focused code review
   Version: 2.0.0
   Tags: security, code-review

💡 Tip: Use hit install <type>/<id> to install a tool

Install a prompt:

hit install prompt/code-review-ts

The CLI prompts for an installation location (or pass --path for a non-interactive install):

📦 Installing prompt/code-review-ts...

  → Looking up tool...
  → Copying tool files...
  → Registering installation...

✓ Successfully installed Code Review TypeScript (v1.2.0)
  → Installed to: ~/.claude/tools/prompt/code-review-ts

What's Inside

📚 Prompt Library (/lib/prompts)

Production-ready prompts organized by use case, versioned and quality-assured. Each prompt includes metadata, usage examples, and expected outputs.
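
As an illustration, a prompt's metadata might look like the following sketch, built from the fields the CLI surfaces in search results (the exact field names are assumptions; see the library entries for the actual schema):

```yaml
# Hypothetical prompt metadata (illustrative only; field names are assumptions)
id: code-review-ts
name: Code Review TypeScript
version: 1.2.0
description: TypeScript code review with best practices
tags:
  - typescript
  - code-review
```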

🤖 Agent Registry (/lib/agents)

Catalog of AI agents with their configurations, capabilities, and integration guides. (Coming soon - framework in place)

🎯 Context Packs (/lib/context-packs)

Framework-specific knowledge bases that provide agents with deep technical context:

  • Angular: Component patterns, routing, state management, testing ✅
  • NestJS: Module structure, dependency injection, middleware (coming soon)
  • CI/CD: Pipeline patterns, deployment strategies (coming soon)

✅ Evaluators (/lib/evaluators)

Quality assurance tools that validate AI outputs against defined criteria. (Coming soon - framework in place)

πŸ›‘οΈ Guardrails (/lib/guardrails)

Safety mechanisms that enforce responsible AI usage. (Coming soon - framework in place)

⚡ CLI Tool (/src/cli)

Developer-friendly command-line interface for discovering, installing, and managing AI tools.

πŸ›‘οΈ Governance (/src/governance)

Contribution validation and quality assurance tooling that ensures all contributions meet project standards.

🤝 Accountability Framework (/ACCOUNTABILITY.md)

Developer-first principles and practices that guide how we build, evaluate, and deploy AI tools responsibly - ensuring they enhance developers instead of replacing them.


Repository Structure

human-in-the-loop/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ cli/                     # TypeScript CLI tool
β”‚   └── governance/              # Quality validation and checks
β”‚       └── checks/              # Validation scripts
β”œβ”€β”€ lib/
β”‚   β”œβ”€β”€ prompts/                 # Shared prompt library
β”‚   β”œβ”€β”€ agents/                  # Agent definitions and configs
β”‚   β”œβ”€β”€ evaluators/              # Quality evaluation tools
β”‚   β”œβ”€β”€ guardrails/              # Safety and governance rules
β”‚   └── context-packs/           # Framework-specific context
β”‚       β”œβ”€β”€ angular/             # Angular-specific context
β”‚       β”œβ”€β”€ nestjs/              # NestJS-specific context (coming soon)
β”‚       └── ci-cd/               # CI/CD patterns (coming soon)
β”œβ”€β”€ scripts/
β”‚   β”œβ”€β”€ build/                   # Build-time automation
β”‚   └── setup/                   # One-time setup scripts
β”œβ”€β”€ planning/                    # Project planning and roadmap
└── docs/
    β”œβ”€β”€ getting-started.md       # Installation and first steps
    β”œβ”€β”€ ai-best-practices.md     # Responsible AI usage guidelines
    β”œβ”€β”€ toolkit-usage.md         # Using prompts, agents, evaluators
    β”œβ”€β”€ contributing-guidelines.md # Detailed contribution workflow
    β”œβ”€β”€ governance-model.md      # Quality and review process
    └── architecture.md          # System design overview

CLI Commands

# Search for tools (prompts, agents, etc.)
hit search [query]

# Install a tool (interactive or with --path)
hit install <tool> [--path <path>]

# List all installed tools
hit list

# Validate local setup
hit doctor

# Validate and submit a new tool (creates GitHub issue)
hit contribute <type> <path>

# View usage analytics
hit stats

New in v1.0.11: The contribute command now automatically validates your contribution and creates a GitHub issue with detailed feedback!

For complete CLI documentation, see CLI Reference.


Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Quick Contribution Workflow

# 1. Make changes with conventional commits
pnpm commit
# Interactive prompt enforces: feat:, fix:, docs:, etc.

# 2. Submit contribution (validates and creates GitHub issue)
hit contribute <type> <path>

# 3. Create PR once validation passes

Conventional Commits:

  • feat: - New features (minor version bump)
  • fix: - Bug fixes (patch version bump)
  • docs: - Documentation changes
  • refactor: - Code refactoring
  • test: - Test updates

All code must include TypeDoc comments above functions (no inline comments) and follow TypeScript strict mode.
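
For example, a contribution following the TypeDoc-above-functions convention might look like this sketch (the function itself is hypothetical, not part of the CLI's API):

```typescript
/**
 * Builds the canonical "<type>/<id>" tool identifier shown in search results,
 * e.g. "prompt/code-review-ts".
 *
 * @param type - The tool category, such as "prompt" or "agent"
 * @param id - The tool's unique identifier within that category
 * @returns The combined "<type>/<id>" identifier
 */
export function formatToolId(type: string, id: string): string {
  return `${type}/${id}`;
}
```

Note the documentation lives entirely in the TypeDoc block above the function, with no inline comments in the body.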


Documentation

Full guides live in the docs/ directory: getting started, AI best practices, toolkit usage, contributing guidelines, the governance model, and architecture.


Technology Stack

  • Build System: Nx monorepo
  • Language: TypeScript (strict mode)
  • Package Manager: pnpm
  • CLI Framework: Commander.js
  • Prompts: Inquirer.js
  • Styling: Chalk
  • YAML Parsing: yaml
  • Testing: Jest

License

MIT License - see LICENSE for details


About

Human-in-the-Loop with ♥ by codewizwit. Build with care. Ship with purpose.

For issues and feature requests, please use GitHub Issues.
