A modern, AI-powered project management CLI that brings intelligent assistance directly to your terminal. Built with Ink for a polished terminal UI, LLPM combines natural language interaction with structured project management, offering seamless GitHub integration and persistent workspace configuration.
Perfect for developers who want to organize multiple projects, interact with GitHub repositories, and leverage AI assistance without leaving the command line.
```bash
npm install -g @britt/llpm
```

After installation, run `llpm` to start the CLI.
Bun blocks postinstall scripts from packages not in its trusted list. To install with Bun, first add `@britt/llpm` to your global `~/.bunfig.toml`:

```toml
[install]
trustedDependencies = ["@britt/llpm"]
```

Then install:

```bash
bun install -g @britt/llpm
```

To run from source instead:

```bash
git clone https://github.com/britt/llpm.git
cd llpm
bun install
bun run start
```

Requirements:

- Node.js 18+ or Bun runtime
- API key for at least one AI provider (OpenAI, Anthropic, Groq, Google Vertex AI, or Cerebras)
- GitHub token (optional, for GitHub integration features)
Full documentation available at: https://britt.github.io/llpm/
- 🤖 Chat with AI assistant (GPT-5.2 by default) with multi-provider support
- 🔄 Dynamic model switching between OpenAI, Anthropic, Groq, Google Vertex AI, and Cerebras
- 💬 Interactive terminal chat interface with real-time input handling
- 🎨 Clean, styled terminal UI built with Ink
- 📝 Persistent chat history per session
- ⚙️ Configurable system prompts stored in user config directory
- 📁 Multi-project support with automatic configuration persistence
- 🔄 New projects become the active project on creation
- 🔄 Easy project switching and management
- 📂 GitHub repository integration for project setup
- 🛠️ LLM tools for natural language project management
- 💾 Configuration stored in `~/.llpm/`
- 🔍 Browse and search your GitHub repositories
- 📋 List repositories with filtering and sorting options
- 🎯 Repository selection for new projects
- 🔗 Direct integration with project management
- 🎓 Dynamic skill loading with automatic system prompt injection
- 📚 20 core skills included: Mermaid diagrams, stakeholder tracking, user story templates, and more
- 🔧 Custom skills support - Create your own reusable instruction sets
- 🤖 AI-aware: Skills are automatically listed in the system prompt with usage guidance
- 🔄 Hot reloading - Changes take effect immediately with `/skills reload`
- `/info` - Show application and current project information
- `/help` - Display all available commands
- `/quit` or `/exit` - Exit the application
- `/clear` - Start a new chat session
- `/project` - Manage projects (add, list, switch, remove)
- `/project-scan` - Scan and analyze project structure
- `/github` - Browse and search GitHub repositories
- `/issue` - Manage GitHub issues
- `/model` - Switch between AI models and view provider status
- `/skills` - Manage skills (list, test, enable, disable, reload)
- `/notes` - Manage project notes
- `/stakeholder` - Manage stakeholders and goals
- `/history` - View chat history
- `/registry` - View model registry information
- `/debug` - Show recent debug logs for troubleshooting
Configure API keys as environment variables. If running from source, copy `.env.example` to `.env`:

```bash
cp .env.example .env
```

```bash
# AI Providers (configure at least one)
OPENAI_API_KEY=your-openai-api-key-here
ANTHROPIC_API_KEY=your-anthropic-api-key-here
GROQ_API_KEY=your-groq-api-key-here
CEREBRAS_API_KEY=your-cerebras-api-key-here
GOOGLE_VERTEX_PROJECT_ID=your-google-cloud-project-id
GOOGLE_VERTEX_REGION=us-central1  # Optional, defaults to us-central1

# Optional integrations
GITHUB_TOKEN=your-github-token-here  # For GitHub features
```

📖 For detailed provider configuration instructions, see Model Providers Documentation
```bash
bun run start          # Start the CLI
bun run start:verbose  # Start with debug logging
```

Run the setup wizard to configure credentials and create an initial project:

```bash
llpm setup
```

To re-run setup from a clean state, pass `--force`:

```bash
llpm setup --force
```

- Start the application with `llpm` (or `bun run start` if running from source)
- Type your message and press Enter to chat with the AI
- Use slash commands (e.g., `/help`) for specific functions
- Use Ctrl+C to exit
Markdown rendering is enabled only when stdout is a TTY and neither `NO_COLOR` nor `CI=true` is set.
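As a rough sketch, that gate can be expressed in TypeScript like this (illustrative only; the exact check in LLPM's source may differ):

```ts
// Sketch of the markdown-rendering gate described above, not LLPM's actual code.
const markdownEnabled: boolean =
  Boolean(process.stdout.isTTY) &&       // stdout must be a real terminal
  process.env.NO_COLOR === undefined &&  // NO_COLOR disables styled output
  process.env.CI !== 'true';             // CI=true also disables rendering
```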
- Set up your first project:

  ```bash
  /project add "My App" "https://github.com/user/my-app" "/path/to/project"
  ```

  The newly created project becomes the active project.

- Or browse GitHub repositories:

  ```bash
  /github list
  # Then use the AI: "Add this repository as a new project"
  ```

- Switch between projects:

  ```bash
  /project switch project-id   # or
  /project switch              # to see available projects to switch to
  /project list                # to list all available projects with details
  ```

- Natural language project management:

  - "What's my current project?"
  - "List all my projects"
  - "Switch to my web app project"
  - "Add my latest GitHub repo as a new project"
  - "Remove the old project"

- Switch between AI models:

  ```bash
  /model switch                # Interactive model selector
  /model switch openai/gpt-4o  # Direct model switch
  /model list                  # Show available models
  /model providers             # Check provider configuration
  ```
You can use either approach:
Slash commands (direct, immediate):
- `/info` - Quick system information
- `/project list` - List all projects
- `/project switch` - Switch between projects
- `/github search typescript` - Search repositories
Natural language (AI-powered, flexible):
- "Show me information about the current setup"
- "What projects do I have available?"
- "Find TypeScript repositories on GitHub"
- "Add my latest repository as a new project called 'Web Dashboard'"
```bash
bun run test
```

- `bun start` - Start the CLI application
- `bun start:verbose` - Start with debug logging enabled
- `bun run dev` - Same as start (development mode)
- `bun run dev:verbose` - Development mode with debug logging
- `bun run test` - Run test suite
- `bun run test:watch` - Run tests in watch mode
- `bun run test:ui` - Run tests with UI
- `bun run test:coverage` - Run tests with coverage report
- `bun run typecheck` - Run TypeScript type checking
- `bun run lint` - Run ESLint
- `bun run format` - Format code with Prettier
Enable verbose debug logging to troubleshoot issues:
```bash
# Using npm scripts
bun start:verbose

# Using flags directly
bun run index.ts --verbose
./index.ts -v
```

Debug logs include:
- Environment validation steps
- Chat message flow
- API call details and responses
- Loading state changes
- Error details
LLPM includes OpenTelemetry support for distributed tracing with Jaeger. This enables comprehensive visibility into:
- User request flows with hierarchical flame graphs
- LLM interactions with token usage and tool calls
- File system operations (config loading, chat history, system prompts)
- Network operations (GitHub API calls)
- Individual tool executions
Quick Start:
```bash
# Start Jaeger
cd docker
docker-compose up -d jaeger

# Enable telemetry (enabled by default)
export LLPM_TELEMETRY_ENABLED=1

# Run LLPM with verbose logging to see trace initialization
bun run index.ts --verbose

# View traces at http://localhost:16686
```

📖 For detailed telemetry setup and usage, see TELEMETRY.md
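For orientation, a minimal OpenTelemetry bootstrap for a Node/Bun app looks roughly like this (an illustrative sketch, not LLPM's actual wiring; the endpoint assumes Jaeger's default OTLP/HTTP port):

```ts
// Illustrative OpenTelemetry setup; LLPM initializes tracing internally.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  serviceName: 'llpm',
  // Jaeger accepts OTLP over HTTP on port 4318 by default.
  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
});

sdk.start();
```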
LLPM stores configuration in `~/.llpm/`:

- `config.json` - Project configurations and current project
- `chat-sessions/` - Persistent chat history by session
- `system_prompt.txt` - Custom system prompt (automatically created on first run)
- `skills/` - Core skills and custom skills (automatically installed on first run)
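A hypothetical shape for `config.json`, expressed as a TypeScript interface (field names are assumed for illustration; the real file may differ):

```ts
// Hypothetical shape of ~/.llpm/config.json, for illustration only.
interface LlpmConfig {
  currentProject: string;  // ID of the active project
  projects: Array<{
    id: string;            // unique project identifier
    name: string;          // display name, e.g. "My App"
    repository?: string;   // GitHub URL, e.g. "https://github.com/user/my-app"
    path?: string;         // local checkout path
  }>;
}
```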
LLPM automatically creates a default system prompt file on first run. You can customize the AI assistant's behavior by editing this file:
```bash
# View the current system prompt
cat ~/.llpm/system_prompt.txt

# Edit the system prompt with your preferred editor
nano ~/.llpm/system_prompt.txt
# or
code ~/.llpm/system_prompt.txt
```

Key features:
- Automatic creation: Default prompt is copied to `~/.llpm/system_prompt.txt` on first install/run
- Idempotent: Existing customizations are preserved during updates
- Real-time loading: Changes take effect on next chat session start
- Fallback: If the file is corrupted or missing, LLPM falls back to the built-in default
The default system prompt focuses on project management, GitHub integration, and provides access to all available tools and commands.
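The fallback behavior can be pictured as a simple read-with-default (a sketch, not LLPM's actual code; `DEFAULT_PROMPT` is a stand-in for the bundled default):

```ts
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

const DEFAULT_PROMPT = '...built-in default prompt...'; // stand-in value

// Sketch of the fallback described above: use the customized file when it is
// readable and non-empty, otherwise fall back to the built-in default.
function loadSystemPrompt(): string {
  try {
    const text = readFileSync(join(homedir(), '.llpm', 'system_prompt.txt'), 'utf8');
    return text.trim().length > 0 ? text : DEFAULT_PROMPT;
  } catch {
    return DEFAULT_PROMPT;
  }
}
```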
LLPM implements the Agent Skills standard, allowing you to create reusable instruction sets that work across multiple AI coding assistants including Claude Code, Cursor, and other compatible tools.
Skills are automatically injected into the system prompt, making the AI aware of when and how to use them. Because LLPM uses the Agent Skills standard, you can share skills between tools or use community skill packs like Superpowers which provides battle-tested workflows for TDD, debugging, code review, and more.
LLPM comes with 20 core skills installed by default in `~/.llpm/skills/`:
| Skill | Description |
|---|---|
| architecture-diagramming | Create architecture diagrams for projects |
| at-risk-detection | Detect at-risk items in projects and issues |
| build-faq-from-issues | Generate FAQ documents from GitHub issues |
| consolidate-notes-summary | Consolidate and summarize project notes |
| context-aware-questions | Generate context-aware clarifying questions |
| dependency-mapping | Map project dependencies and relationships |
| issue-decomposition | Decompose large issues into smaller tasks |
| markdown-formatting | Best practices for markdown document formatting |
| mermaid-diagrams | Create syntactically correct Mermaid diagrams for GitHub |
| prepare-meeting-agenda | Structure effective meeting agendas |
| project-planning | Guide project planning and milestone creation |
| requirement-elicitation | Elicit and refine project requirements |
| research-topic-summarize | Summarize research on technical topics |
| stakeholder-tracking | Track stakeholders and their goals |
| stakeholder-updates | Craft clear stakeholder communications |
| summarize-conversation-thread | Summarize long conversation threads |
| timeline-planning | Plan project timelines and schedules |
| triage-new-issues | Triage and categorize new GitHub issues |
| user | General user interaction skill |
| user-story-template | Write well-formed user stories with acceptance criteria |
Skills are automatically listed in the system prompt, showing the AI when to load them:
You: "I need to create a sequence diagram for the authentication flow"
AI: I'll load the mermaid-diagrams skill to help create a syntactically correct diagram.
[Uses load_skills tool]
AI Tools for Skills:
- `load_skills` - Load one or more skills to augment context
- `list_available_skills` - Discover available skills with optional tag filtering
Slash Commands:
```bash
/skills list            # List all discovered skills and their status
/skills test <name>     # Preview a skill's content and settings
/skills enable <name>   # Enable a skill
/skills disable <name>  # Disable a skill
/skills reload          # Rescan skill directories and reload all skills
/skills reinstall       # Reinstall core skills from bundled directory
```

Create your own skills in `~/.llpm/skills/` (personal) or `.skills/` (project-specific):
```markdown
# ~/.llpm/skills/my-skill/SKILL.md
---
name: my-skill
description: "Brief description of what this skill does"
instructions: "When [condition], [action]"
tags:
- tag1
- tag2
allowed_tools:
- tool1
- tool2
---

# My Skill Instructions

Your markdown instructions here...
```

Frontmatter Fields:

- `name` (required): Unique skill identifier (lowercase, hyphens only)
- `description` (required): What the skill does (max 1024 chars)
- `instructions` (optional): Single-line guidance on when to use this skill
- `tags` (optional): Array of tags for filtering and discovery
- `allowed_tools` (optional): Restrict tool usage when skill is active
- `vars` (optional): Variables for content substitution
- `resources` (optional): Additional files to load
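To see how these constraints fit together, here is a sketch of validating the two required fields (the names and limits come from the list above; the function itself is illustrative, not LLPM's actual parser):

```ts
// Illustrative validation of SKILL.md frontmatter; LLPM's real parser may differ.
function validateFrontmatter(fm: Record<string, unknown>): string[] {
  const errors: string[] = [];

  const name = fm['name'];
  if (typeof name !== 'string' || !/^[a-z0-9]+(-[a-z0-9]+)*$/.test(name)) {
    errors.push('name is required and must be lowercase with hyphens only');
  }

  const description = fm['description'];
  if (typeof description !== 'string' || description.length === 0 || description.length > 1024) {
    errors.push('description is required and must be at most 1024 characters');
  }

  return errors;
}
```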
Skill Locations:
- `~/.llpm/skills/` - Personal skills (shared across all projects)
- `.skills/` or `skills/` - Project-specific skills (not shared)
How Skills Work:
- Skills are scanned on startup and when `/skills reload` is called
- All enabled skills with instructions are injected into the system prompt (see the sketch after this list)
- The AI sees when to load each skill based on the `instructions` field
- When loaded via the `load_skills` tool, the skill's full content is added to context
- Loaded skills can optionally restrict tool usage to an allowed list
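As a rough illustration of that injection step, building the skills section of the system prompt amounts to listing each enabled skill with its "when to use" guidance (a sketch; the format LLPM actually emits may differ):

```ts
interface Skill {
  name: string;
  instructions?: string;
  enabled: boolean;
}

// Sketch of the injection described above: list every enabled skill with its
// usage guidance so the model knows what it can load via load_skills.
function buildSkillsSection(skills: Skill[]): string {
  const lines = skills
    .filter((s) => s.enabled && s.instructions)
    .map((s) => `- ${s.name}: ${s.instructions}`);
  return lines.length > 0
    ? `Available skills (load with the load_skills tool):\n${lines.join('\n')}`
    : '';
}
```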
Example Custom Skill:
```markdown
# ~/.llpm/skills/api-design/SKILL.md
---
name: api-design
description: "Guide for designing RESTful API endpoints following best practices"
instructions: "When designing APIs, creating endpoints, or reviewing API specifications"
tags:
- api
- rest
- design
allowed_tools:
- github
- notes
---

# API Design Skill

## RESTful Principles

- Use nouns for resources (not verbs)
- HTTP methods: GET (read), POST (create), PUT/PATCH (update), DELETE (remove)
...
```

- Components: Terminal UI components built with Ink (`src/components/`)
- Hooks: React hooks for state management (`src/hooks/useChat.ts`)
- Services: LLM integration using Vercel AI SDK (`src/services/llm.ts`)
- Commands: Slash command system (`src/commands/`)
- Tools: LLM function calling tools (`src/tools/`)
- Utils: Configuration and utility functions (`src/utils/`)
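For orientation, components under `src/components/` follow the standard Ink pattern: ordinary React rendered to the terminal. A generic example (not taken from LLPM's source):

```tsx
// Generic Ink example illustrating the component pattern, not LLPM's code.
import React from 'react';
import { render, Box, Text } from 'ink';

function Banner({ project }: { project: string }) {
  return (
    <Box borderStyle="round" paddingX={1}>
      <Text color="cyan">Active project: {project}</Text>
    </Box>
  );
}

render(<Banner project="my-app" />);
```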
- Project Config: Persistent storage in `~/.llpm/config.json`
- Multi-project Support: Each project has ID, name, repository, and path
- GitHub Integration: Repository browsing and search capabilities
- LLM Tools: Function calling for natural language project operations
The AI assistant has access to 59 tools across these categories:
Project Management:
- `get_current_project`, `list_projects`, `add_project`, `set_current_project`, `remove_project`, `update_project`
GitHub Integration:
- `list_github_repos`, `search_github_repos`, `get_github_repo`
- `create_github_issue`, `list_github_issues`, `update_github_issue`, `comment_on_github_issue`, `search_github_issues`, `get_github_issue_with_comments`
- `list_github_pull_requests`, `create_github_pull_request`
Notes:
- `add_note`, `update_note`, `search_notes`, `list_notes`, `get_note`, `delete_note`
Stakeholder Management:
- `add_stakeholder`, `list_stakeholders`, `get_stakeholder`, `update_stakeholder`, `remove_stakeholder`
- `link_issue_to_goal`, `unlink_issue_from_goal`, `generate_coverage_report`, `resolve_conflict`
Project Analysis:
- `scan_project`, `get_project_scan`, `list_project_scans`, `analyze_project_full`
- `get_project_architecture`, `get_project_key_files`, `get_project_dependencies`
- `analyze_project_risks`, `analyze_issue_risks`, `get_at_risk_items`
- `generate_project_questions`, `generate_issue_questions`, `suggest_clarifications`, `identify_information_gaps`
Filesystem:
- `read_project_file`, `list_project_directory`, `get_project_file_info`, `find_project_files`
Web & Screenshots:
- `web_search`, `read_web_page`, `summarize_web_page`
- `take_screenshot`, `check_screenshot_setup`
Shell & System:
- `run_shell_command`, `get_system_prompt`, `ask_user`
Skills:
- `load_skills`, `list_available_skills`
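Tools like these are exposed to the model through the Vercel AI SDK's function-calling interface. A minimal sketch of one such definition, assuming the AI SDK v4-style `tool()` helper with a Zod schema (not LLPM's actual code):

```ts
// Illustrative tool definition in the style of the Vercel AI SDK (v4 API);
// LLPM's real get_current_project tool may be wired differently.
import { tool } from 'ai';
import { z } from 'zod';

const getCurrentProject = tool({
  description: 'Return the currently active project',
  parameters: z.object({}), // no arguments needed
  execute: async () => {
    // In LLPM this would read ~/.llpm/config.json; hardcoded here for the sketch.
    return { id: 'my-app', name: 'My App' };
  },
});
```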
This project uses:
- Bun - JavaScript runtime and package manager
- Ink - React for CLI
- Vercel AI SDK - LLM integration
- Vitest - Testing framework