Didim Agent CLI


Didim Agent CLI Screenshot

Didim Agent CLI is an open-source AI agent that brings the power of multiple AI providers directly into your terminal. Built on the Gemini CLI foundation, it supports Gemini, Claude, OpenAI, and OpenAI-compatible (vLLM, Ollama, LM Studio) endpoints through a unified provider adapter architecture, giving you the most direct path from your prompt to your preferred model.

Learn all about Didim Agent CLI in our documentation.

🚀 Why Didim Agent CLI?

  • 🧠 Multi-provider support: Use Gemini, Claude, OpenAI, or local models (vLLM/Ollama) — switch providers and models with /model or /auth login.
  • 🔧 Built-in tools: Google Search grounding, file operations, shell commands, web fetching — all tools work across providers.
  • 🔌 Extensible: MCP (Model Context Protocol) support with deterministic tool naming and sLM-compatible parameter normalization.
  • 🤖 Sub-agent support: Sub-agents work with all providers via the provider-independent llm* pipeline.
  • 💻 Terminal-first: Designed for developers who live in the command line.
  • 🛡️ Open source: Apache 2.0 licensed.

📦 Installation

Prerequisites

  • Node.js version 20 or higher
  • macOS, Linux, or Windows

Quick Install

Run instantly with npx

# Using npx (no installation required)
npx @didim365/agent-cli

Install globally with npm

npm install -g @didim365/agent-cli

Install globally with Homebrew (macOS/Linux)

brew install gemini-cli

Install globally with MacPorts (macOS)

sudo port install gemini-cli

Install with Anaconda (for restricted environments)

# Create and activate a new environment
conda create -y -n gemini_env -c conda-forge nodejs
conda activate gemini_env

# Install Didim Agent CLI globally via npm (inside the environment)
npm install -g @didim365/agent-cli

Release Cadence and Tags

See Releases for more details.

Preview

New preview releases are published each week on Tuesdays at 23:59 UTC. These releases have not been fully vetted and may contain regressions or other outstanding issues. Please help us test them by installing with the preview tag.

npm install -g @didim365/agent-cli@preview

Stable

New stable releases are published each week on Tuesdays at 20:00 UTC. Each stable release is the full promotion of the previous week's preview release, plus any bug fixes and validations. Use the latest tag.

npm install -g @didim365/agent-cli@latest

Nightly

New nightly releases are published each day at 00:00 UTC and contain all changes from the main branch at the time of release. Assume there are pending validations and outstanding issues. Use the nightly tag.

npm install -g @didim365/agent-cli@nightly
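When scripting installs across the three channels above, a small wrapper can map a channel name to its npm dist-tag. This is an illustrative sketch, not part of the CLI; install_didim is a hypothetical helper name, and it only prints the command it would run:

```shell
# Hypothetical helper: map a release channel to the matching npm dist-tag.
# It echoes the install command instead of executing it.
install_didim() {
  case "$1" in
    stable)  tag="latest" ;;
    preview) tag="preview" ;;
    nightly) tag="nightly" ;;
    *) echo "unknown channel: $1" >&2; return 1 ;;
  esac
  echo "npm install -g @didim365/agent-cli@${tag}"
}

install_didim preview
# → npm install -g @didim365/agent-cli@preview
```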

📋 Key Features

Code Understanding & Generation

  • Query and edit large codebases
  • Generate new apps from PDFs, images, or sketches using multimodal capabilities
  • Debug issues and troubleshoot with natural language

Automation & Integration

  • Automate operational tasks like querying pull requests or handling complex rebases
  • Use MCP servers to connect new capabilities, including media generation with Imagen, Veo or Lyria
  • Run non-interactively in scripts for workflow automation

Advanced Capabilities

  • Ground your queries with built-in Google Search for real-time information
  • Conversation checkpointing to save and resume complex sessions
  • Custom context files (AGENTS.md) to tailor behavior for your projects

GitHub Integration

Integrate the CLI directly into your GitHub workflows with the Gemini CLI GitHub Action:

  • Pull Request Reviews: Automated code review with contextual feedback and suggestions
  • Issue Triage: Automated labeling and prioritization of GitHub issues based on content analysis
  • On-demand Assistance: Mention @gemini-cli in issues and pull requests for help with debugging, explanations, or task delegation
  • Custom Workflows: Build automated, scheduled and on-demand workflows tailored to your team's needs

🔐 Authentication Options

Choose the authentication method that best fits your needs. You can also use /auth login inside the CLI to interactively select a provider and enter your API key.

Note: Both DIDIM_* and GEMINI_* environment variable prefixes are supported. The CLI uses a central resolveEnv() utility that checks DIDIM_* first, then falls back to GEMINI_* for backward compatibility.
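The fallback order described above can be modeled in shell. This is only an illustrative sketch of the documented behavior, not the CLI's actual resolveEnv() implementation:

```shell
# Sketch of the documented lookup order: DIDIM_<NAME> wins,
# GEMINI_<NAME> is the backward-compatible fallback.
resolve_env() {
  val="$(printenv "DIDIM_$1")"
  if [ -n "$val" ]; then
    echo "$val"
  else
    printenv "GEMINI_$1"
  fi
}

export GEMINI_API_KEY="legacy-key"
export DIDIM_API_KEY="new-key"
resolve_env API_KEY   # → new-key

unset DIDIM_API_KEY
resolve_env API_KEY   # → legacy-key
```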

Option 1: Login with Google (Gemini)

✨ Best for: Individual developers and Gemini Code Assist license holders.

didim
# Select "Login with Google" and follow the browser authentication flow

For organization accounts, set your Google Cloud project first:

export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
didim

Option 2: Gemini API Key

✨ Best for: Developers who need specific Gemini model control.

export GEMINI_API_KEY="YOUR_API_KEY"
didim

Option 3: Claude (Anthropic)

✨ Best for: Developers who prefer Claude models (Opus, Sonnet, Haiku).

export ANTHROPIC_API_KEY="YOUR_API_KEY"
didim

Option 4: OpenAI

✨ Best for: Developers who prefer OpenAI models (GPT-4.1, o3, o4-mini).

export OPENAI_API_KEY="YOUR_API_KEY"
didim

Option 5: Vertex AI

✨ Best for: Enterprise teams and production workloads.

export GOOGLE_API_KEY="YOUR_API_KEY"
export GOOGLE_GENAI_USE_VERTEXAI=true
didim

Option 6: OpenAI-compatible (vLLM, Ollama, LM Studio, GPUStack)

✨ Best for: Local/self-hosted models and privacy-sensitive environments.

Using /auth login (Recommended):

didim
# Run /auth login, select "sLM (OpenAI-compatible endpoint)"
# Follow the 4-step wizard: URL → Server Type → Credentials → Advanced

Using environment variables:

export ENABLE_MULTI_PROVIDER=true
export LLM_PROVIDER=openai-compatible
export LLM_BASE_URL="http://localhost:8000/v1"
export LLM_MODEL="your-model-name"
didim

Limiting tools for context-constrained sLM:

Add to ~/.didim/settings.json:

{
  "tools": {
    "core": [
      "read_file",
      "search_file_content",
      "glob",
      "replace",
      "write_file",
      "run_shell_command"
    ]
  }
}

For detailed setup for each provider, see the authentication guide and provider guide.

🚀 Getting Started

Basic Usage

Start in current directory

didim

Include multiple directories

didim --include-directories ../lib,../docs

Use specific model

didim -m gemini-2.5-flash            # Gemini
didim -m claude-sonnet-4-5-20250929  # Claude
didim -m gpt-4.1                     # OpenAI

Non-interactive mode for scripts

Get a simple text response:

didim -p "Explain the architecture of this codebase"

For more advanced scripting, including how to parse JSON and handle errors, use the --output-format json flag to get structured output:

didim -p "Explain the architecture of this codebase" --output-format json

For real-time event streaming (useful for monitoring long-running operations), use --output-format stream-json to get newline-delimited JSON events:

didim -p "Run tests and deploy" --output-format stream-json
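Since each line of stream-json output is a single JSON event, a script can react to events as they arrive. The sketch below substitutes printf-generated sample lines for real CLI output, and the event shape ({"type": ...}) is an assumption, not a documented schema:

```shell
# Stand-in for: didim -p "Run tests and deploy" --output-format stream-json
emit_events() {
  printf '%s\n' '{"type":"tool_call"}' '{"type":"result"}'
}

# Consume one newline-delimited JSON event per iteration.
emit_events | while IFS= read -r line; do
  printf 'event: %s\n' "$line"
done
# → event: {"type":"tool_call"}
# → event: {"type":"result"}
```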

Quick Examples

Start a new project

cd new-project/
didim
> Write me a Discord bot that answers questions using a FAQ.md file I will provide

Analyze existing code

git clone https://github.com/user/project
cd project
didim
> Give me a summary of all of the changes that went in yesterday

📚 Documentation

Getting Started

Core Features

Tools & Extensions

Advanced Topics

vLLM Quick Start

export ENABLE_MULTI_PROVIDER=true
export LLM_PROVIDER=openai-compatible
export LLM_BASE_URL="http://localhost:8000/v1"
export LLM_MODEL="Qwen/Qwen2.5-7B-Instruct"
didim

Or use the interactive wizard:

didim
# /auth login → sLM → Enter URL → Select vLLM → Enter model name

Troubleshooting & Support

  • Troubleshooting Guide - Common issues and solutions.
  • FAQ - Frequently asked questions.
  • Use /bug command to report issues directly from the CLI.

Using MCP Servers

Configure MCP servers in ~/.didim/settings.json (or ~/.gemini/settings.json for backward compatibility) to extend the CLI with custom tools:

> @github List my open pull requests
> @slack Send a summary of today's commits to #dev channel
> @database Run a query to find inactive users

MCP tool naming is deterministic — tools are registered with consistent names regardless of server discovery order. Tool parameters are automatically normalized via schema-based coercion, with enhanced tolerance for sLM (small Language Model) tool call formatting.
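Because the CLI is built on the Gemini CLI foundation, MCP servers are declared under an mcpServers key in settings.json. A minimal entry might look like the following; the server name, command, and token value are placeholders to adapt to your setup:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_TOKEN"
      }
    }
  }
}
```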

See the MCP Server Integration guide for setup instructions.

🤝 Contributing

We welcome contributions! Didim Agent CLI is fully open source (Apache 2.0), and we encourage the community to:

  • Report bugs and suggest features.
  • Improve documentation.
  • Submit code improvements.
  • Share your MCP servers and extensions.

See our Contributing Guide for development setup, coding standards, and how to submit pull requests.

Check our Official Roadmap for planned features and priorities.

📖 Resources

Uninstall

See the Uninstall Guide for removal instructions.

📄 Legal


Built on Gemini CLI by Google — extended by Didim365
