An interactive terminal coding assistant that talks to multiple LLM providers. Supports DeepSeek (with chain-of-thought display), OpenRouter, and any OpenAI-compatible endpoint. File operations are handled through function calling — the AI reads, creates, and edits files automatically as part of the conversation.
- `deepseek-reasoner` (default) — reasoning model with visible chain-of-thought
- `deepseek-chat` — faster, cheaper general-purpose model
- `google/gemini-2.5-pro-preview`
- `anthropic/claude-sonnet-4`
- `anthropic/claude-3.7-sonnet`
- `openai/gpt-4.1`
- `openai/o3-mini-high`
- ...and any other model available on OpenRouter
- Multi-provider support with runtime model switching (`/model set <name>`)
- Automatic file operations — mention a file in conversation and the AI reads it
- Recursive task completion — keeps iterating (up to 10 rounds) until the task is done, with linter feedback between iterations
- Multi-language linting — Python (flake8), JavaScript (ESLint), TypeScript (tsc + ESLint)
- MCP integration — connects to Model Context Protocol servers like Context7 for live documentation lookup
- Rich terminal UI — streaming output, color-coded feedback, model/provider indicator in the prompt
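The recursive completion loop above can be sketched roughly as follows. This is an illustrative outline, not the project's actual code: `call_model`, `run_linters`, and `MAX_ROUNDS` are hypothetical names standing in for the real implementation.

```python
# Hypothetical sketch of the iterate-until-done loop with linter feedback.
MAX_ROUNDS = 10  # the README's cap of 10 rounds

def run_linters(changed_files):
    """Stand-in linter: return a list of issue strings (empty = clean)."""
    return []

def call_model(messages):
    """Stand-in model call: return (reply_text, changed_files, done_flag)."""
    return "done", [], True

def complete_task(task):
    messages = [{"role": "user", "content": task}]
    for round_no in range(1, MAX_ROUNDS + 1):
        reply, changed, done = call_model(messages)
        issues = run_linters(changed)
        if done and not issues:
            return reply, round_no
        # Feed linter output back so the next round can fix it.
        messages.append(
            {"role": "user", "content": "Linter output:\n" + "\n".join(issues)}
        )
    return reply, MAX_ROUNDS

result, rounds = complete_task("add a CLI flag")
```

The key design point is that linter output re-enters the conversation as a user message, so each round sees the remaining issues.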
```bash
git clone https://github.com/FaustoS88/AI_Engineer
cd AI_engineer
cp .env.example .env
# add your API keys to .env
```

Install dependencies (pick one):
```bash
# uv (faster)
uv venv && uv run ai-engineer.py

# pip
pip install -r requirements.txt && python3 ai-engineer.py
```

For JavaScript/TypeScript linting support:

```bash
npm install -g eslint typescript
```

Configure your API keys:

```bash
# .env
DEEPSEEK_API_KEY=your_key_here
OPENROUTER_API_KEY=your_key_here
```

You only need the key for the provider(s) you actually use.
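Provider selection from those keys could look like the sketch below. The variable names match the `.env` example above; the `pick_provider` helper itself is illustrative, not the project's API.

```python
# Minimal sketch: choose a provider based on which API key is present.
import os

def pick_provider():
    if os.environ.get("DEEPSEEK_API_KEY"):
        return "deepseek"
    if os.environ.get("OPENROUTER_API_KEY"):
        return "openrouter"
    raise RuntimeError("No API key found; add one to .env")

os.environ["DEEPSEEK_API_KEY"] = "your_key_here"  # normally loaded from .env
provider = pick_provider()
print(provider)  # -> deepseek
```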
| Command | Description |
|---|---|
| `/model list` | Show all available models |
| `/model current` | Show the current model |
| `/model set <name>` | Switch models |
| `/mcp list` | Show MCP server status |
| `/mcp enable <server>` | Enable an MCP server |
| `/mcp disable <server>` | Disable an MCP server |
| `/mcp reload` | Reload the MCP config without a restart |
| `/add <path>` | Preload a file or folder into context |
| `exit` / `quit` | End the session |
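A dispatcher for the `/model` commands above could be sketched like this. The parsing logic and `state` dict are hypothetical; only the command names come from the table.

```python
# Illustrative slash-command parser; not the project's actual dispatcher.
def handle(line, state):
    parts = line.strip().split()
    if parts[:2] == ["/model", "list"]:
        return ", ".join(state["models"])
    if parts[:2] == ["/model", "current"]:
        return state["current"]
    if parts[:2] == ["/model", "set"] and len(parts) == 3:
        if parts[2] not in state["models"]:
            return f"unknown model: {parts[2]}"
        state["current"] = parts[2]
        return f"switched to {parts[2]}"
    return "unknown command"

state = {"models": ["deepseek-reasoner", "deepseek-chat"],
         "current": "deepseek-reasoner"}
print(handle("/model set deepseek-chat", state))  # switched to deepseek-chat
print(handle("/model current", state))            # deepseek-chat
```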
MCP servers are configured in `mcp.config.json`. Example with Context7 (library docs) and Brave Search:
```json
{
  "servers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"],
      "description": "Up-to-date library documentation",
      "enabled": true
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "${BRAVE_API_KEY}" },
      "description": "Brave Search for web queries",
      "enabled": true
    }
  }
}
```

Servers are lazy-initialized and hot-reloadable via `/mcp reload`.
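The `"${BRAVE_API_KEY}"` placeholder suggests environment-variable substitution when the config is loaded. A sketch of how that expansion might work (the `expand_env` helper is an assumption, not the project's code):

```python
# Hypothetical expansion of ${VAR} placeholders from the environment.
import os
import re

def expand_env(value):
    # Replace each ${NAME} with os.environ["NAME"], or "" if unset.
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: os.environ.get(m.group(1), ""), value)

os.environ["BRAVE_API_KEY"] = "abc123"
server_env = {"BRAVE_API_KEY": "${BRAVE_API_KEY}"}
resolved = {k: expand_env(v) for k, v in server_env.items()}
print(resolved["BRAVE_API_KEY"])  # abc123
```

Keeping secrets as placeholders in `mcp.config.json` means the file can be committed without leaking keys.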
The AI can call these automatically during conversation:
- `read_file(file_path)` — read a single file
- `read_multiple_files(file_paths)` — batch read
- `create_file(file_path, content)` — create or overwrite
- `create_multiple_files(files)` — scaffold multiple files at once
- `edit_file(file_path, original_snippet, new_snippet)` — precise snippet replacement
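"Precise snippet replacement" typically means the original snippet must match exactly (and ideally uniquely) before it is swapped. A minimal sketch of `edit_file` under that assumption — the uniqueness check is an inference, not confirmed project behavior:

```python
# Illustrative edit_file: replace a snippet that occurs exactly once.
from pathlib import Path

def edit_file(file_path, original_snippet, new_snippet):
    text = Path(file_path).read_text()
    count = text.count(original_snippet)
    if count != 1:
        raise ValueError(f"snippet occurs {count} times; need exactly 1")
    Path(file_path).write_text(text.replace(original_snippet, new_snippet))

p = Path("demo.py")
p.write_text("greeting = 'hi'\n")
edit_file("demo.py", "'hi'", "'hello'")
print(p.read_text().strip())  # greeting = 'hello'
```

Requiring a unique match keeps an ambiguous snippet from silently editing the wrong location.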
- **API key not found** — check that `.env` exists and has the right key name.
- **Model not available** — run `/model list` to see what's configured, and confirm you have the API key for that provider.
- **Import errors** — run `uv sync` or `pip install -r requirements.txt`.
- **JS/TS linting not working** — make sure `eslint` and `typescript` are installed globally (`npm install -g eslint typescript`). See `MULTI_LANGUAGE_SETUP.md` for details.
MIT — see `LICENSE`.
