Transform various AI CLI tools (such as qodercli, cursor, claude, etc.) into a unified OpenAI-compatible HTTP API server with a modern web management interface.
- 🚀 OpenAI-Compatible API: Fully compatible with OpenAI API endpoints for seamless integration into existing applications
- ⚖️ Smart Load Balancing: Support multiple CLI tools simultaneously with automatic round-robin load balancing
- 🎯 Flexible Control: Manually specify which CLI tool to use via the `X-CLI-Tool` header, or let the system auto-select
- 🔧 Multi-CLI Support: Compatible with qodercli, cursor, claude, and other AI CLI tools
- 🌊 Streaming/Non-Streaming: Support both streaming and non-streaming output modes
- 🎨 Modern UI: Beautiful management interface built with React + Material-UI
- ⚙️ Configuration Persistence: Config files stored in user home directory with automatic save/load
- 🔐 User Authentication: JWT-based authentication with user management system
- 🎭 Cyberpunk Theme: Modern, animated cyberpunk-style UI with neon effects
- Go 1.21+
- Node.js 18+
- CLI tools you want to use (e.g., qodercli, cursor-cli, etc.)
```bash
# Clone the repository
git clone https://github.com/vibe-coding-labs/xx-cli-to-api-server.git
cd xx-cli-to-api-server

# Install Go dependencies
go mod download

# Build the binary
go build -o xx-cli-to-api

# Install to system PATH (recommended)
./xx-cli-to-api install

# Or run directly (without installation)
go run main.go serve
```

Benefits of installing to system PATH:
- ✅ Run `xx-cli-to-api` commands from anywhere
- ✅ No need to specify the full path
- ✅ Supports macOS, Linux, and Windows
For detailed installation options and troubleshooting, see INSTALLATION.md.
To uninstall:
```bash
xx-cli-to-api uninstall
```

To set up the web frontend:

```bash
cd web

# Install dependencies
npm install

# Development mode
npm run dev

# Production build
npm run build
```

To run the server in the foreground:

```bash
# Use default port 25486
./xx-cli-to-api serve

# Specify custom port
./xx-cli-to-api serve -p 8080
```

To manage the server in the background:

```bash
# Start in background using default port 25486
xx-cli-to-api start

# Start in background with custom port
xx-cli-to-api start -p 8080

# Check server status
xx-cli-to-api status

# Stop the server
xx-cli-to-api stop

# View real-time logs
tail -f ~/.xx-cli-to-api/xx-cli-to-api.log
```

Advantages of background mode:
- ✅ Daemon mode - continues running after terminal is closed
- ✅ Colorful log output for easy debugging
- ✅ Automatic logging to file
- ✅ Complete process management (start/stop/status)
After the server starts:
- API endpoint: `http://localhost:25486/v1/chat/completions`
- Web interface: `http://localhost:25486` (production) or `http://localhost:3000` (development)
For detailed start/stop command usage, see START_STOP_COMMANDS.md.
On first run, you'll be prompted to create an admin user:
- Visit `http://localhost:25486` in your browser
- You'll be redirected to the setup page
- Fill in the admin username and password
- Click "Create Admin User" to complete setup
You can also manage users from the command line:

```bash
# Create admin user
xx-cli-to-api user create <username>

# List all users
xx-cli-to-api user list

# Delete a user
xx-cli-to-api user delete <username>
```

To configure CLI tools:

- Log in to the web interface
- Navigate to "CLI Settings" page
- Click "Add Tool" to add your installed CLI tools
- Click "Refresh Status" to detect installations
- Enable the installed tools
Multiple CLI tools can be enabled simultaneously for load balancing!
To generate an API token:

- Go to the "Documentation" page
- Find the "API Token" section
- Click "Generate API Token"
- Copy and save your token securely
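A common pattern is to keep the token out of your code, for example in an environment variable (a small sketch; the variable name XX_CLI_TO_API_TOKEN is purely illustrative, not something the server defines):

```python
import os

# Read the API token from the environment instead of hard-coding it.
API_TOKEN = os.environ["XX_CLI_TO_API_TOKEN"]

HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_TOKEN}",
}
```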
On the "CLI Test" page, choose:
- Auto Load Balancing: System automatically distributes requests among all enabled CLI tools
- Manual Selection: Specify a particular CLI tool to use
```bash
# Auto load balancing
curl -X POST http://localhost:25486/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": false
  }'

# Manually specify CLI tool (using X-CLI-Tool header)
curl -X POST http://localhost:25486/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "X-CLI-Tool: qodercli" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": true
  }'
```
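Because the endpoint is OpenAI-compatible, streamed responses can also be consumed with any OpenAI-style client. A minimal Python sketch using the official openai package (v1+), assuming the server emits standard chat-completion chunks when "stream": true:

```python
from openai import OpenAI

# Point the official OpenAI client at the local server.
client = OpenAI(base_url="http://localhost:25486/v1", api_key="YOUR_API_TOKEN")

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,  # request incremental chunks instead of a single final message
)

for chunk in stream:
    # Each chunk carries a partial delta; content can be None on the final chunk.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```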
All API requests (except the health check and setup endpoints) require authentication using JWT tokens. Request header:

```
Authorization: Bearer YOUR_API_TOKEN
```
POST /v1/chat/completions
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model name (use any OpenAI model name) |
| messages | array | Yes | Message array, same format as OpenAI API |
| stream | boolean | No | Whether to use streaming output, default false |
| max_turns | integer | No | Maximum execution rounds for CLI tool, default 10 |
| workspace | string | No | Specify working directory for CLI tool |
| Header | Required | Description |
|---|---|---|
| Content-Type | Yes | Must be application/json |
| Authorization | Yes | Bearer token for authentication |
| X-CLI-Tool | No | Manually specify which CLI tool to use; auto load-balance if not specified |
| Header | Description |
|---|---|
| X-Used-CLI-Tool | Indicates which CLI tool was actually used |
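The optional max_turns and workspace parameters ride along in the normal JSON request body. A sketch using the requests package (the values shown are purely illustrative):

```python
import requests

resp = requests.post(
    "http://localhost:25486/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_TOKEN",
    },
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Summarize this repository"}],
        "stream": False,
        "max_turns": 5,                   # cap the CLI tool at 5 execution rounds
        "workspace": "/path/to/project",  # working directory for the CLI tool
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```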
Fully compatible with OpenAI API response format:
```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "qodercli",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Response content"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```

All management endpoints require authentication.
| Endpoint | Method | Description |
|---|---|---|
| /api/tools | GET | Get all tools |
| /api/tools/:name | GET | Get single tool |
| /api/tools/:name | PUT | Update tool |
| /api/tools | POST | Add new tool |
| /api/tools/:name | DELETE | Delete tool |
| /api/tools/refresh | POST | Refresh tool status |
| /api/config | GET | Get configuration |
| /api/config | PUT | Update configuration |
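For example, listing the configured tools could look like this (a sketch assuming the management API accepts the same Bearer token as the chat endpoint):

```python
import requests

# Fetch all configured CLI tools from the management API.
resp = requests.get(
    "http://localhost:25486/api/tools",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```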
Home page:

- Project introduction and core features showcase
- How it works explanation
- Quick start guide
- API call examples
CLI Settings page:

- View all configured CLI tools
- Add/delete CLI tools
- Enable/disable tools
- Refresh installation status
- View load balancing status
CLI Test page:

- Test CLI tools online
- Choose auto load balancing or manual tool selection
- Support streaming and non-streaming output
- Real-time response viewing
- Display equivalent API call commands
Documentation page:

- Complete user guide
- API documentation
- Integration examples (Python, JavaScript, etc.)
- Load balancing mechanism explanation
- FAQ section
Settings page:

- User management
- API token generation
- Server configuration
- Security settings
When multiple CLI tools are enabled, the system automatically performs load balancing:
- Round-Robin Strategy: Sequentially distributes requests to different CLI tools
- Health Check: Only installed and enabled tools participate in load balancing
- Failover: If one tool fails, the system tries the next one (optional)
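You can observe the rotation from the client side by reading the X-Used-CLI-Tool response header across several requests (a small sketch; with more than one tool enabled, the header value should alternate):

```python
import requests

HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_TOKEN",
}
BODY = {"model": "gpt-4", "messages": [{"role": "user", "content": "ping"}]}

for i in range(4):
    resp = requests.post(
        "http://localhost:25486/v1/chat/completions",
        headers=HEADERS,
        json=BODY,
        timeout=600,
    )
    # With round-robin balancing, this header should cycle through the enabled tools.
    print(i, "->", resp.headers.get("X-Used-CLI-Tool"))
```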
To use a specific CLI tool, you have two options:
Option 1: Web interface. Select "Manual Tool Selection" mode on the "CLI Test" page.

Option 2: API request header. Add the `X-CLI-Tool` request header to specify the tool name:
```bash
curl -X POST http://localhost:25486/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "X-CLI-Tool: qodercli" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [...]}'
```
Python (openai v1 SDK):

```python
from openai import OpenAI

# Configure client
client = OpenAI(
    base_url="http://localhost:25486/v1",
    api_key="YOUR_API_TOKEN",
)

# Auto load balancing
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)

# Manually specify CLI tool
import requests

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_TOKEN",
    "X-CLI-Tool": "qodercli",
}
response = requests.post(
    "http://localhost:25486/v1/chat/completions",
    headers=headers,
    json={"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]},
)
```

JavaScript:

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
baseURL: 'http://localhost:25486/v1',
apiKey: 'YOUR_API_TOKEN',
});
// Auto load balancing
const response = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello' }],
});
// Manually specify CLI tool
const response2 = await fetch('http://localhost:25486/v1/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': 'Bearer YOUR_API_TOKEN',
'X-CLI-Tool': 'qodercli',
},
body: JSON.stringify({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello' }],
}),
});
```

In the client or editor you want to connect:

- Open the OpenAI-related settings
- Set the API Base URL to `http://localhost:25486/v1`
- Set the API Key to your generated token
Supports any tool that can customize OpenAI endpoints:
- Continue.dev
- Chatbox
- Various ChatGPT clients
- LangChain
- Other frameworks supporting OpenAI API
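For example, LangChain's OpenAI chat wrapper only needs the base URL and API key pointed at this server (a sketch assuming the langchain-openai package; this is not an officially documented integration):

```python
from langchain_openai import ChatOpenAI

# Route LangChain's OpenAI-compatible chat model through the local server.
llm = ChatOpenAI(
    model="gpt-4",
    base_url="http://localhost:25486/v1",
    api_key="YOUR_API_TOKEN",
)

print(llm.invoke("Hello").content)
```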
The configuration file is located at `~/.xx-cli-to-api/config.json`.
Example configuration:
```json
{
  "port": 25486,
  "serverUrl": "http://localhost:25486",
  "tools": [
    {
      "name": "qodercli",
      "displayName": "Qoder CLI",
      "command": "qodercli",
      "enabled": true,
      "installed": true,
      "version": "1.0.0",
      "updatedAt": "2024-01-01T00:00:00Z"
    },
    {
      "name": "cursor",
      "displayName": "Cursor AI",
      "command": "cursor-cli",
      "enabled": true,
      "installed": true,
      "version": "0.9.0",
      "updatedAt": "2024-01-01T00:00:00Z"
    }
  ]
}
```
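Because the file is plain JSON, it can be inspected directly, for example to list which tools are currently enabled (a read-only sketch; the server itself manages this file):

```python
import json
from pathlib import Path

config_path = Path.home() / ".xx-cli-to-api" / "config.json"
config = json.loads(config_path.read_text())

for tool in config["tools"]:
    status = "enabled" if tool["enabled"] else "disabled"
    print(f"{tool['name']} ({tool['command']}): {status}")
```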
Project structure:

```
.
├── main.go                      # Entry point
├── cmd/                         # Cobra commands
│   ├── root.go                  # Root command
│   ├── serve.go                 # Server command (foreground)
│   ├── start.go                 # Background start/stop/status commands
│   ├── tools.go                 # Tool management commands
│   ├── install.go               # Install/uninstall commands
│   └── user.go                  # User management commands
├── api/                         # API layer
│   ├── router.go                # Route configuration
│   ├── handlers.go              # Request handlers
│   ├── chat_handlers.go         # Chat API handlers
│   ├── auth_handlers.go         # Authentication handlers
│   └── auth_middleware.go       # JWT middleware
├── config/                      # Configuration management
│   ├── config.go                # Config manager
│   └── user_manager.go          # User manager
├── models/                      # Data models
│   ├── cli_tool.go              # CLI tool model
│   └── user.go                  # User model
├── service/                     # Business logic
│   └── cli_service.go           # CLI service (with load balancing)
├── utils/                       # Utilities
│   ├── jwt.go                   # JWT token handling
│   └── api_token.go             # API token generation
└── web/                         # Frontend project
    ├── src/
    │   ├── components/          # React components
    │   ├── pages/               # Page components
    │   │   ├── Home.tsx         # Home page
    │   │   ├── CLISettings.tsx  # CLI settings
    │   │   ├── CLITest.tsx      # CLI test
    │   │   ├── Documentation.tsx # Documentation
    │   │   ├── Login.tsx        # Login page
    │   │   ├── Setup.tsx        # Initial setup
    │   │   └── Settings.tsx     # Settings
    │   ├── services/            # API services
    │   ├── contexts/            # React contexts
    │   └── types/               # TypeScript types
    └── package.json
```
Typical use cases:

- Convert local CLI tools into remote API services
- Share AI programming assistants within teams
- Provide standardized AI interfaces for other applications
- Build your own AI code assistant platform
- Unified management and load balancing of multiple AI tools
- Improve availability and stability of AI tools
Q: Why does my CLI tool show as not installed?
A: Please ensure:
- CLI tool is correctly installed on your system
- Command can be run directly in terminal
- Click "Refresh Status" button on CLI Settings page
Q: How many CLI tools can I add?
A: There is theoretically no limit, but we recommend 2-5 tools for optimal performance and a manageable setup.
Q: How do I know which CLI tool handled my request?
A: The server adds an `X-Used-CLI-Tool` header to the response indicating which tool was actually used.
Q: Which AI CLI tools are supported?
A: In theory, any AI CLI tool that follows the required command-line format is supported. Currently tested:
- qodercli
- cursor (command-line version)
- claude-cli
- Other compatible CLI tools
Q: Do I need to reconfigure the tools after restarting the server?
A: No. All configuration is saved in `~/.xx-cli-to-api/config.json` and automatically loaded on restart.
Q: How do I reset the admin password?
A: Delete the user and create a new one:

```bash
xx-cli-to-api user delete admin
xx-cli-to-api user create admin
```

Security features:

- JWT-based authentication
- Bcrypt password hashing
- API token generation
- CORS configuration
- Input validation
- Error handling
For production deployments:

- Use HTTPS
- Configure firewall rules
- Set up rate limiting
- Enable log monitoring
- Regular security updates
- Strong password policies
Contributions are welcome! Please feel free to submit issues or pull requests.
- Fork this repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
For more details, see CONTRIBUTING.md.
This project is licensed under the MIT License. See the LICENSE file for details.
- Gin - High-performance HTTP framework
- Cobra - Powerful CLI framework
- React - User interface library
- Material-UI - React UI component library
- JWT-Go - JWT implementation for Go
- OpenAI - API specification reference
- 📝 Create an Issue
- 📧 Email the maintainers
- 💬 Check project documentation
Made with ❤️ by Vibe Coding Labs