Call any LLM from the command line via models.dev.
```sh
npm install -g @blankeos/modelcli   # npm
bun install -g @blankeos/modelcli   # or bun
cargo install --path .              # or cargo
```

```sh
# 1. Connect to a provider (any known provider, thanks to models.dev)
modelcli connect

# 2. Browse models and set a default
modelcli models

# 3. Send a prompt
modelcli "What is the meaning of life?"
```

Usage:

```
modelcli [OPTIONS] [PROMPT]
```
| Command | Description |
|---|---|
| `connect` | Connect to a provider (add API key) |
| `models` | Browse and manage models |
| Flag | Description |
|---|---|
| `--model <provider/model-id>` | Model to use (overrides the default) |
| `--stream` | Stream tokens as they arrive |
| `--thinking` | Show thinking/reasoning tokens |
| `--reasoning-effort <level>` | Reasoning effort: `low`, `medium`, or `high` |
| `--format json` | Output raw JSON instead of human-readable text |
```sh
# Use a specific model
modelcli --model openai/gpt-4o "Explain quicksort"

# Stream the response
modelcli --stream "Write a haiku about Rust"

# Enable reasoning
modelcli --thinking --reasoning-effort high "Prove that √2 is irrational"

# JSON output
modelcli --format json "Hello"
```

You can add any OpenAI-compatible provider not listed on models.dev.
1. Add a credential:

   ```sh
   modelcli connect
   # Select "Other (custom provider)" → enter a provider ID and API key
   ```

2. Configure the provider in `~/.config/modelcli.jsonc`.

Then use it like any other model:

```sh
modelcli --model myprovider/my-model "Hello!"
```

The config file is auto-created the first time you add a custom provider. Both `.jsonc` and `.json` are supported, but not both at the same time.
- Credentials and app data: `~/.local/share/modelcli/`
- Custom provider config: `~/.config/modelcli.jsonc`
modelcli lets you pipe LLM calls directly from your terminal, which is perfect for generating commit messages in lazygit (see PR #5389) or powering any other CLI app with AI capabilities. Quickly ask questions, or pipe stdout from other tools to get instant AI-powered responses.
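As one illustration of the piping workflow, a commit-message helper could feed a staged diff straight into modelcli. This sketch is illustrative only: the prompt wording is an assumption, and any lazygit integration would need its own wiring.

```shell
# Illustrative sketch: pipe the staged git diff to the default model
# and use the reply as a draft commit message.
git diff --staged | modelcli "Summarize this diff as a one-line commit message"
```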
Inspired by OpenCode's seamless multi-provider experience and built on models.dev's unified LLM API.
🦀 Made with Rust: a fast, minimal, and intuitive CLI.
Custom provider config example (`~/.config/modelcli.jsonc`):

```jsonc
{
  "provider": {
    "myprovider": {
      "name": "My AI Provider",
      "baseURL": "https://api.myprovider.com/v1",
      "models": {
        "my-model": {
          "name": "My Model",  // optional display name
          "reasoning": false,  // optional, default false
          "context": 200000,   // optional context window
          "output": 65536      // optional max output tokens
        }
      }
    }
  }
}
```