Minimal CLI for querying LLMs. Works with any OpenAI-compatible API.
llm "What is the capital of France?"
cat error.log | llm "What's wrong?"Conversations are automatically saved and resumed. Use -c or --clear to start fresh:
llm -c # clear context only
llm -c "New topic" # clear and start new conversationmake installThis builds the binary, installs to /usr/local/bin, and creates ~/.shllm/config.toml if it doesn't exist. Edit the config to add your API key.
## Configuration

Config is loaded from `./config.toml` first, then `~/.shllm/config.toml`:

```toml
api_url = "https://api.openai.com/v1/chat/completions"
api_key = "sk-..."
model = "gpt-4"
system_prompt = "Be concise."
stream = true
timeout = 120
word_wrap = 100
```

| Option | Required | Default |
|---|---|---|
| `api_url` | yes | - |
| `api_key` | yes | - |
| `model` | yes | - |
| `system_prompt` | no | none |
| `stream` | no | false |
| `timeout` | no | 120s |
| `word_wrap` | no | 100 |
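Only the options marked required are needed to get started; a minimal config might look like this (values are placeholders):

```toml
api_url = "https://api.openai.com/v1/chat/completions"
api_key = "sk-..."
model = "gpt-4"
```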
Works with OpenAI, OpenRouter, Ollama, LM Studio, Azure, or any compatible endpoint.
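For example, pointing the config at a local Ollama instance might look like this (a sketch assuming Ollama's default port and OpenAI-compatible endpoint; the model name is a placeholder for any model you have pulled locally):

```toml
api_url = "http://localhost:11434/v1/chat/completions"
api_key = "ollama"  # Ollama ignores the key, but the option is required
model = "llama3"    # placeholder: use a model you have pulled
```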
## License

MIT