A minimal, workflow-friendly CLI for ChatJimmy.ai.
Works on Linux, macOS and Windows.
By messaging Jimmy, you agree to our Terms and Conditions and acknowledge you have read our Privacy Policy.
| Tool | Required | Notes |
|---|---|---|
| `curl` | Yes | HTTP requests |
| `python3` | Yes | JSON parsing and building (>= 3.6) |
| `jq` | Optional | Pretty-printing and terminal filtering |
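If you wrap jimmy in your own tooling, the same dependency check can be sketched in Python. The function name mirrors `check_deps()` from the scripts, but this implementation is an illustrative assumption, not the scripts' exact logic:

```python
import shutil

def check_deps(required=("curl", "python3"), optional=("jq",)):
    """Raise if a required tool is missing; return the list of missing optional tools."""
    missing = [tool for tool in required if shutil.which(tool) is None]
    if missing:
        raise SystemExit("Missing required tools: " + ", ".join(missing))
    return [tool for tool in optional if shutil.which(tool) is None]

# Example: require only python3, warn about optional tools
missing_optional = check_deps(required=("python3",), optional=("jq",))
if missing_optional:
    print("Optional tools not found:", ", ".join(missing_optional))
```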
Debian/Ubuntu:

```bash
sudo apt-get install curl python3 jq
```

macOS (Homebrew):

```bash
brew install curl python3 jq
```

macOS note: the `jimmy.sh` script is compatible with the native bash (3.2+) and zsh.

Windows:

```powershell
# curl is built into Windows 10/11
# PowerShell 7+ is required for jimmy.ps1
winget install Microsoft.PowerShell
```

Linux / macOS:

```bash
git clone https://github.com/benoitpetit/jimmy.git
cd jimmy
chmod +x jimmy.sh
```

Or in one line:

```bash
curl -fsSL -o jimmy.sh https://raw.githubusercontent.com/benoitpetit/jimmy/main/jimmy.sh
chmod +x jimmy.sh
```

Windows:

```powershell
git clone https://github.com/benoitpetit/jimmy.git
cd jimmy
# No chmod needed on Windows
```

Or in one line:

```powershell
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/benoitpetit/jimmy/main/jimmy.ps1" -OutFile "jimmy.ps1"
```

jimmy talks to the `https://chatjimmy.ai/api/chat` endpoint by sending a structured JSON payload:
```json
{
  "messages": [{"role": "user", "content": "your message"}],
  "chatOptions": {
    "selectedModel": "llama3.1-8B",
    "systemPrompt": "",
    "topK": 8
  },
  "attachment": null
}
```

The response is streamed and then parsed to extract:
- The text content of the reply
- Generation statistics (tokens, duration, decode rate...)
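Building that payload outside the scripts is straightforward; the sketch below uses the same `json.dumps()`-based encoding the scripts rely on (the helper name here is illustrative, not the scripts' exact function):

```python
import json

def build_chat_payload(message, model="llama3.1-8B", system_prompt="", top_k=8):
    """Serialize the /api/chat request body shown above."""
    return json.dumps({
        "messages": [{"role": "user", "content": message}],
        "chatOptions": {
            "selectedModel": model,
            "systemPrompt": system_prompt,
            "topK": top_k,
        },
        "attachment": None,
    })

print(build_chat_payload("your message"))
```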
| Channel | Content | Control |
|---|---|---|
| `stdout` | Structured JSON or plain text | `--output json\|raw` |
| `stderr` | Logs and help messages | `--quiet` to suppress |
```bash
$ ./jimmy.sh --quiet --chat "Hello Jimmy!"
```

```json
{
  "success": true,
  "response": "Hello! How can I help you today?",
  "stats": {
    "created_at": 1776800631.0790017,
    "done": true,
    "done_reason": "stop",
    "total_duration": 0.003414630889892578,
    "total_tokens": 24,
    "prefill_tokens": 12,
    "decode_tokens": 12,
    "decode_rate": 13357.656050955415
  },
  "model": "llama3.1-8B",
  "timestamp": "2026-04-21T19:43:51Z",
  "http_code": 200
}
```

```bash
$ ./jimmy.sh --quiet --chat ""
```

```json
{
  "success": false,
  "error": "Message cannot be empty",
  "http_code": 400
}
```

| Option | Description | Default |
|---|---|---|
| `--chat "MSG"` | Message to send (required to chat) | — |
| `--model MODEL` | LLM model to use | `llama3.1-8B` |
| `--system "PROMPT"` | Custom system prompt | `""` |
| `--topk N` | Top-K sampling (creativity / diversity) | `8` |
| `--file PATH` | Attach a text file (max 50 KB) | — |
| `--list-models` | List available models | — |
| `--no-stats` | Exclude generation statistics | — |
| `--output json\|raw` | Output format | `json` |
| `--quiet` | Suppress stderr logs | — |
| `-h, --help` | Show help | — |
Linux / macOS:

```bash
# Simple chat
./jimmy.sh --chat "Explain special relativity"

# Raw output
./jimmy.sh --chat "Hello" --output raw --quiet

# Pipeline with jq
./jimmy.sh --quiet --chat "Capital of France?" | jq -r '.response'
# -> Paris

# System prompt + topk
./jimmy.sh --quiet --system "You talk like a pirate" --topk 16 --chat "Hello"

# Attach a file
./jimmy.sh --quiet --file report.txt --chat "Summarize this document"
```

Windows:

```powershell
# Simple chat
.\jimmy.ps1 --chat "Explain special relativity"

# Raw output
.\jimmy.ps1 --chat "Hello" --output raw --quiet

# Pipeline with ConvertFrom-Json
.\jimmy.ps1 --quiet --chat "Capital of France?" | ConvertFrom-Json | Select-Object -ExpandProperty response
# -> Paris

# System prompt + topk
.\jimmy.ps1 --quiet --system "You talk like a pirate" --topk 16 --chat "Hello"

# Attach a file
.\jimmy.ps1 --quiet --file report.txt --chat "Summarize this document"
```

```bash
# Linux / macOS
./jimmy.sh --quiet --output raw --chat "Tell me a joke"
```

```powershell
# Windows
.\jimmy.ps1 --quiet --output raw --chat "Tell me a joke"
```

Full example of usage in a CI/CD workflow:
```yaml
name: Ask Jimmy
on:
  workflow_dispatch:
    inputs:
      question:
        description: "Question for Jimmy"
        required: true
        default: "Give me a productivity tip"
jobs:
  ask:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Ask Jimmy
        id: jimmy
        run: |
          RESPONSE=$(./jimmy.sh --quiet --chat "${{ inputs.question }}" | jq -r '.response')
          echo "answer<<EOF" >> "$GITHUB_OUTPUT"
          echo "$RESPONSE" >> "$GITHUB_OUTPUT"
          echo "EOF" >> "$GITHUB_OUTPUT"
      - name: Display the answer
        run: |
          echo "## Jimmy's answer" >> "$GITHUB_STEP_SUMMARY"
          echo "" >> "$GITHUB_STEP_SUMMARY"
          echo "${{ steps.jimmy.outputs.answer }}" >> "$GITHUB_STEP_SUMMARY"
```

```yaml
- name: Summarize a changelog
  run: |
    SUMMARY=$(./jimmy.sh --quiet --file CHANGELOG.md --chat "Summarize the changes in 3 bullets" | jq -r '.response')
    echo "$SUMMARY"
```

| Case | stdout | Exit code |
|---|---|---|
| Success | JSON `success: true` | 0 |
| HTTP error | JSON `success: false` + detail | 1 |
| Empty message | JSON `"Message cannot be empty"` | 1 |
| File not found | JSON `"File not found: ..."` | 1 |
| File > 50 KB | JSON `"File exceeds 50KB ..."` | 1 |
| Missing argument | JSON `"Missing value for ..."` | 1 |
| No arguments | Help on stderr | 1 |
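When scripting around jimmy, the exit semantics above pair naturally with the `success` field in the JSON output. A minimal Python sketch of that mapping (the function is illustrative, not part of the scripts):

```python
import json

def exit_code_for(json_line):
    """Map jimmy's stdout JSON to the documented exit codes: 0 on success, 1 otherwise."""
    result = json.loads(json_line)
    return 0 if result.get("success") else 1

print(exit_code_for('{"success": true, "response": "Paris"}'))  # -> 0
print(exit_code_for('{"success": false, "error": "Message cannot be empty", "http_code": 400}'))  # -> 1
```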
```text
jimmy.sh / jimmy.ps1
├── check_deps()           # Checks curl + python3
├── _http_request()        # curl request (GET/POST)
├── build_chat_payload()   # JSON payload building
├── parse_chat_response()  # Content + stats extraction (regex)
├── list_models()          # Lists models via /api/models
├── send_chat()            # Sends a message via /api/chat
└── prepare_attachment()   # Encodes a file as JSON attachment
```
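The 50 KB limit and the file-related errors from the exit-code table can be enforced before any request is sent. The sketch below mirrors `prepare_attachment()` in spirit; the returned dict shape is an assumption, since the real attachment wire format is not documented here:

```python
import os

MAX_ATTACHMENT_BYTES = 50 * 1024  # documented 50 KB limit

def prepare_attachment(path):
    """Validate a text attachment, reproducing the documented error messages."""
    if not os.path.isfile(path):
        raise FileNotFoundError(f"File not found: {path}")
    if os.path.getsize(path) > MAX_ATTACHMENT_BYTES:
        raise ValueError(f"File exceeds 50KB: {path}")
    with open(path, encoding="utf-8") as fh:
        # Hypothetical shape; the real payload simply carries an "attachment" field
        return {"name": os.path.basename(path), "content": fh.read()}
```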
- Strict error handling: `set -euo pipefail` (bash) / `$ErrorActionPreference = "Stop"` (PowerShell)
- Zero heavy external dependencies: only `curl` and `python3`
- Robust stats parsing: supports both `<!--stats-->` and `<|stats|>` formats
- Safe JSON encoding: all strings go through `json.dumps()` / `ConvertTo-Json`
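A sketch of the dual-format stats extraction, assuming the stats arrive as a JSON object following one of the two markers (the exact stream layout is not shown in this document, so treat the regex as illustrative):

```python
import json
import re

# Matches a JSON object following either stats marker
STATS_MARKER = re.compile(r"(?:<!--stats-->|<\|stats\|>)\s*(\{.*\})", re.DOTALL)

def extract_stats(stream_text):
    """Return the stats dict if a marker is present, else None."""
    match = STATS_MARKER.search(stream_text)
    return json.loads(match.group(1)) if match else None

print(extract_stats('Hello! <|stats|> {"done": true, "total_tokens": 24}'))
# -> {'done': True, 'total_tokens': 24}
```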
| Platform | Script | Status |
|---|---|---|
| Linux | `jimmy.sh` | Tested |
| macOS | `jimmy.sh` | Compatible with bash 3.2+ |
| Windows | `jimmy.ps1` | PowerShell 7+ |
- Repository: github.com/benoitpetit/jimmy
- API: chatjimmy.ai
MIT
