jimmyflow


A minimal, workflow-friendly CLI for ChatJimmy.ai.
Works on Linux, macOS and Windows.


Disclaimer

By messaging Jimmy, you agree to our Terms and Conditions and acknowledge you have read our Privacy Policy.

Prerequisites

Tool      Required  Notes
curl      Yes       HTTP requests
python3   Yes       JSON parsing and building (>= 3.6)
jq        Optional  Pretty-printing and terminal filtering
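Before running the script, the python3 version requirement can be checked directly (a quick sanity check, not part of jimmy itself):

```shell
# Confirm the python3 on PATH meets the >= 3.6 requirement.
if python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 6) else 1)'; then
  PY_OK=yes
else
  PY_OK=no
fi
echo "python3 >= 3.6: $PY_OK"
```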

Linux / Ubuntu / Debian

sudo apt-get install curl python3 jq

macOS

brew install curl python3 jq

macOS note: jimmy.sh is compatible with both the system's native bash (3.2+) and zsh.

Windows

# curl is built into Windows 10/11
# PowerShell 7+ is required for jimmy.ps1
winget install Microsoft.PowerShell

Installation

Linux & macOS

git clone https://github.com/benoitpetit/jimmy.git
cd jimmy
chmod +x jimmy.sh

Or in one line:

curl -fsSL -o jimmy.sh https://raw.githubusercontent.com/benoitpetit/jimmy/main/jimmy.sh
chmod +x jimmy.sh

Windows

git clone https://github.com/benoitpetit/jimmy.git
cd jimmy
# No chmod needed on Windows

Or in one line:

Invoke-WebRequest -Uri "https://raw.githubusercontent.com/benoitpetit/jimmy/main/jimmy.ps1" -OutFile "jimmy.ps1"

How it works

jimmy talks to the https://chatjimmy.ai/api/chat endpoint by sending a structured JSON payload:

{
  "messages": [{"role": "user", "content": "your message"}],
  "chatOptions": {
    "selectedModel": "llama3.1-8B",
    "systemPrompt": "",
    "topK": 8
  },
  "attachment": null
}
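A payload of this shape can be built safely from the shell by letting python3 handle the JSON escaping. This is a sketch of the idea behind build_chat_payload(); the script's actual implementation may differ:

```shell
# Build the /api/chat payload with python3 so quotes and newlines
# in the message are JSON-escaped correctly.
MSG='What is a "closure"?'
PAYLOAD=$(python3 - "$MSG" <<'PY'
import json, sys

print(json.dumps({
    "messages": [{"role": "user", "content": sys.argv[1]}],
    "chatOptions": {
        "selectedModel": "llama3.1-8B",
        "systemPrompt": "",
        "topK": 8,
    },
    "attachment": None,
}))
PY
)
echo "$PAYLOAD"
```

Building the JSON in python3 rather than with string concatenation is what keeps arbitrary user input (quotes, backslashes, newlines) from breaking the request body.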

The response is streamed and then parsed to extract:

  • The text content of the reply
  • Generation statistics (tokens, duration, decode rate...)

Output streams

Channel  Content                        Control
stdout   Structured JSON or plain text  --output json|raw
stderr   Logs and help messages         --quiet to suppress
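Because the two channels are separate, a caller can capture the JSON while discarding or redirecting the logs. The sketch below mimics jimmy's behavior with a stand-in function rather than a live API call:

```shell
# Stand-in that writes JSON to stdout and a log line to stderr, like jimmy does.
mimic_jimmy() {
  echo '{"success": true, "response": "hi"}'
  echo '[info] request complete' >&2
}

OUT=$(mimic_jimmy 2>/dev/null)   # capture only stdout; drop the logs
echo "$OUT"
```

With the real script, the equivalent is `./jimmy.sh --chat "Hi" > result.json 2> jimmy.log`.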

Default JSON format

$ ./jimmy.sh --quiet --chat "Hello Jimmy!"
{
  "success": true,
  "response": "Hello! How can I help you today?",
  "stats": {
    "created_at": 1776800631.0790017,
    "done": true,
    "done_reason": "stop",
    "total_duration": 0.003414630889892578,
    "total_tokens": 24,
    "prefill_tokens": 12,
    "decode_tokens": 12,
    "decode_rate": 13357.656050955415
  },
  "model": "llama3.1-8B",
  "timestamp": "2026-04-21T19:43:51Z",
  "http_code": 200
}

Error format

$ ./jimmy.sh --quiet --chat ""
{
  "success": false,
  "error": "Message cannot be empty",
  "http_code": 400
}
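A consuming script can branch on the success field. This sketch parses with python3 (jq works equally well when installed); RESULT stands in for real jimmy output:

```shell
# Branch on a jimmy-style JSON result.
RESULT='{"success": false, "error": "Message cannot be empty", "http_code": 400}'
if [ "$(printf '%s' "$RESULT" | python3 -c 'import json,sys; print(str(json.load(sys.stdin)["success"]).lower())')" = "true" ]; then
  printf '%s' "$RESULT" | python3 -c 'import json,sys; print(json.load(sys.stdin)["response"])'
else
  ERRMSG=$(printf '%s' "$RESULT" | python3 -c 'import json,sys; print(json.load(sys.stdin)["error"])')
  echo "jimmy error: $ERRMSG" >&2
fi
```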

Options

Option             Description                              Default
--chat "MSG"       Message to send (required to chat)
--model MODEL      LLM model to use                         llama3.1-8B
--system "PROMPT"  Custom system prompt                     ""
--topk N           Top-K sampling (creativity / diversity)  8
--file PATH        Attach a text file (max 50 KB)
--list-models      List available models
--no-stats         Exclude generation statistics
--output json|raw  Output format                            json
--quiet            Suppress stderr logs
-h, --help         Show help

Usage examples

Linux / macOS (Bash)

# Simple chat
./jimmy.sh --chat "Explain special relativity"

# Raw output
./jimmy.sh --chat "Hello" --output raw --quiet

# Pipeline with jq
./jimmy.sh --quiet --chat "Capital of France?" | jq -r '.response'
# -> Paris

# System prompt + topk
./jimmy.sh --quiet --system "You talk like a pirate" --topk 16 --chat "Hello"

# Attach a file
./jimmy.sh --quiet --file report.txt --chat "Summarize this document"

Windows (PowerShell)

# Simple chat
.\jimmy.ps1 --chat "Explain special relativity"

# Raw output
.\jimmy.ps1 --chat "Hello" --output raw --quiet

# Pipeline with ConvertFrom-Json
.\jimmy.ps1 --quiet --chat "Capital of France?" | ConvertFrom-Json | Select-Object -ExpandProperty response
# -> Paris

# System prompt + topk
.\jimmy.ps1 --quiet --system "You talk like a pirate" --topk 16 --chat "Hello"

# Attach a file
.\jimmy.ps1 --quiet --file report.txt --chat "Summarize this document"

Raw mode

# Linux / macOS
./jimmy.sh --quiet --output raw --chat "Tell me a joke"

# Windows
.\jimmy.ps1 --quiet --output raw --chat "Tell me a joke"

GitHub Actions

A complete example of use in a CI/CD workflow:

name: Ask Jimmy

on:
  workflow_dispatch:
    inputs:
      question:
        description: "Question for Jimmy"
        required: true
        default: "Give me a productivity tip"

jobs:
  ask:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Ask Jimmy
        id: jimmy
        run: |
          RESPONSE=$(./jimmy.sh --quiet --chat "${{ inputs.question }}" | jq -r '.response')
          echo "answer<<EOF" >> "$GITHUB_OUTPUT"
          echo "$RESPONSE" >> "$GITHUB_OUTPUT"
          echo "EOF" >> "$GITHUB_OUTPUT"

      - name: Display the answer
        run: |
          echo "## Jimmy's answer" >> "$GITHUB_STEP_SUMMARY"
          echo "" >> "$GITHUB_STEP_SUMMARY"
          echo "${{ steps.jimmy.outputs.answer }}" >> "$GITHUB_STEP_SUMMARY"

Usage in another workflow

- name: Summarize a changelog
  run: |
    SUMMARY=$(./jimmy.sh --quiet --file CHANGELOG.md --chat "Summarize the changes in 3 bullets" | jq -r '.response')
    echo "$SUMMARY"

Error handling

Case              stdout                          Exit code
Success           JSON success: true              0
HTTP error        JSON success: false + detail    1
Empty message     JSON "Message cannot be empty"  1
File not found    JSON "File not found: ..."      1
File > 50 KB      JSON "File exceeds 50KB ..."    1
Missing argument  JSON "Missing value for ..."    1
No arguments      Help on stderr                  1
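Since every failure path exits non-zero, callers can also rely on the exit code alone. A stand-in function illustrates the pattern:

```shell
# fake_jimmy imitates a failing invocation: error JSON on stdout, exit code 1.
fake_jimmy() {
  echo '{"success": false, "error": "Message cannot be empty", "http_code": 400}'
  return 1
}

# $(...) captures stdout even when the command fails, so the error JSON
# is still available for inspection in the failure branch.
if OUT=$(fake_jimmy); then
  STATUS=ok
else
  STATUS=failed
fi
echo "status: $STATUS"
```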

Architecture

jimmy.sh / jimmy.ps1
├── check_deps()          # Checks curl + python3
├── _http_request()       # curl request (GET/POST)
├── build_chat_payload()  # JSON payload building
├── parse_chat_response() # Content + stats extraction (regex)
├── list_models()         # Lists models via /api/models
├── send_chat()           # Sends a message via /api/chat
└── prepare_attachment()  # Encodes a file as JSON attachment
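A check_deps()-style probe might look roughly like the following (illustrative only; the real function may print friendlier messages):

```shell
# Return non-zero, naming the culprit on stderr, if any command is missing.
check_deps() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || {
      echo "missing dependency: $cmd" >&2
      return 1
    }
  done
}

check_deps sh && echo "deps ok"
```

In jimmy.sh the probed commands would be curl and python3, per the Prerequisites table.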

Key points

  • set -euo pipefail (bash) / $ErrorActionPreference = "Stop" (PowerShell)
  • Zero heavy external dependencies: only curl and python3
  • Robust stats parsing: supports both <!--stats--> and <|stats|> formats
  • Safe JSON encoding: all strings go through json.dumps() / ConvertTo-Json
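The dual-format stats handling can be pictured as splitting the raw stream on either marker. The marker placement and regex below are assumptions for illustration; parse_chat_response() holds the real logic:

```shell
# Split a raw reply into text content and a trailing stats JSON blob,
# accepting either the <!--stats--> or the <|stats|> marker style.
RAW='Hello there!<|stats|>{"total_tokens": 24, "done": true}'
CONTENT=$(python3 - "$RAW" <<'PY'
import re, sys
parts = re.split(r'<!--stats-->|<\|stats\|>', sys.argv[1], maxsplit=1)
print(parts[0])
PY
)
TOKENS=$(python3 - "$RAW" <<'PY'
import json, re, sys
parts = re.split(r'<!--stats-->|<\|stats\|>', sys.argv[1], maxsplit=1)
print(json.loads(parts[1])["total_tokens"] if len(parts) > 1 else "")
PY
)
echo "$CONTENT"
echo "tokens: $TOKENS"
```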

Compatibility

Platform  Script     Status
Linux     jimmy.sh   Tested
macOS     jimmy.sh   Compatible with bash 3.2+
Windows   jimmy.ps1  PowerShell 7+

License

MIT
