Jig turns natural-language briefs into JSON Schema + system prompt pairings, then runs deterministic inference against your local models. It highlights the multimodal `zai-org/GLM-4.6` checkpoint while remaining compatible with every model you can host in LM Studio or Ollama (Llama, DeepSeek, Mistral, etc.).
Jig exists for repeatable LLM inference: it first co-designs the schema and system prompt you need, then guarantees that every future run follows that contract—perfect for production ETL, reporting, or any workflow where ad-hoc prompts are too brittle.
- 2026-02-05 – Added new CLI/Docs, inline pairing previews, and the manual editor tab.
- 2026-02-04 – Inference tab now previews the selected pairing (schema + prompt).
- Pairing lifecycle – Author schemas with the Creator, preview/test them in Inference, then fine-tune every field inside the Editor.
- Recommended model – Pull `zai-org/GLM-4.6` once in Ollama/LM Studio for a single text+vision checkpoint; swap to Llama 3, DeepSeek-VL, Phi, etc. whenever you like.
- Structured enforcement – JSON Schema (Draft 7) is applied via LM Studio's `response_format` or Ollama's `format`, so outputs never drift (see the sketch after this list).
- Vision ready – Drag/drop multiple images into the Image tab; Jig handles base64 plumbing for GLM-4.6, Llava, Moondream, DeepSeek-VL, and friends.
- Local-first – No cloud calls. Everything happens against your LM Studio or Ollama runtime.
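To make the structured-enforcement point concrete, here is a rough sketch of what such a request can look like at the HTTP level. This is illustrative only, not Jig's internal code; it assumes the runtimes' standard endpoints on their default ports (LM Studio on 1234, Ollama on 11434) and reuses the incident-triage schema from the walkthrough below.

```python
# Illustrative sketch, not Jig's implementation: attach a JSON Schema to a request
# so the runtime constrains the model's output to that shape.
import json
import urllib.request

schema = {
    "type": "object",
    "properties": {
        "urgency": {"type": "string", "enum": ["critical", "high", "medium", "low"]},
        "impacted_system": {"type": "string"},
    },
    "required": ["urgency", "impacted_system"],
    "additionalProperties": False,
}

messages = [{"role": "user", "content": "Checkout is failing with 502s from the payments service."}]

# LM Studio (OpenAI-compatible /v1/chat/completions): the schema rides in `response_format`.
lmstudio_payload = {
    "model": "zai-org/GLM-4.6",
    "messages": messages,
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "auto_triage", "schema": schema},
    },
}

# Ollama (/api/chat): the same schema goes in the `format` field.
ollama_payload = {"model": "zai-org/GLM-4.6", "messages": messages, "format": schema, "stream": False}

def post(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# post("http://localhost:1234/v1/chat/completions", lmstudio_payload)
# post("http://localhost:11434/api/chat", ollama_payload)
```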
| Without Jig | With Jig |
|---|---|
| Prompt drift across runs | Schema-locked outputs |
| Hand-written parsing logic | Auto validation + JSON Schema |
| No visibility into changes | Versioned pairings on disk |
This walkthrough shows how the three Gradio tabs work together to build and run a real pairing.
“Tag incidents with urgency (critical/high/medium/low) and impacted system so on-call engineers can instantly see what needs attention and where.”
Describe the task in plain English and Jig generates the schema + system prompt.
```bash
jig create "Tag incidents with urgency (critical/high/medium/low) and list the impacted system" -n auto_triage
```
Generated schema (`auto_triage/schema.json`):

```json
{
"type": "object",
"description": "The root object representing an incident tag.",
"properties": {
"urgency": {
"type": "string",
"description": "The urgency level of the incident. Must be one of 'critical', 'high', 'medium', or 'low'.",
"enum": ["critical", "high", "medium", "low"]
},
"impacted_system": {
"type": "string",
"description": "The system that is impacted by this incident."
}
},
"required": ["urgency", "impacted_system"],
"additionalProperties": false
}
```

Generated system prompt (`auto_triage/prompt.txt`):

```text
You are an expert in creating precise, well-structured JSON Schemas. Your task is to generate a JSON Schema that defines the structure for tagging incidents with two key pieces of information: urgency level (critical/high/medium/low) and impacted system. The schema must adhere strictly to the specified rules: use strict typing with additionalProperties set to false, ensure all properties are required, include clear descriptions for each field, and follow JSON Schema Draft 7.
```
Select any pairing, review the schema/prompt preview on the right, and run deterministic inference with text, files, or images.
```bash
jig run -s auto_triage -i "Users are reporting that they can’t complete checkout on the web app. Payments fail with a 502 error when submitting the order. The issue seems isolated to the payments service. Mobile app appears unaffected for now. Revenue impact likely if not resolved quickly."
```

Output:

```json
{
"urgency": "critical",
"impacted_system": "payments"
}
```

Manually tweak schemas or prompts, reformat JSON, and save the pairing—all without leaving the UI.

The same pairing can also be driven from Python:

```python
from jig import SchemaAgent, create_client
client = create_client(backend="lmstudio") # or backend="ollama"
agent = SchemaAgent(client)
incident = (
"Users report checkout failures (502 errors) on the web app. "
"Payments service is impacted; mobile unaffected. Revenue at risk."
)
result = agent.run(incident, "auto_triage")
print(result)
```

Follow this step-by-step guide to go from idea to deterministic outputs in a few minutes.
```bash
# Fastest: install directly from GitHub
pip install git+https://github.com/Leodotpy/jig.git

# Or clone for local development
git clone https://github.com/Leodotpy/jig.git
cd jig
pip install -e .
pip install -e ".[gradio]"  # optional UI extras
```

| Runtime | What to do |
|---|---|
| LM Studio | Load zai-org/GLM-4.6 (or any other model), enable the local server under the Developer tab (default port 1234). |
| Ollama | ollama serve → ollama pull zai-org/GLM-4.6 (multi-modal). Still works with llama3, deepseek:*, etc. Default port 11434/11435. |
> **Tip:** If LM Studio isn't running, `--backend auto` will transparently fall back to Ollama.
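A rough illustration of what that fallback amounts to is sketched below; this is not Jig's actual detection logic, just the general idea of probing LM Studio's default port and falling back to Ollama.

```python
# Sketch of backend auto-detection (illustrative, not Jig's actual code).
import urllib.request

def detect_backend() -> str:
    """Return "lmstudio" if its local server answers on the default port, else "ollama"."""
    try:
        # LM Studio's OpenAI-compatible server listens on port 1234 by default.
        urllib.request.urlopen("http://localhost:1234/v1/models", timeout=1)
        return "lmstudio"
    except OSError:
        return "ollama"  # Ollama's default port is 11434.

# from jig import create_client
# client = create_client(backend=detect_backend())
```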
Use natural language to tell Jig what you want; it will craft the system prompt + JSON Schema pairing for repeatable runs.
```bash
jig create "Extract meeting details: attendees, decisions, follow-ups" -n meeting
```

Feed the model any text (and optional images/files); Jig enforces the schema it generated.
```bash
jig run -s meeting -i "Meeting on Feb 3: Alice assigned budget analysis." -o result.json
```

Launch the Gradio UI to explore the three tabs:

```bash
jig --gradio
```

- Creator tab – Generate fresh pairings from a plain-English brief.
- Inference tab – Preview schema & prompt before you hit "Run Inference" (text/file/image inputs supported).
- Editor tab – Pick any pairing (or type a new name) and hand-edit schema + prompt with live validation.
- Creator
  - Describe the task → Jig returns `response_schema`, `system_prompt`, and metadata stored under `pairings/<name>/`.
  - Force overwrite with the checkbox, or let auto backups protect existing work.
- Inference
  - Select a pairing; the right rail mirrors its JSON Schema & prompt so you can sanity check before execution.
  - Mix modalities: text area, `.txt` upload, and multi-image gallery all funnel into the same run.
  - Stream results or save directly to disk.
- Editor
  - Dropdown (with custom values) lets you load any pairing or start blank.
  - Schema formatter button pretty-prints + validates JSON before saving.
  - Saving writes `schema.json`, `prompt.txt`, and updates `meta.json` descriptions instantly.
| Capability | Suggested model | Notes |
|---|---|---|
| Text + Vision (default) | `zai-org/GLM-4.6` | Single checkpoint handles both modalities; works in LM Studio and Ollama. |
| Text-only | `llama3`, `deepseek-r1`, `phi-4`, etc. | Use when you only need deterministic text extraction. |
| Vision-focused | `deepseek-vl`, `llava`, `moondream` | Great for OCR, UI parsing, etc. Images upload via Gradio Image tab or CLI `--image`. |
Jig targets the OpenAI-compatible (LM Studio) or Ollama-native APIs, so any local model that those runtimes expose is fair game.
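Both runtimes also expose a model listing over HTTP, which is the kind of discovery `jig models` surfaces. The endpoints in the sketch below are the runtimes' own standard routes (LM Studio's OpenAI-compatible `/v1/models`, Ollama's `/api/tags`), not anything Jig-specific.

```python
# Sketch: enumerate locally available models from either runtime.
import json
import urllib.request

def list_models(backend: str = "lmstudio") -> list[str]:
    if backend == "lmstudio":
        # LM Studio, OpenAI-compatible listing.
        with urllib.request.urlopen("http://localhost:1234/v1/models") as resp:
            return [m["id"] for m in json.load(resp)["data"]]
    # Ollama's local model registry.
    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]

print(list_models("lmstudio"))
```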
Each pairing lives on disk under `pairings/`:

```
pairings/
├── meeting/
│   ├── schema.json   # JSON Schema enforced at inference time
│   ├── prompt.txt    # System prompt / instructions
│   └── meta.json     # Description, model provenance, timestamps
├── invoice/
│   └── ...
```
Manual edits? Use the Editor tab or edit these files directly—Jig will pick them up instantly. Overwrites create timestamped backups (*_backup_YYYYMMDD_HHMMSS).
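Because each pairing keeps its contract on disk, you can re-check any saved output against it outside of Jig. Here is a minimal sketch using the third-party `jsonschema` package (not something Jig requires, just a convenient Draft 7 validator), pointed at the `meeting` pairing and the `result.json` written by the quickstart run above.

```python
# Minimal sketch: validate a saved inference result against its pairing's schema.
# Assumes `pip install jsonschema` and the paths used earlier in this README.
import json
from pathlib import Path

from jsonschema import Draft7Validator

schema = json.loads(Path("pairings/meeting/schema.json").read_text())
result = json.loads(Path("result.json").read_text())

Draft7Validator.check_schema(schema)      # pairings target JSON Schema Draft 7
Draft7Validator(schema).validate(result)  # raises ValidationError if the output drifted
print("result.json conforms to the meeting schema")
```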
| Command | Purpose |
|---|---|
| `jig create ...` | Turn a natural-language description into a pairing. |
| `jig run ...` | Execute inference with text / file / `--image` inputs. |
| `jig list` | Show all pairings with completion status. |
| `jig show <name>` | Print schema + prompt contents in the terminal. |
| `jig models [--set <name>]` | List models discovered via the current backend or change the active one. |
| `jig --gradio` | Launch the Gradio UI (Creator · Inference · Editor). |
Common flags: `--backend lmstudio|ollama|auto`, `--model <id>`, `--temperature`, `--image path`, `--output result.json`.
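Putting a few of these together, a fully pinned-down run might look like the following (the `meeting` pairing and `llama3` model are simply the examples used earlier; any pairing/model combination works the same way, and a low temperature keeps runs repeatable):

```bash
jig run -s meeting \
  -i "Meeting on Feb 3: Alice assigned budget analysis." \
  --backend ollama --model llama3 \
  --temperature 0 --output result.json
```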
We welcome improvements in the classic GitHub workflow:
- Fork & clone this repository.
- Create a feature branch (`git checkout -b feat/my-change`).
- Install dev extras: `pip install -e ".[dev]"`.
- Format with Black: run `black .` (VS Code users can enable “Format on Save” with Black).
- Lint & test: `ruff check .` and `pytest`.
- Open a pull request describing the change and referencing any issues.
Bug reports or feature suggestions via GitHub Issues are equally appreciated.
- Multi-step agent workflows that chain multiple pairings.
- Support for audio-capable LLMs.
See SECURITY.md for coordinated disclosure guidelines.
MIT – see LICENSE.


