A tiny local web UI for demonstrating and testing natural-language-to-JSON workflows with json-contracts. This package imports the json-contracts MCP package for contract loading, tool handlers, and JSON Schema validation.
It is organized around a hideable configuration sidebar and two workflow tabs:
- Configuration sidebar — choose the contract repository folder and LLM provider/model once, then use the left-edge arrow to hide or reveal the sidebar when you are ready to work.
- Generate JSON — choose a contract in Step 1, then turn natural-language input into JSON, validate, edit, and repair JSON against it.
- New contract — generate a draft contract file from a natural-language app behavior, validate it, and save it into the configured contract repository.
Provider calls are demo-only Studio behavior. The production MCP stdio server still does not call LLMs and does not use API keys.
Initialize starter contracts with the main MCP package first, then launch Studio:
```
mkdir my-json-contracts
cd my-json-contracts
npx -y json-contracts@latest init
npx -y json-contracts-studio@latest
```

Open:

```
http://127.0.0.1:5177
```
If you already have a contract folder, skip init and run Studio with JSON_CONTRACTS_DIR pointed at your folder:
```
JSON_CONTRACTS_DIR=/absolute/path/to/json-contracts npx -y json-contracts-studio@latest
```

Or from this repository root for development:

```
npm install
npm start
```

Then:
- Review the Configuration sidebar for the contract repository and LLM provider/model. Use the left-edge arrow to hide it if you want more room.
- Pick a contract in the main Step 1 panel, or click New contract to create one. Step panels have prominent Collapse/Expand buttons when you want to reduce visual noise.
- Choose Create new JSON or Edit existing JSON when testing an existing contract.
- Generate or paste JSON, or generate a contract draft.
- Validate.
- Repair or save if needed.
Use a different port if needed:
```
STUDIO_PORT=8080 npm start
```

On Windows PowerShell:

```
$env:STUDIO_PORT=8080; npm start
```

Use New contract to turn an app behavior description into a contract file.
Flow:
- Choose provider/model in the Configuration sidebar.
- Click New contract.
- Enter a contract name, behavior description, optional desired fields, example input, and context.
- Click Generate contract.
- Review the generated draft JSON.
- Click Validate draft.
- Click Save contract.
Saving writes to the configured contract repository:
```
{contractRepository}/{contractName}.json
```
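For example, with the default contract folder and a draft named `support-tickets` (an illustrative name), the saved file would be:

```
./json-contracts/support-tickets.json
```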
Then Studio reloads contracts and selects the new contract for testing.
Draft validation checks:
- safe contract filename
- no `version` field
- contract shape
- valid JSON Schema
- generated examples include `input` and `output`
- generated example outputs validate against the generated schema
Studio refuses to overwrite an existing file unless Overwrite existing is checked.
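If you want to exercise the same checks outside the UI, the demo HTTP API (listed near the end of this README) exposes a draft validation route. A rough sketch of calling it; the request body field names (`contractName`, `draft`) are assumptions, not a documented schema:

```
curl -s http://127.0.0.1:5177/api/contract-drafts/validate \
  -H 'Content-Type: application/json' \
  -d '{"contractName": "support-tickets", "draft": { "...": "your generated draft JSON here" }}'
```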
Step 1 includes a Schema Explorer so users do not have to read a giant raw JSON Schema in a tiny box.
The collapsible Rules section has an editor for changing contract rules and saving them back to the selected contract file. Rules are one per line, and Studio warns before permanently writing the file.
Schema Explorer shows:
- searchable field tree
- field paths, types, required badges, formats, enum/const markers, and example values
- summary cards for total fields, required fields, enums, and whether undeclared output fields are rejected
- Expand all, Collapse all, and Copy schema actions
- a View raw JSON / View explorer button that swaps between the friendly explorer and the full raw JSON Schema
- edit-only + key and + section controls for adding top-level schema properties from the explorer
- edit-only per-field Rename key, Delete key, Add child key, and Add child section actions for object fields/sections
- an Edit schema button for editing the JSON Schema directly and saving it back to the selected contract file. The save button stays disabled while the edited schema is identical, and Studio warns that saving permanently changes the file.
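Saving an edited schema goes through the same route the server exposes for it, `POST /api/contracts/:name/schema` (listed later). A hedged sketch of calling it directly, assuming the body wraps the edited schema in a `schema` field:

```
curl -s http://127.0.0.1:5177/api/contracts/support-tickets/schema \
  -H 'Content-Type: application/json' \
  -d '{"schema": {"type": "object", "properties": {"status": {"type": "string"}}, "required": ["status"]}}'
```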
The Generate JSON and New contract flows include an App context (optional) toolbox. Use it for generic app/system variables your app might already know, such as current date/time, created/updated timestamps, unique IDs, user, session, or workspace.
Click preset buttons like Current date/time, Created at, Unique ID, User, Session, or Workspace to merge generated context into the JSON field. Pretty format parses and formats manually edited context.
Example context:

```json
{
  "current_datetime": "2026-05-04T10:30:00.000Z",
  "current_date": "2026-05-04",
  "timezone": "America/New_York",
  "locale": "en-US",
  "user": {
    "id": "user_123",
    "email": "user@example.com",
    "name": "Jane Doe",
    "role": "member"
  },
  "unique_id": "7b0f58e0-4c3d-4e13-9a3d-7ec7255e8d9f"
}
```

The Studio sends this as the `context` object in `get_json_contract` or `get_edit_contract`. json-contracts passes context through unchanged; it does not hard-code date logic or business logic.
Important behavior:
- Context helps the model interpret the input.
- The final JSON still must match the selected contract schema.
- Context fields are not automatically copied into output JSON.
- Context is sent to the selected provider, so do not include secrets unless the provider should receive them.
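As a concrete sketch, a Studio generate request with context might look like this (the HTTP body field names are assumptions; only the `context` pass-through behavior is documented above):

```
curl -s http://127.0.0.1:5177/api/llm/generate \
  -H 'Content-Type: application/json' \
  -d '{
    "contract": "support-tickets",
    "input": "show me open tickets created today",
    "context": {
      "current_datetime": "2026-05-04T10:30:00.000Z",
      "timezone": "America/New_York"
    }
  }'
```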
The fastest live test is:
- Open the Studio.
- In Configuration, choose a provider.
- Paste an API key, or use a key from `.env`.
- Type the model name you want to test.
- Choose a thinking level.
- Optionally tick Save config and Save as default provider/model.
- In Generate JSON, choose Create new JSON or Edit existing JSON. For edit, paste the existing JSON and describe the requested change.
- Click Generate JSON or Generate edited JSON.
Keys pasted into the browser are sent only to the local Studio server for that request. They are saved only when you tick Save config, which writes to your local project .env. The server binds to 127.0.0.1 by default.
Supported provider presets:
| Provider | Adapter | Default base URL |
|---|---|---|
| OpenAI | OpenAI-compatible chat completions | https://api.openai.com/v1 |
| Anthropic | Native Anthropic Messages API | https://api.anthropic.com/v1 |
| Ollama Cloud | Native Ollama chat API | https://ollama.com/api |
| Local Ollama | Native Ollama chat API | http://127.0.0.1:11434/api |
| OpenRouter | OpenAI-compatible chat completions | https://openrouter.ai/api/v1 |
| Groq | OpenAI-compatible chat completions | https://api.groq.com/openai/v1 |
| Together AI | OpenAI-compatible chat completions | https://api.together.xyz/v1 |
| Fireworks AI | OpenAI-compatible chat completions | https://api.fireworks.ai/inference/v1 |
| DeepSeek | OpenAI-compatible chat completions | https://api.deepseek.com/v1 |
| Mistral | OpenAI-compatible chat completions | https://api.mistral.ai/v1 |
| Custom OpenAI-compatible | OpenAI-compatible chat completions | user-entered |
For built-in providers, leave Base URL override blank. The Studio automatically uses the correct cloud API default for the selected provider.
Only enter a Base URL when:
- you select Custom OpenAI-compatible
- your provider/account uses a special endpoint
- you are testing a proxy or local gateway
Examples:

```
https://api.openai.com/v1
https://openrouter.ai/api/v1
http://127.0.0.1:11434/api
```
Do not include /chat/completions for OpenAI-compatible providers; the Studio appends that path.
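For example, with the Groq preset and no override, the Studio sends chat requests to:

```
https://api.groq.com/openai/v1/chat/completions
```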
Model is a free-text field. Examples:

```
gpt-4.1-mini
claude-sonnet-4-5
kimi-k2.5
llama-3.3-70b-versatile
openai/gpt-4.1-mini
deepseek-chat
```
Thinking levels:
off, low, medium, high, xhigh, auto
Thinking is provider/model-dependent:
- Ollama receives `think` as `low`, `medium`, `high`, `xhigh`, `false`, or the field is omitted for `auto`.
- OpenAI-compatible providers receive `reasoning_effort` where supported. `xhigh` maps to the provider's highest compatible effort when needed.
- Anthropic receives extended-thinking budgets where supported: low, medium, high, or xhigh.
- Providers/models that do not support thinking may ignore it or reject it. The Studio retries OpenAI-compatible requests without unsupported JSON/reasoning fields when possible.
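As an illustration, for the `medium` level the per-provider request fragments roughly correspond to the public provider APIs as follows (the exact mapping inside the Studio may differ):

```
Ollama chat API:        "think": "medium"
OpenAI-compatible:      "reasoning_effort": "medium"
Anthropic Messages API: "thinking": { "type": "enabled", "budget_tokens": <level-dependent> }
```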
By default, the Studio does not write provider settings or API keys anywhere. In Configuration, you can opt in:
- Tick Save config to write the selected provider's model, thinking level, pasted API key, and base URL override to the local `.env` immediately. While checked, provider config field changes are saved again automatically.
- Tick Save as default provider/model to also set `LLM_PROVIDER` so the saved provider/model is selected on the next load.
API keys are written to provider-specific variables such as OPENAI_API_KEY, ANTHROPIC_API_KEY, or OPENROUTER_API_KEY. The custom OpenAI-compatible provider uses LLM_API_KEY. This keeps saved keys tied to the provider you selected instead of storing them in browser storage.
Contracts can define top-level operations metadata. If omitted, the server treats both create and edit as enabled by default. Edit mode sends currentJson separately from context, asks the model to preserve unspecified fields, and expects the complete updated JSON object back.
Example edit request:

```json
{
  "currentJson": {
    "status": "open",
    "limit": 50
  },
  "input": "we want the last 20 closed tickets"
}
```

Expected edited JSON:

```json
{
  "status": "closed",
  "limit": 20
}
```

Instead of pasting a key into the UI each time, either tick Save config in the Studio or copy:
```
cp .env.example .env
```

Windows PowerShell:

```
Copy-Item .env.example .env
```

Then fill in values:
```
LLM_PROVIDER=openai
OPENAI_API_KEY=your_key_here
OPENAI_MODEL=gpt-4.1-mini
LLM_THINKING=medium
```

Ollama Cloud:

```
LLM_PROVIDER=ollama-cloud
OLLAMA_API_KEY=your_key_here
OLLAMA_MODEL=kimi-k2.5
OLLAMA_THINKING=medium
```

Local Ollama:

```
LLM_PROVIDER=ollama-local
OLLAMA_BASE_URL=http://127.0.0.1:11434/api
OLLAMA_MODEL=llama3.2
```

Anthropic:

```
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your_key_here
ANTHROPIC_MODEL=claude-sonnet-4-5
LLM_THINKING=medium
```

OpenRouter:

```
LLM_PROVIDER=openrouter
OPENROUTER_API_KEY=your_key_here
OPENROUTER_MODEL=openai/gpt-4.1-mini
```

| Variable | Default | Description |
|---|---|---|
| `STUDIO_PORT` | `5177` | Studio HTTP port. |
| `STUDIO_HOST` | `127.0.0.1` | Studio HTTP host. Keep local when entering keys. |
| `JSON_CONTRACTS_DIR` | `./json-contracts` | Contract folder. |
| `LLM_PROVIDER` | `openai` | Default provider ID shown in the UI. |
| `LLM_MODEL` | provider default | Generic model fallback. Provider-specific model env vars override this. |
| `LLM_THINKING` | `medium` | off, low, medium, high, xhigh, or auto. |
| `LLM_API_KEY` | unset | Generic API key fallback for custom/OpenAI-compatible testing. |
| `LLM_BASE_URL` | provider default | Generic base URL fallback. |
| `LLM_REQUEST_TIMEOUT_MS` | `120000` | Provider request timeout. |
| `OPENAI_API_KEY` | unset | OpenAI key. |
| `ANTHROPIC_API_KEY` | unset | Anthropic key. |
| `OLLAMA_API_KEY` | unset | Ollama Cloud bearer token. |
| `OPENROUTER_API_KEY` | unset | OpenRouter key. |
| `GROQ_API_KEY` | unset | Groq key. |
| `TOGETHER_API_KEY` | unset | Together AI key. |
| `FIREWORKS_API_KEY` | unset | Fireworks AI key. |
| `DEEPSEEK_API_KEY` | unset | DeepSeek key. |
| `MISTRAL_API_KEY` | unset | Mistral key. |
| `OLLAMA_BASE_URL` | `https://ollama.com/api` or provider preset | Ollama API base URL. Use `http://127.0.0.1:11434/api` for local Ollama. |
| `OLLAMA_MODEL` | provider default | Ollama model. |
| `OLLAMA_THINKING` | `medium` | Backward-compatible Ollama thinking default. Supports `xhigh`. |
| `OLLAMA_FORMAT` | `schema` | `schema`, `json`, or `none`. `schema` sends the contract schema as Ollama's structured output format. |
The Studio reads the same contract folder as the MCP server.
Fastest UI path:
- Open the Configuration sidebar.
- Paste a folder path into Contract repository folder.
- Click Use folder.
- Add or generate contracts in that folder.
- Click Reload contracts when files change.
This lets you point Studio at your app/tool's own contract repository without restarting.
You can also set the folder before startup:
```
JSON_CONTRACTS_DIR=/path/to/contracts npm start
```

On Windows PowerShell:

```
$env:JSON_CONTRACTS_DIR="C:\path\to\contracts"; npm start
```

The example server exposes UI routes and wraps the same tool handlers that the MCP stdio server exposes:
- `GET /api/contracts-dir` → current contract repository folder
- `POST /api/contracts-dir` → switch contract repository folder without restarting Studio
- `GET /api/contracts` → `list_contracts`
- `GET /api/contracts/:name` → `read_contract`
- `POST /api/contracts/:name/schema` → validate and save an edited schema back to the selected contract file, then reload contracts
- `POST /api/contracts/:name/rules` → validate and save edited rules back to the selected contract file, then reload contracts
- `POST /api/json-contract` → `get_json_contract`
- `POST /api/edit-contract` → `get_edit_contract`
- `POST /api/validate` → `validate_json`
- `POST /api/repair-contract` → `get_repair_contract`
- `POST /api/reload` → `reload_contracts`
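A few read-only examples against a running Studio (the contract name is illustrative):

```
curl -s http://127.0.0.1:5177/api/contracts-dir
curl -s http://127.0.0.1:5177/api/contracts
curl -s http://127.0.0.1:5177/api/contracts/support-tickets
```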
The Studio also exposes demo-only LLM routes:
- `GET /api/llm/providers` → provider presets and thinking levels, without secrets
- `GET /api/llm/config` → backward-compatible alias for `/api/llm/providers`
- `POST /api/llm/save-config` → write selected provider settings to the local `.env` when the user opts in
- `POST /api/llm/generate` → get a JSON contract, including optional `context`, call the selected provider, parse output, then validate
- `POST /api/llm/edit` → get an edit contract, including `currentJson` and optional `context`, call the selected provider, parse output, then validate
- `POST /api/llm/repair` → get a repair contract, call the selected provider, parse output, then validate
- `POST /api/contract-drafts/generate` → use the selected provider to draft a new contract
- `POST /api/contract-drafts/validate` → validate a generated contract draft without saving
- `POST /api/contract-drafts/save` → validate, write `{contractRepository}/{contractName}.json`, and reload contracts
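For example, to see the provider presets and thinking levels the UI offers (no secrets are returned):

```
curl -s http://127.0.0.1:5177/api/llm/providers
```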
These HTTP endpoints are only for the demo Studio. The production MCP server remains the stdio server started by json-contracts.