Build, run, and monitor AI/ML workflows on Jetty from any AI coding tool. Works with Claude Code, Cursor, VS Code Copilot, Windsurf, Zed, Gemini CLI, Codex CLI, and any MCP-compatible agent.
```shell
claude plugin marketplace add jettyio/agent-skill
claude plugin install jetty@jetty
```

Then run `/jetty-setup` to create an account, configure your API key, and run your first workflow in under 5 minutes.
Jetty uses the Model Context Protocol (MCP) to connect to your agent. Pick your tool below.
Plugin (recommended) — includes guided setup wizard, workflow skills, and MCP tools:
```shell
claude plugin marketplace add jettyio/agent-skill
claude plugin install jetty@jetty
```

Then run `/jetty-setup` to get started interactively.
MCP server only:
```shell
claude mcp add jetty -- npx -y jetty-mcp-server
```

Or add to your project's `.mcp.json`:
```json
{
  "mcpServers": {
    "jetty": {
      "command": "npx",
      "args": ["-y", "jetty-mcp-server"],
      "env": { "JETTY_API_TOKEN": "mlc_your_token" }
    }
  }
}
```

Add to `.cursor/mcp.json` in your project root:
```json
{
  "mcpServers": {
    "jetty": {
      "command": "npx",
      "args": ["-y", "jetty-mcp-server"],
      "env": { "JETTY_API_TOKEN": "mlc_your_token" }
    }
  }
}
```

Add to `.vscode/mcp.json` in your project root:
```json
{
  "servers": {
    "jetty": {
      "command": "npx",
      "args": ["-y", "jetty-mcp-server"],
      "env": { "JETTY_API_TOKEN": "mlc_your_token" }
    }
  }
}
```

Or run `MCP: Add Server` from the Command Palette.
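Whichever editor you use, a JSON typo in these config files typically makes the server fail to load with no visible error. As a suggested sanity check (not part of Jetty's documented setup), lint the file before restarting your editor; the temp path below is just for illustration, so swap in whichever config path your tool uses:

```shell
# Write the config (here to a temp path for illustration), then lint it.
cat > /tmp/jetty-mcp.json <<'EOF'
{
  "mcpServers": {
    "jetty": {
      "command": "npx",
      "args": ["-y", "jetty-mcp-server"],
      "env": { "JETTY_API_TOKEN": "mlc_your_token" }
    }
  }
}
EOF

# json.tool exits non-zero on malformed JSON, so the echo only runs on success.
python3 -m json.tool /tmp/jetty-mcp.json > /dev/null && echo "config JSON is valid"
```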
Add to `~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "jetty": {
      "command": "npx",
      "args": ["-y", "jetty-mcp-server"],
      "env": { "JETTY_API_TOKEN": "mlc_your_token" }
    }
  }
}
```

Add to your Zed settings (`~/.config/zed/settings.json`):
```json
{
  "context_servers": {
    "jetty": {
      "command": {
        "path": "npx",
        "args": ["-y", "jetty-mcp-server"],
        "env": { "JETTY_API_TOKEN": "mlc_your_token" }
      }
    }
  }
}
```

For Gemini CLI, install the extension:

```shell
gemini extensions install https://github.com/jettyio/agent-skill
```

During installation, you'll be prompted for your Jetty API token. The extension registers the MCP server and loads context automatically.
To install from a local clone instead:

```shell
gemini extensions install --path /path/to/agent-skill
```

Add to `~/.codex/config.json`:
```json
{
  "mcpServers": {
    "jetty": {
      "command": "npx",
      "args": ["-y", "jetty-mcp-server"],
      "env": { "JETTY_API_TOKEN": "mlc_your_token" }
    }
  }
}
```

To run the server directly:

```shell
JETTY_API_TOKEN=mlc_your_token npx -y jetty-mcp-server
```

The server communicates over stdio using the MCP protocol.
- Sign up at flows.jetty.io
- Go to Settings → API Tokens
- Create a token (starts with `mlc_`)
- Add it to your tool's config as shown above
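Since every config above embeds the token, a quick shell check of the prefix can catch a mispasted value early. This check is our suggestion based on the `mlc_` prefix described above, and `mlc_example_token` is a placeholder:

```shell
# Tokens created in the Jetty UI start with "mlc_"; fail fast otherwise.
JETTY_API_TOKEN="mlc_example_token"   # placeholder; paste your real token
case "$JETTY_API_TOKEN" in
  mlc_*) echo "token prefix looks right" ;;
  *)     echo "unexpected token prefix: $JETTY_API_TOKEN" >&2; exit 1 ;;
esac
```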
Once connected, ask your agent to help you get started. This works in any MCP-connected tool — just paste the prompt below into your agent's chat:
Set up Jetty for me. List my collections, then deploy the cute-feline-detector demo workflow using the `create-task` tool with this workflow JSON. Then run it with `run-workflow` using the prompt "a fluffy orange tabby cat sitting in a sunbeam". Poll `list-trajectories` until it completes, then show me the results with `get-trajectory`.
Before running the demo, store your AI provider key in your collection's environment variables. Ask your agent:
Use the Jetty `get-collection` tool to check my collection's environment variables. I need to add my OpenAI API key (or Gemini API key) so workflows can use it.
Claude Code users: just run `/jetty-setup` instead — the guided wizard handles all of this automatically.
Once connected, your agent has access to 14 tools:
| Tool | Description |
|---|---|
| `list-collections` | List all collections (workspaces) |
| `get-collection` | Get collection details and environment variable keys |
| `list-tasks` | List tasks (workflows) in a collection |
| `get-task` | Get task details and workflow definition |
| `create-task` | Create a new task with a workflow |
| `update-task` | Update a task's workflow or description |
| `run-workflow` | Run a workflow asynchronously |
| `run-workflow-sync` | Run a workflow synchronously (blocks until done) |
| `list-trajectories` | List recent workflow runs |
| `get-trajectory` | Get full run details with step outputs |
| `get-stats` | Get execution statistics |
| `add-label` | Label a trajectory (e.g., `quality=high`) |
| `list-step-templates` | List available step templates |
| `get-step-template` | Get template details and schema |
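The async tools pair up in a common pattern: `run-workflow` starts a run, `list-trajectories` is polled until the run finishes, and `get-trajectory` fetches the outputs. The loop below sketches that control flow in plain shell with a stubbed status check — no real API calls are made, and a real loop would read the status from `list-trajectories` and sleep between polls:

```shell
# Stubbed polling loop: pretend the trajectory reports "completed"
# on the third check, as a real run eventually would.
attempt=0
status="running"
while [ "$status" != "completed" ]; do
  attempt=$((attempt + 1))
  # Real version: ask the agent to call list-trajectories here,
  # then sleep a few seconds before the next check.
  if [ "$attempt" -ge 3 ]; then
    status="completed"
  fi
done
echo "done after $attempt polls"
```

Once the loop exits, a single `get-trajectory` call retrieves the step-by-step outputs.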
The plugin adds two skills for richer Claude Code integration:
`/jetty-setup`: an interactive wizard that handles account creation, API key storage, provider selection (OpenAI or Gemini), and runs a demo workflow — all in under 5 minutes.
`/jetty`: run Jetty commands in natural language, for example:

```
/jetty list collections
/jetty list tasks in my-project
/jetty run my-project/my-task with prompt="Hello, world!"
/jetty show the last trajectory for my-project/my-task
/jetty create a task called test-echo in my-project using text_echo
/jetty add label quality=high to trajectory abc123 in my-project/my-task
```
Ready-to-use templates are in `skills/jetty/templates/`:
| Template | Description |
|---|---|
| cute-feline-detector-openai | Prompt → DALL-E 3 image → GPT-4o cuteness judge |
| cute-feline-detector-gemini | Prompt → Gemini image → Gemini Flash cuteness judge |
| simple-chat | Basic LLM chat with system prompt |
| model-comparison | Compare two LLM responses with an AI judge |
| image-generation | Text-to-image with Replicate/FLUX |
| batch-processor | Fan-out parallel processing |
| document-summarizer | Configurable document summarization |
Use the `create-task` MCP tool to deploy any template to your collection.
For direct terminal usage without any AI tool:
```shell
export JETTY_API_TOKEN="mlc_your_token_here"
source path/to/skills/jetty/jetty-cli.sh

jetty_health                                             # Check connectivity
jetty_collections                                        # List collections
jetty_run_sync my-project my-task '{"prompt": "Hello"}'  # Run a workflow
jetty_trajectories my-project my-task                    # View execution history
jetty_help                                               # Full command reference
```

Jetty runs AI/ML workflows defined as JSON pipelines. Each workflow has:
- `init_params` — Input parameters (e.g., a prompt)
- `step_configs` — Pipeline steps (e.g., LLM call → image generation → judge)
- `steps` — Execution order
Results are stored as trajectories with full step-by-step outputs, downloadable files, and labeling support.
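As a rough sketch of that shape — the field values and the `text_echo` wiring below are illustrative assumptions, not Jetty's exact schema, so use `get-step-template` and the bundled templates for real definitions — a minimal workflow might look like:

```json
{
  "init_params": { "prompt": "Hello, world!" },
  "step_configs": {
    "echo": { "template": "text_echo" }
  },
  "steps": ["echo"]
}
```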
| Service | URL | Purpose |
|---|---|---|
| Flows API | flows-api.jetty.io | Run workflows, trajectories, files |
| Dock API | dock.jetty.io | Collections, tasks, datasets |
| Web UI | flows.jetty.io | Dashboard and management |
- Node.js 18+ (for the MCP server via `npx`)
- A Jetty API token (get one here)
- An OpenAI or Google Gemini API key (for image generation workflows)
| Problem | Solution |
|---|---|
| "Invalid or expired token" | Regenerate at flows.jetty.io → Settings → API Tokens |
| "Access denied" | Verify your token has access to the collection |
| MCP tools not showing up | Restart your editor/agent after config changes |
| Workflow fails | Use `get-trajectory` to inspect step-by-step outputs |
| `/jetty-setup` not found | Claude Code only — reinstall: `claude plugin marketplace add jettyio/agent-skill && claude plugin install jetty@jetty` |
MIT — see LICENSE for details.