OpenAPI-first SSH control plane for AI agents and Linux VPS automation.
Give any agent a structured HTTP layer for commands, file transfer, provisioning, tunnels, and server administration instead of brittle raw terminal SSH.
Join the Discord server to get updates, ask questions, and get a free API key for Free AI.
Guide | Setup Prompt | Prompt Library | AGENTS.md | llms.txt
Most AI agents are much better at using HTTP and OpenAPI than they are at improvising over raw SSH.
SSH ~ Api exists to give agents a clean, predictable layer for remote Linux work:
- connect to a VPS or other Linux machine
- upload, download, read, and write files
- run commands and read output
- inspect logs, processes, services, and system state
- perform provisioning and repeatable server operations
- automate both simple tasks and multi-step workflows through a stable API
This is not just "SSH over HTTP." It is a structured server-operations layer designed so agents can discover capabilities quickly and use them consistently.
| Raw SSH | SSH ~ Api |
|---|---|
| agent has to invent its own workflow | agent follows documented endpoints |
| output handling is ad hoc | output is structured JSON |
| file movement is custom logic | file operations are first-class routes |
| session state is implicit | sessions are explicit and trackable |
| distro assumptions are brittle | setup and service behavior are distro-aware where possible |
| hard to hand off between tools | OpenAPI makes the surface portable across agents |
- connect to a VPS and inspect the server state
- deploy application files over SFTP
- run install, build, restart, and debug commands
- read logs and diagnose failures
- manage services, environment variables, and cron jobs
- create archives and backups
- provision core packages on new Linux hosts
- open a tunnel to reach an internal service during debugging
- perform common day-2 operations without custom terminal glue
On Windows:

```shell
powershell -ExecutionPolicy Bypass -File .\scripts\bootstrap.ps1
```

On macOS/Linux:

```shell
./scripts/bootstrap.sh
```

Manual setup:

```shell
pip install -r requirements.txt
cp config.example.json config.json
python run.py
```

Or install as a package and run the entry point:

```shell
pip install .
ssh-api
```

With Docker:

```shell
docker compose up --build
```

After startup, the main endpoints are:
- Swagger UI: http://localhost:8754/docs
- ReDoc: http://localhost:8754/redoc
- OpenAPI JSON: http://localhost:8754/openapi.json
If your local config.json changes the port, use that port instead.
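Once the API is up, a quick way to confirm it is reachable and see every route is to pull `/openapi.json` and flatten it. This is a minimal sketch using only the standard library; it assumes the default port 8754, and the helper is a plain function you can reuse on any OpenAPI document.

```python
import json
from urllib.request import urlopen

def summarize_openapi(spec: dict) -> list[str]:
    """Return sorted 'METHOD /path' strings for every operation in an OpenAPI spec."""
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method in methods:
            ops.append(f"{method.upper()} {path}")
    return sorted(ops)

# Live usage (requires the API running locally):
# with urlopen("http://localhost:8754/openapi.json") as resp:
#     print("\n".join(summarize_openapi(json.load(resp))))
```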
Use the full version in AGENT_SETUP_PROMPT.md. The short version is below:
Set up SSH ~ Api in this repository.
Goals:
- create a local virtual environment
- install dependencies
- create config.json from config.example.json if it does not exist
- start the API locally
- report the base URL, /docs URL, /redoc URL, and /openapi.json URL
Rules:
- do not commit config.json
- do not place real credentials into tracked files
- if the API is already running, verify it instead of starting a duplicate copy
- after startup, fetch /openapi.json and read SSH_API_GUIDE.md so you understand the API surface
Preferred setup flow:
- on Windows, run scripts/bootstrap.ps1
- on macOS/Linux, run scripts/bootstrap.sh
- if those are unavailable, do the setup manually
When finished:
- tell me exactly how to stop the server
- tell me which host/port it is running on
- summarize the next step for using it against a VPS
- The agent fetches `/openapi.json`.
- The agent reads `SSH_API_GUIDE.md`.
- The agent creates a session with `POST /session/connect`.
- The agent stores the returned `session_id`.
- The agent uses the structured command, file, system, setup, tunnel, and firewall routes instead of improvising raw SSH behavior.
- The agent disconnects the session when the task is complete.
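The lifecycle above can be sketched in a few lines of stdlib Python. Treat this as a hedged outline, not the definitive client: `POST /session/connect` and `session_id` come from the flow above, but the request field names (`host`, `username`, `private_key_path`) and the disconnect route are assumptions — confirm them against `/openapi.json` and `SSH_API_GUIDE.md` first.

```python
import json
from urllib.request import Request, urlopen

BASE = "http://localhost:8754"  # default port; adjust if config.json changes it

def post_json(path: str, payload: dict) -> dict:
    """POST a JSON body to the API and return the parsed JSON response."""
    req = Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as resp:
        return json.load(resp)

def connect_payload(host: str, username: str, key_path: str) -> dict:
    """Build the body for POST /session/connect (field names are assumptions)."""
    return {"host": host, "username": username, "private_key_path": key_path}

# Example lifecycle (not executed here; route names assumed):
# session = post_json("/session/connect",
#                     connect_payload("203.0.113.10", "deploy", "~/.ssh/id_ed25519"))
# sid = session["session_id"]
# ...use the command, file, and system routes with sid...
# post_json("/session/disconnect", {"session_id": sid})
```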
Use these in this order:
- `/openapi.json` for the authoritative machine-readable contract
- `SSH_API_GUIDE.md` for workflow, conventions, and route guidance
- `AGENT_SETUP_PROMPT.md` for copy-paste installation instructions
- `AGENT_PROMPTS.md` for reusable server automation prompts
- `AGENTS.md` for repository-level agent context
- `llms.txt` and `llms-full.txt` for public agent-readable indexing
- Deploy a new app release to a VPS
- Read logs and diagnose a failing service
- Bootstrap a fresh Linux server with common packages
- Upload config files and restart services
- Create an archive backup before making changes
- Search the filesystem and inspect environment variables
- Open a tunnel to reach an internal service during debugging
Reusable prompts for those workflows are in AGENT_PROMPTS.md.
This project started with Debian and Ubuntu testing.
It now detects the remote platform and adapts where possible for:
- Debian and Ubuntu
- RHEL, CentOS, Rocky, AlmaLinux, Fedora
- SUSE and openSUSE
- Alpine
- Arch-based systems
The strongest compatibility areas are session handling, command execution, file transfer, and general SSH behavior. Package-management and firewall operations are distro-aware and best-effort where the required tools exist on the target host.
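To illustrate what "distro-aware" means in practice, here is a toy mapping from a detected distro family to its package-install command. This mirrors the concept only — it is not the API's actual internal logic, which lives behind the setup routes.

```python
def install_command(distro_id: str, packages: list[str]) -> str:
    """Map an /etc/os-release style distro id to a package-install command."""
    pkgs = " ".join(packages)
    families = {
        ("debian", "ubuntu"): f"apt-get install -y {pkgs}",
        ("rhel", "centos", "rocky", "almalinux", "fedora"): f"dnf install -y {pkgs}",
        ("sles", "opensuse"): f"zypper install -y {pkgs}",
        ("alpine",): f"apk add {pkgs}",
        ("arch",): f"pacman -S --noconfirm {pkgs}",
    }
    for ids, cmd in families.items():
        if distro_id in ids:
            return cmd
    raise ValueError(f"unsupported distro: {distro_id}")
```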
This API is powerful. Treat it like remote shell access.
- It has no built-in HTTP authentication.
- Put it behind a reverse proxy, VPN, IP allowlist, or another access-control layer before exposing it anywhere real.
- Keep `config.json` local and ignored by git.
- The `/config` endpoints can expose or modify locally stored default SSH credentials.
- Use least-privilege SSH accounts whenever possible.
- Treat firewall and destructive system routes as privileged operations.
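One concrete way to satisfy the access-control rule above is a reverse proxy with an IP allowlist and basic auth in front of the API. This is a hedged sketch, not a shipped config: the server name, allowlist range, and htpasswd path are placeholders, and the upstream port assumes the default 8754.

```nginx
server {
    listen 443 ssl;
    server_name ssh-api.internal.example.com;  # placeholder hostname

    location / {
        allow 10.0.0.0/8;      # placeholder: your trusted network only
        deny all;
        auth_basic "SSH ~ Api";
        auth_basic_user_file /etc/nginx/.htpasswd;  # placeholder path
        proxy_pass http://127.0.0.1:8754;           # default API port
    }
}
```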
- `README.md`: public overview and positioning
- `SSH_API_GUIDE.md`: full API workflow and endpoint guide
- `AGENT_SETUP_PROMPT.md`: copy-paste setup prompt for agents
- `AGENT_PROMPTS.md`: reusable prompts for VPS workflows
- `AGENTS.md`: repo-wide agent context
- `llms.txt`: public agent-readable index
- `llms-full.txt`: expanded agent-readable context
- `config.example.json`: safe public config template
- `CONTRIBUTING.md`: contribution flow
- `SECURITY.md`: security reporting guidance
- `CHANGELOG.md`: project change log
- `ROADMAP.md`: near-term roadmap
If you publish this on GitHub, the clean handoff is:
- humans start with `README.md`
- agents start with `/openapi.json`
- both should read `SSH_API_GUIDE.md`
- coding agents working inside the repo should read `AGENTS.md`
That is the intended way to use this project.
Get a key through Discord, point your client at the base URL, and start building.
Free AI is a public OpenAI-compatible API for builders who want working model access without the usual friction.
- No credit card
- No daily limit
- No prompt storage
- One Discord slash command to get started
If you can use the OpenAI SDK, you can use this API.
- Discord invite: https://discord.gg/rG3SYpeqYF
- Vanity invite: https://discord.gg/secrets
- Join the Discord server.
- Run `/signup`.
- Copy your key immediately.
If you lose it later:
- run `/resetkey`
- get a brand new key
- keep the same usage totals and account stats
- 30 requests per minute
- No daily limit
The per-minute cap exists so everyone gets a fair chance to use it.
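The simplest way to stay under the cap is client-side pacing: 30 requests per minute means at least 2 seconds between calls. A minimal sketch (the `Pacer` class is illustrative, not part of any SDK):

```python
import time

MIN_INTERVAL = 60.0 / 30  # 2 seconds between requests for a 30 req/min cap

class Pacer:
    """Sleep just long enough to keep a minimum interval between requests."""

    def __init__(self, min_interval: float = MIN_INTERVAL):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self, now=None, sleep=time.sleep):
        # now/sleep are injectable for testing; defaults use real time.
        now = time.monotonic() if now is None else now
        delay = max(0.0, self._last + self.min_interval - now)
        if delay:
            sleep(delay)
        self._last = now + delay

# Usage: call pacer.wait() before each request to the API.
```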
Prompt text and completion text are not stored.
Only request metadata is kept:
- model id
- input token count
- output token count
- request timestamp
- request status
- source IP
Base URL: `https://api.freetheai.xyz`
| Route | Method | Description |
|---|---|---|
| `/v1/health` | GET | Health check |
| `/v1/models` | GET | Current model list |
| `/v1/chat/completions` | POST | OpenAI-compatible chat completions |
Auth header:

```
Authorization: Bearer YOUR_API_KEY
```

List models:

```shell
curl https://api.freetheai.xyz/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```

Chat completion:

```shell
curl https://api.freetheai.xyz/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "glm/glm-5.1",
    "messages": [
      {
        "role": "user",
        "content": "Write a hello world in Python."
      }
    ]
  }'
```

Python (OpenAI SDK):

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.freetheai.xyz/v1",
)

completion = client.chat.completions.create(
    model="glm/glm-5.1",
    messages=[
        {"role": "user", "content": "Explain recursion in one paragraph."}
    ],
)

print(completion.choices[0].message.content)
```

JavaScript (OpenAI SDK):

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://api.freetheai.xyz/v1"
});

const completion = await client.chat.completions.create({
  model: "glm/glm-5.1",
  messages: [
    { role: "user", content: "Say hello in one sentence." }
  ]
});

console.log(completion.choices[0].message.content);
```

Plain fetch:

```javascript
const response = await fetch("https://api.freetheai.xyz/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "or/openai/gpt-oss-20b:free",
    messages: [
      { role: "user", content: "Summarize recursion in simple terms." }
    ]
  })
});

const data = await response.json();
console.log(data);
```

Use the exact ids returned by `GET /v1/models`.
Current model families:
- `glm/*`
- `kai/*`
- `opc/*`
- `or/*`
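Since model ids can rotate, discover them at runtime instead of hardcoding. A small sketch that filters the standard OpenAI-style `/v1/models` response by family prefix (the filtering helper is illustrative, not an SDK function):

```python
import json
from urllib.request import Request, urlopen

def ids_in_family(models_response: dict, prefix: str) -> list[str]:
    """Filter an OpenAI-style /v1/models response by id prefix, e.g. 'glm/'."""
    return sorted(
        m["id"] for m in models_response.get("data", []) if m["id"].startswith(prefix)
    )

# Live usage (requires a valid key):
# req = Request("https://api.freetheai.xyz/v1/models",
#               headers={"Authorization": "Bearer YOUR_API_KEY"})
# with urlopen(req) as resp:
#     print(ids_in_family(json.load(resp), "glm/"))
```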
Live site: https://freetheai.xyz/