qrud exists to solve a very practical problem: getting an HTTP API online in minutes, with real CRUD semantics, persistent storage, and enough flexibility to evolve without the weight of a full backend on day one.
That is what makes qrud more than a mock server. It gives product teams, frontend teams, and integration flows an efficient starting point: it boots fast, stores real JSON data, isolates data by workspace, accepts an OpenAPI contract, exposes observability, and runs on either SQLite or Postgres. It removes friction and gives development speed back to the team.
- Ready-to-use HTTP API with CRUD semantics.
- Persistence with SQLite or Postgres.
- Multi-tenant model through workspaces, with support for `x-workspace-id`.
- Automatic creation of the `default` workspace.
- `--use-default` to simplify local environments and demos.
- Support for any JSON payload.
- Collection listing with `term`, `limit`, `offset`, `order`, and `by`.
- Optional OpenAPI contract validation for routes and payloads.
- `GET /openapi.json` to inspect the active contract.
- CORS configurable through CLI flags or environment variables.
- Optional OpenTelemetry tracing.
- Configuration through flags or `QRUD_*` environment variables.
The current Dockerfile packages a prebuilt binary. That means the correct flow is:
- build the `release` binary;
- build the image;
- run the container and configure `qrud` through environment variables.
Build the binary:

```sh
cargo build --release
```

Build the image:

```sh
docker build -t qrud .
```

Run with in-memory SQLite:

```sh
docker run --rm -p 3000:3000 \
  -e QRUD_SQLITE=:memory: \
  -e QRUD_USE_DEFAULT=true \
  qrud
```

If you want file persistence:

```sh
docker run --rm -p 3000:3000 \
  -e QRUD_SQLITE=/data/qrud.db \
  -e QRUD_USE_DEFAULT=true \
  -v "$(pwd)/data:/data" \
  qrud
```

If you want Postgres:

```sh
docker run --rm -p 3000:3000 \
  -e QRUD_POSTGRES='postgres://user:pass@host.docker.internal:5432/qrud' \
  qrud
```

There is also `Dockerfile.artifact`, designed for setups where the binary is already separated into `artifacts/<arch>/<app_name>`:

```sh
mkdir -p artifacts/amd64
cp target/release/qrud artifacts/amd64/qrud

docker build -f Dockerfile.artifact \
  --build-arg TARGETARCH=amd64 \
  --build-arg APP_NAME=qrud \
  -t qrud:artifact .

docker run --rm -p 3000:3000 \
  -e QRUD_SQLITE=:memory: \
  qrud:artifact
```

If you prefer to run it without `cargo run`, build the binary and place it on your `PATH`.
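For a longer-lived local setup, the container and a Postgres instance can be described together in a Compose file. This is a sketch, not something shipped with the repository: the service names, credentials, and volume layout below are assumptions; only the `QRUD_*` variables come from the documentation above.

```yaml
# docker-compose.yml — hypothetical sketch; service names and credentials are assumptions
services:
  qrud:
    image: qrud                # built above with `docker build -t qrud .`
    ports:
      - "3000:3000"
    environment:
      QRUD_POSTGRES: postgres://qrud:qrud@db:5432/qrud
      QRUD_USE_DEFAULT: "true"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: qrud
      POSTGRES_PASSWORD: qrud
      POSTGRES_DB: qrud
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Pointing `QRUD_POSTGRES` at the `db` service name keeps the connection inside the Compose network, so no Postgres port needs to be published on the host.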
Build the executable:

```sh
cargo build --release
```

Install it for the current user:

```sh
mkdir -p ~/.local/bin
cp target/release/qrud ~/.local/bin/qrud
chmod +x ~/.local/bin/qrud
```

Run the binary:

```sh
qrud --port 3000 --sqlite
```

Or with a file-based database:

```sh
qrud --port 3000 --sqlite ./qrud.db --use-default
```

Or with Postgres:

```sh
qrud --port 3000 --postgres "postgres://user:pass@localhost:5432/qrud"
```

For local development, this is the most direct path.
In-memory SQLite:

```sh
cargo run -- --port 3000 --sqlite
```

In-memory SQLite with automatic default workspace:

```sh
cargo run -- --port 3000 --sqlite --use-default
```

SQLite in a file:

```sh
cargo run -- --port 3000 --sqlite ./qrud.db
```

Postgres:

```sh
cargo run -- --port 3000 --postgres "postgres://user:pass@localhost:5432/qrud"
```

CORS configured through CLI:

```sh
cargo run -- --port 3000 --cors \
  --cors-origin http://localhost:5173 \
  --cors-method GET,POST,PUT,PATCH,DELETE,OPTIONS \
  --cors-header content-type,x-workspace-id \
  --cors-credentials true
```

Allow all CORS:

```sh
cargo run -- --port 3000 --cors-allow
```

OpenTelemetry:

```sh
cargo run -- --port 3000 --otel \
  --otel-protocol grpc \
  --otel-endpoint http://localhost:4317 \
  --otel-service-name qrud \
  --otel-service-version 0.1.0 \
  --otel-tracer-name qrud-server \
  --otel-sampler parentbased_traceidratio \
  --otel-sampler-arg 0.25
```

Health check:

```sh
curl http://localhost:3000/health
```

Create a document:

```sh
curl -X POST http://localhost:3000/users \
  -H 'Content-Type: application/json' \
  -H 'x-workspace-id: default' \
  -d '{"name":"Ana","role":"admin"}'
```

List documents. Collection listings return up to 20 items by default; pass `limit` when you need a different page size:

```sh
curl "http://localhost:3000/users?limit=10&offset=0" \
  -H 'x-workspace-id: default'
```

Upsert with PUT and an id in the path:

```sh
curl -X PUT http://localhost:3000/users/7b3a4b2f-5a7e-4a3f-9f4e-8e6a2b0f8e11 \
  -H 'Content-Type: application/json' \
  -H 'x-workspace-id: default' \
  -d '{"name":"Bea"}'
```

Every CLI flag can also be configured through a `QRUD_*` environment variable. When both are present, the CLI flag wins.
- `QRUD_HOST`: bind host. Default is `0.0.0.0`.
- `QRUD_PORT`: HTTP port. Default is `3000`.
- `QRUD_SQLITE`: SQLite path or `:memory:`.
- `QRUD_POSTGRES`: Postgres connection URL.
- `QRUD_USE_DEFAULT`: enables the `default` workspace automatically.
- `QRUD_SCHEMA`: OpenAPI contract from file, URL, inline JSON/YAML, or Base64.
- `QRUD_CORS`: enables CORS.
- `QRUD_CORS_ALLOW`: allows origins, methods, and headers with `*`.
- `QRUD_CORS_ORIGINS`: comma-separated list.
- `QRUD_CORS_METHODS`: comma-separated list.
- `QRUD_CORS_HEADERS`: comma-separated list.
- `QRUD_CORS_CREDENTIALS`: enables `Access-Control-Allow-Credentials`.
- `QRUD_OTEL`: enables OpenTelemetry.
- `QRUD_OTEL_ENDPOINT`: OTLP endpoint.
- `QRUD_OTEL_PROTOCOL`: `grpc` or `http`.
- `QRUD_OTEL_SERVICE_NAME`: service name.
- `QRUD_OTEL_SERVICE_VERSION`: reported version.
- `QRUD_OTEL_TRACER_NAME`: tracer name.
- `QRUD_OTEL_SAMPLER`: sampling strategy.
- `QRUD_OTEL_SAMPLER_ARG`: numeric sampler argument.
Example:

```sh
export QRUD_HOST=127.0.0.1
export QRUD_PORT=8080
export QRUD_SQLITE=./data.db
export QRUD_USE_DEFAULT=true
export QRUD_CORS_ALLOW=true
export QRUD_OTEL=true
export QRUD_OTEL_PROTOCOL=grpc
export QRUD_OTEL_ENDPOINT=http://localhost:4317
cargo run
```

qrud can start with an OpenAPI contract and use that contract to validate routes and payloads. That is especially useful when the goal is to simulate an API with more discipline without giving up speed.
Inspect the active contract:

```sh
curl http://localhost:3000/openapi.json
```

Load a contract at runtime:

```sh
curl -X PUT http://localhost:3000/openapi/contract \
  -H 'Content-Type: application/json' \
  -d '{"openapi":"3.0.3","info":{"title":"demo","version":"1.0.0"},"paths":{}}'
```

Remove the active contract:

```sh
curl -X DELETE http://localhost:3000/openapi/contract
```

Start with a contract from a local file:

```sh
cargo run -- --port 3000 --sqlite --schema ./example.yaml
```

With a remote URL:

```sh
cargo run -- --port 3000 --sqlite --schema https://example.com/openapi.json
```

With inline JSON:

```sh
cargo run -- --port 3000 --sqlite --schema '{"openapi":"3.0.3","info":{"title":"x","version":"1"},"paths":{}}'
```

With Base64:

```sh
SCHEMA_B64=$(printf '%s' '{"openapi":"3.0.3","info":{"title":"x","version":"1"},"paths":{}}' | base64)
cargo run -- --port 3000 --sqlite --schema "$SCHEMA_B64"
```

Only local `#/` references are supported inside the contract.
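As an illustration of a contract that stays within that constraint, here is a minimal sketch. The `/users` path and `User` schema are assumptions invented for this example, not part of qrud; the point is that the `$ref` targets a local `#/components` fragment:

```yaml
# Hypothetical contract sketch — paths and schemas are assumptions for illustration.
openapi: "3.0.3"
info:
  title: demo
  version: "1.0.0"
paths:
  /users:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/User"   # local reference: supported
      responses:
        "201":
          description: created
components:
  schemas:
    User:
      type: object
      required: [name]
      properties:
        name: { type: string }
        role: { type: string }
```

A `$ref` pointing at another file or a remote URL would not be resolved.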
A workspace is the data namespace. Its name must be unique and use dash-case. When the database is empty, `default` is created automatically. With `--use-default`, the `x-workspace-id` header becomes optional.

A document is any JSON value stored under a path key (`pk`), such as `/users`, `/orders/2024`, or any other structure that fits your domain.
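The dash-case rule can be checked client-side before calling the API. A minimal sketch, assuming dash-case means lowercase alphanumeric segments separated by single dashes (the exact pattern qrud enforces is an assumption here):

```python
import re

# Assumed dash-case pattern: lowercase/digit segments joined by single dashes.
DASH_CASE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_workspace_name(name: str) -> bool:
    """Return True when `name` looks like a valid dash-case workspace name."""
    return bool(DASH_CASE.match(name))

print(is_valid_workspace_name("my-team"))  # True
print(is_valid_workspace_name("My_Team"))  # False: uppercase and underscore
print(is_valid_workspace_name("-edge-"))   # False: leading/trailing dash
```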
Main endpoints:
- `GET /health`: returns `200 OK`.
- `GET /info`: reports details about the connected database.
- `GET /openapi.json`: returns the current specification.
- `POST /workspaces`: creates a workspace.
- `GET /workspaces`: lists active workspaces.
- `GET /workspaces/{workspace}`: fetches a workspace.
- `PUT /workspaces/{workspace}`: updates name and description.
- `PATCH /workspaces/{workspace}`: partially updates it.
- `DELETE /workspaces/{workspace}`: performs a soft delete.
Document routes work in two formats:
- Header-based: `/{*pk}` with `x-workspace-id: <workspace>`.
- Path-based: `/workspaces/{workspace}/{*pk}`.
Main rules:
- `POST` creates a document and ignores `id` in the payload.
- `PUT` performs an upsert. If the path ends with a UUID, that UUID becomes the document id.
- `PATCH` performs a shallow merge at the root level and requires a JSON object.
- `DELETE` returns `204` when the document exists and `404` when it does not.
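Shallow merge means only top-level keys are touched: a nested object in the patch replaces the stored one wholesale instead of being merged recursively. A sketch of those semantics (illustrative only, not qrud's actual implementation):

```python
def shallow_merge(stored: dict, patch: dict) -> dict:
    """Root-level merge: keys from `patch` replace keys in `stored` as-is."""
    return {**stored, **patch}

stored = {"name": "Ana", "prefs": {"theme": "dark", "lang": "pt"}}
patch = {"prefs": {"theme": "light"}}

merged = shallow_merge(stored, patch)
print(merged)
# The nested `prefs` object is replaced, not deep-merged:
# {'name': 'Ana', 'prefs': {'theme': 'light'}}
```

This is why a `PATCH` body must be a JSON object: there is no root level to merge into otherwise.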
Collection listings support:
- `term`: case-insensitive search.
- `limit` and `offset`: pagination. `limit` defaults to `20`; pass `?limit=...` for a different page size.
- `order`: `asc` or `desc`.
- `by`: `created_at` or `updated_at`.
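These parameters compose the way you would expect: filter by `term`, sort by the chosen timestamp, then slice with `offset` and `limit`. A sketch of those semantics over an in-memory list (illustrative only, not qrud's query code):

```python
def list_documents(docs, *, term=None, limit=20, offset=0, order="asc", by="created_at"):
    """Filter by case-insensitive `term`, sort by `by`, then page with offset/limit."""
    if term is not None:
        docs = [d for d in docs if term.lower() in str(d).lower()]
    docs = sorted(docs, key=lambda d: d[by], reverse=(order == "desc"))
    return docs[offset:offset + limit]

docs = [
    {"name": "Ana", "created_at": "2024-01-01"},
    {"name": "Bea", "created_at": "2024-02-01"},
    {"name": "Caio", "created_at": "2024-03-01"},
]
print(list_documents(docs, order="desc", limit=2))
# newest two first: Caio, then Bea
```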
Every returned document includes:
- `$id`
- `$createdAt`
- `$updatedAt`
- `$deletedAt`, when present
- `value`, when the stored payload is not an object
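So an object payload gets the metadata fields merged alongside its own keys, while a scalar or array payload is wrapped under `value`. A sketch of what the two shapes can look like (the ids and timestamps below are made up; only the field names follow the list above):

```python
import json

# Hypothetical response for an object payload: metadata sits next to your keys.
object_doc = {
    "$id": "7b3a4b2f-5a7e-4a3f-9f4e-8e6a2b0f8e11",
    "$createdAt": "2024-03-01T12:00:00Z",
    "$updatedAt": "2024-03-02T08:30:00Z",
    "name": "Ana",
    "role": "admin",
}

# Hypothetical response for a non-object payload: the stored JSON goes under `value`.
scalar_doc = {
    "$id": "1c9e6679-7425-40de-944b-e07fc1f90ae7",
    "$createdAt": "2024-03-01T12:00:00Z",
    "$updatedAt": "2024-03-01T12:00:00Z",
    "value": [1, 2, 3],
}

print(json.dumps(scalar_doc, indent=2))
```

The `$`-prefixed names keep the metadata from colliding with keys in your own payload.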
To debug requests and responses in more detail:
```sh
RUST_LOG=debug cargo run -- --port 3000 --sqlite
```

This project is licensed under the MIT License. See `LICENSE`.