latlng is an open source geospatial object engine written in Rust. The repository contains a portable core, native server transports, pluggable storage backends, geofence and webhook eventing, a TypeScript SDK package, and a public browser wasm package built on the same engine.
Implemented:
- portable Rust core for object storage, spatial indexing, search, metadata fields, and JSON subdocument updates
- `INTERSECTS` supports real geometry clipping for `OBJECTS` output; non-object outputs ignore `clip`, and clipped results may normalize to GeoJSON
- native HTTP/JSON server with shared auth, runtime config rewrite, Prometheus metrics, generated OpenAPI v3 docs, and admin endpoints
- native WebSocket command/event transport with header auth, in-band auth, and async subscription streaming
- native async Cap'n Proto RPC using generated schema bindings and `capnp-rpc`
- native single-leader follower replication over Cap'n Proto streaming with checksum-based resume/resync
- in-memory, append-only-file, and SQLite storage backends
- geofencing with static and roaming geofences, `NODWELL`, channel subscriptions, and durable HTTP POST webhook delivery
- wasm bindings for `latlng-core` and a browser Web Worker package for in-browser demos and local geospatial workloads
Intentional scope boundaries:
- the CLI covers common query and admin flows, but it is still not a full command-for-command shell
- replication is native-only and intentionally scoped to single-leader follower mode rather than broader clustering/consensus
The same engine is used in three shapes:
- Embedded Rust library: call `latlng-core` directly with any storage backend (see the sketch below).
- Native server: expose the engine through HTTP, WebSocket, and Cap'n Proto.
- Browser wasm package: compile the portable crates to `wasm32-unknown-unknown` and run the in-memory engine behind a browser Web Worker API.
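For the embedded shape, here is a minimal sketch of what calling the core directly could look like. The names used (`Engine::new`, `MemoryStorage`, `set_point`, `nearby`) are illustrative assumptions, not the published `latlng-core` API; consult the crate docs for the real types.

```rust
// Hypothetical embedding sketch: the API names below are assumptions,
// not the published latlng-core interface.
use latlng_core::Engine;
use latlng_storage_memory::MemoryStorage;

fn main() {
    // Any storage backend can sit behind the portable core.
    let mut engine = Engine::new(MemoryStorage::default());
    engine.set_point("fleet", "truck-1", 52.52, 13.405).unwrap();

    // Radius query around the same coordinate, in meters.
    let hits = engine.nearby("fleet", 52.52, 13.405, 500.0).unwrap();
    println!("{} objects within 500 m", hits.len());
}
```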
The native server keeps `Arc<LatLngNative<S>>` directly, routes request-style synchronous core work through a dedicated bounded native executor, and relies on a portable global control gate plus per-collection cells inside `latlng-core`. In practice that gives native parallel reads, steady-state collection-local concurrency, and explicit backpressure for core request execution without changing the single-threaded wasm behavior model.
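The bounded-executor-plus-backpressure combination is a standard pattern. A self-contained sketch of the idea, using only the standard library and not the actual `latlng-server` implementation:

```rust
use std::sync::{mpsc::{sync_channel, SyncSender, TrySendError}, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

/// Fixed worker pool behind a bounded queue. `try_submit` fails fast when
/// the queue is full, which is the backpressure signal a transport can
/// turn into an explicit "busy" response instead of unbounded queueing.
struct BoundedExecutor {
    tx: SyncSender<Job>,
}

impl BoundedExecutor {
    fn new(threads: usize, queue_limit: usize) -> Self {
        let (tx, rx) = sync_channel::<Job>(queue_limit);
        let rx = Arc::new(Mutex::new(rx));
        for _ in 0..threads {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Take the next job; exit once all senders are gone.
                let job = match rx.lock().unwrap().recv() {
                    Ok(job) => job,
                    Err(_) => return,
                };
                job();
            });
        }
        Self { tx }
    }

    fn try_submit(&self, job: Job) -> Result<(), &'static str> {
        self.tx.try_send(job).map_err(|e| match e {
            TrySendError::Full(_) => "queue full (backpressure)",
            TrySendError::Disconnected(_) => "executor shut down",
        })
    }
}
```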
At a high level, requests flow like this:
HTTP / WebSocket / Cap'n Proto / browser wasm API
|
v
transport adapter
|
v
latlng-core
/ | \
v v v
latlng-index latlng-geofence latlng-storage
| / | \
v v v v
latlng-geo memory AOF SQLite
Mutation events flow separately:
latlng-core mutation
|
v
primary log + geofence registry
/ | \
v v v
channel subs WS/Capnp durable webhook outbox
live only streams -> SQLite queue -> HTTP POST
The detailed architecture is documented in docs/architecture.md. Configuration, persistence, and release notes live in docs/config.md, docs/persistence.md, and docs/release-checklist.md.
- `crates/latlng-auth`: shared bearer/JWT validation used by HTTP, WebSocket, and Cap'n Proto
- `crates/latlng-config`: runtime config model plus JSON/TOML load/save helpers
- `crates/latlng-platform`: portable lock and mailbox abstractions for native and wasm builds
- `crates/latlng-geo`: geometry types, bounding boxes, geohash helpers, and JSON path utilities
- `crates/latlng-index`: spatial index plus filtering, sorting, and output shaping
- `crates/latlng-storage`: backend trait plus shared persistence contracts
- `crates/latlng-core`: command engine, collection lifecycle, geofence registration, and server info/config
- `crates/latlng-geofence`: geofence matching, roaming state, subscriptions, and event generation
- `crates/latlng-storage-memory`: in-memory backend
- `crates/latlng-storage-aof`: append-only file backend with compaction, integrity, backup, and restore support
- `crates/latlng-storage-sqlite`: SQLite backend for embedded/native use
- `crates/latlng-webhook-queue`: SQLite-backed durable webhook queue materialized from the primary log
- `crates/latlng-schema`: Cap'n Proto schema plus generated Rust bindings
- `crates/latlng-capnp`: async Cap'n Proto RPC transport
- `crates/latlng-http`: HTTP/JSON transport built on `axum`
- `crates/latlng-ws`: WebSocket event transport
- `crates/latlng-endpoints`: webhook delivery helpers
- `crates/latlng-replication`: follower state, replication client/coordinator, and checksum/chunk helpers
- `crates/latlng-server`: runnable native server binary
- `tools/latlng-cli`: operational CLI for common query and admin flows
- `tools/latlng-benchmark`: benchmark harness for writes, queries, geofences, and webhook delivery
- `tools/latlng-server-benchmark`: black-box localhost benchmark harness for the real `latlng-server` process
- `packages/sdk`: TypeScript SDK for the HTTP and WebSocket server surfaces
- `packages/wasm`: public browser-only Web Worker package around the wasm core for demos and local in-browser geospatial workloads
- `packages/example-wasm`: static Vite site showcasing `@latlng/wasm` for Cloudflare Pages
Start the native server:
cargo run -p latlng-server

By default it listens on:
- HTTP: `127.0.0.1:7421`
Cap'n Proto is disabled by default. Enable it when native Cap'n Proto clients or leader/follower replication are needed:
cargo run -p latlng-server -- --capnp-enabled=true

The native server also supports JSON or TOML config files:
cargo run -p latlng-server -- --config ./latlng.json

The canonical generated OpenAPI v3 document for the native HTTP server is available at:
GET /api-docs
It describes the stable native HTTP surface with typed request and response schemas. Diagnostic and replication-management routes are intentionally not part of the stable public API document. Release builds also attach the same generated document as openapi.json; locally it can be generated with latlng-server --print-openapi or make openapi.
The subscriber mailbox used for channel, WebSocket, Cap'n Proto, and webhook event delivery defaults to 4096 events per subscriber. You can override it with:
LATLNG_SUBSCRIBER_QUEUE_CAPACITY=8192 cargo run -p latlng-server
cargo run -p latlng-server -- --subscriber-queue-capacity 8192

The dedicated native core executor defaults to one worker per available CPU and a bounded queue sized at `threads * 64`. You can override it with:
LATLNG_NATIVE_EXECUTOR_THREADS=8 LATLNG_NATIVE_EXECUTOR_QUEUE_LIMIT=512 cargo run -p latlng-server
cargo run -p latlng-server -- --native-executor-threads 8 --native-executor-queue-limit 512

Webhook HTTP delivery uses a per-request timeout that defaults to 5000ms. You can override it with:
LATLNG_WEBHOOK_TIMEOUT_MS=10000 cargo run -p latlng-server
cargo run -p latlng-server -- --webhook-timeout-ms 10000

Webhook delivery concurrency is bounded and defaults to 128 in-flight HTTP deliveries. You can override it with:
LATLNG_WEBHOOK_CONCURRENCY_LIMIT=256 cargo run -p latlng-server
cargo run -p latlng-server -- --webhook-concurrency-limit 256

Durable webhook delivery also has queue and retry settings:
LATLNG_WEBHOOK_QUEUE_PATH=./data/webhooks.sqlite cargo run -p latlng-server
LATLNG_WEBHOOK_RETRY_COUNT=8 cargo run -p latlng-server
LATLNG_WEBHOOK_RETRY_INITIAL_BACKOFF_MS=200 cargo run -p latlng-server
LATLNG_WEBHOOK_RETRY_MAX_BACKOFF_MS=30000 cargo run -p latlng-server
LATLNG_WEBHOOK_LEASE_MS=30000 cargo run -p latlng-server
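Assuming the usual doubling curve (the exact schedule is an implementation detail of the queue), these retry knobs combine roughly as in this sketch:

```rust
/// Bounded exponential backoff as implied by the retry settings:
/// doubles per attempt from the initial value, capped at the maximum.
fn retry_backoff_ms(attempt: u32, initial_ms: u64, max_ms: u64) -> u64 {
    initial_ms.saturating_mul(1u64 << attempt.min(63)).min(max_ms)
}

fn main() {
    // Defaults (200 ms initial, 30000 ms cap, 8 retries) give:
    // 200, 400, 800, 1600, 3200, 6400, 12800, 25600 ms.
    for attempt in 0..8 {
        println!("retry {attempt}: {} ms", retry_backoff_ms(attempt, 200, 30_000));
    }
}
```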
Store and query a point through the CLI:

cargo run -p latlng-cli -- --base-url http://127.0.0.1:7421 set-point fleet truck-1 52.52 13.405
cargo run -p latlng-cli -- --base-url http://127.0.0.1:7421 get fleet truck-1
cargo run -p latlng-cli -- --base-url http://127.0.0.1:7421 nearby fleet 52.52 13.405 500
cargo run -p latlng-cli -- collection-create fleet
cargo run -p latlng-cli -- fset fleet truck-1 speed 42
cargo run -p latlng-cli -- fget fleet truck-1 speed
cargo run -p latlng-cli -- expire fleet truck-1 300
cargo run -p latlng-cli -- ttl fleet truck-1
cargo run -p latlng-cli -- jset fleet truck-1 properties.status active
cargo run -p latlng-cli -- jget fleet truck-1 properties.status
cargo run -p latlng-cli -- del fleet truck-1
cargo run -p latlng-cli -- timeout set 1.5
cargo run -p latlng-cli -- readonly yes
cargo run -p latlng-cli -- config-rewrite

Hook and channel geofences can be created from GeoJSON files. The file may include `properties.collection`, `properties.detect`, `properties.commands`, and `properties.mode`; otherwise pass `--collection`, `--detect`, `--commands`, or `--mode` on the CLI.
cargo run -p latlng-cli -- hook-set fleet-hook https://example.com/hook --geojson ./geofence.geojson --collection fleet
cargo run -p latlng-cli -- hooks
cargo run -p latlng-cli -- hook-get fleet-hook
cargo run -p latlng-cli -- channel-set fleet-channel --geojson ./geofence.geojson --collection fleet
cargo run -p latlng-cli -- channels
cargo run -p latlng-cli -- channel-del fleet-channel

Inspect and maintain an offline AOF file:
cargo run -p latlng-cli -- aof-verify ./data/appendonly.aof
cargo run -p latlng-cli -- aof-backup ./data/appendonly.aof ./backup/appendonly.backup.json
cargo run -p latlng-cli -- aof-restore ./backup/appendonly.backup.json ./restore/appendonly.aof

Or use plain HTTP:
curl -sS -X POST http://127.0.0.1:7421/collections/fleet/objects/truck-1 \
-H 'content-type: application/json' \
-d '{"object":{"Point":{"lat":52.52,"lon":13.405,"z":null}}}'
curl -sS -X POST http://127.0.0.1:7421/collections/fleet/search/nearby \
-H 'content-type: application/json' \
-d '{"lat":52.52,"lon":13.405,"meters":500,"options":{}}'

Or use the TypeScript SDK:
cd packages/sdk
npm install
npm run build

import { LatLngClient, point } from "@latlng/sdk";
const client = new LatLngClient({
leaderUrl: "http://127.0.0.1:7421",
token: "dev-token",
});
await client.setPoint("fleet", "truck-1", { lat: 52.52, lon: 13.405 });
const object = await client.get("fleet", "truck-1");
const nearby = await client.nearby("fleet", {
lat: 52.52,
lon: 13.405,
meters: 500,
});

The repository includes:
- a multi-stage production `Dockerfile`
- a single-node `docker-compose.yml`
- a leader/follower `docker-compose.replication.yml`
- sample mounted configs under `examples/docker`
The published image is config-file driven and starts with `latlng-server --config /etc/latlng/latlng.toml`.
Container contract:
| Purpose | Container value | Notes |
|---|---|---|
| HTTP, WebSocket, metrics, and API traffic | port `7421` | publish with `-p 7421:7421` |
| Cap'n Proto RPC and replication traffic | port `7422` | publish only when `capnp_enabled = true` or replication clients need host access |
| Default config path | `/etc/latlng/latlng.toml` | mount TOML or JSON config here, or override the command / `LATLNG_CONFIG` |
| Persistent data path | `/var/lib/latlng` | mount this when using AOF persistence or the durable webhook queue |
| Runtime user | `latlng` | the image runs as a non-root user |
Container configs should bind to 0.0.0.0, not 127.0.0.1, when the port must be
reachable through Docker port publishing. The sample configs already do this.
Build the image:
docker build -t latlng-server .

Published release images are available from Docker Hub as `tobilg/latlng:<tag>`, for example `tobilg/latlng:v0.1.0`.
Run a single node from the published image with a mounted config file and persistent data volume:
docker run --rm \
--name latlng \
-p 7421:7421 \
-p 7422:7422 \
-v "$(pwd)/examples/docker/single-node.toml:/etc/latlng/latlng.toml:ro" \
-v latlng-data:/var/lib/latlng \
tobilg/latlng:v0.1.0

The bundled single-node example config uses:
- AOF: `/var/lib/latlng/appendonly.aof`
- webhook queue: `/var/lib/latlng/webhook-queue.sqlite`
- bearer token: `dev-token`
Check the HTTP endpoint with the sample bearer token:
curl -sS -H "Authorization: Bearer dev-token" http://127.0.0.1:7421/ping

For an HTTP-only container, omit `-p 7422:7422` and set `capnp_enabled = false` in the mounted config.
Single-node compose:
docker compose up --build

Leader/follower compose:
docker compose -f docker-compose.replication.yml up --build

That brings up:
- leader HTTP on `127.0.0.1:7421`
- leader Cap'n Proto on `127.0.0.1:7422`
- follower HTTP on `127.0.0.1:17421`
- follower Cap'n Proto on `127.0.0.1:17422`
The follower example config follows the leader through Docker DNS using:
follow_host = "latlng-leader"
follow_port = 7422
replication_credential = "replication-secret"
If you want to mount a different config path, either:
- override the command: `docker run ... latlng-server --config /some/other/path.toml`
- or set `LATLNG_CONFIG=/some/other/path.toml`
Config precedence is unchanged in containers (lowest to highest):
- defaults
- config file
- environment variables
- CLI flags
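In code terms, each setting resolves to the first value present in the highest layer, as in this minimal sketch:

```rust
// Later layers override earlier ones: flag beats env beats file beats default.
fn resolve<T>(default: T, file: Option<T>, env: Option<T>, flag: Option<T>) -> T {
    flag.or(env).or(file).unwrap_or(default)
}

fn main() {
    // The config file sets 7421, the environment overrides it to 8080,
    // and no CLI flag is given, so the environment value wins.
    let port = resolve(7421u16, Some(7421), Some(8080), None);
    assert_eq!(port, 8080);
}
```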
latlng-server reads JSON or TOML config files via --config or LATLNG_CONFIG.
The complete server config option set is:
| Name | Default | Description |
|---|---|---|
| `production_mode` | `false` | Enables strict production startup guardrails. |
| `listen_addr` | `"127.0.0.1:7421"` | HTTP listen address. |
| `capnp_enabled` | `false` | Enables the Cap'n Proto RPC and replication listener. |
| `capnp_listen_addr` | `"127.0.0.1:7422"` | Cap'n Proto listen address. |
| `server_id` | `"<generated uuid>"` | Stable server identity used in replication status. |
| `storage` | `"memory"` | Storage backend. Use `memory` or `aof` with a path. |
| `read_only` | `false` | Rejects mutating commands when `true`. |
| `command_timeouts` | `{}` | Per-command timeout overrides in seconds. |
| `subscriber_queue_capacity` | `4096` | Per-subscriber event queue capacity. |
| `webhook_queue_path` | `null` | SQLite webhook queue path. Defaults near the AOF or current directory. |
| `webhook_timeout_ms` | `5000` | HTTP timeout for webhook deliveries. |
| `webhook_concurrency_limit` | `128` | Maximum concurrent webhook delivery attempts. |
| `webhook_retry_count` | `8` | Maximum webhook retry attempts before dead-lettering. |
| `webhook_retry_initial_backoff_ms` | `200` | Initial webhook retry backoff. |
| `webhook_retry_max_backoff_ms` | `30000` | Maximum webhook retry backoff. |
| `webhook_lease_ms` | `30000` | Webhook job lease duration. |
| `native_executor_threads` | `<available CPU parallelism>` | Native worker thread count for core operations. |
| `native_executor_queue_limit` | `<native_executor_threads * 64>` | Native executor queue limit. |
| `aof_writer_queue_limit` | `4096` | AOF writer queue limit. |
| `aof_group_commit_delay_ms` | `1` | Maximum AOF group commit delay. |
| `aof_group_commit_max_requests` | `128` | Maximum requests per AOF commit cycle. |
| `follow_host` | `null` | Leader host for follower replication. |
| `follow_port` | `null` | Leader Cap'n Proto port for follower replication. |
| `replication_credential` | `null` | Dedicated credential for replication streams. |
| `replication_batch_size` | `512` | Maximum entries per replication stream response. |
| `replication_reconnect_backoff_ms` | `1000` | Follower reconnect backoff after failures. |
| `http_cors_enabled` | `false` | Enables HTTP CORS middleware. |
| `http_cors_allowed_origins` | `[]` | Allowed CORS origins. Avoid `*` with auth. |
| `http_cors_allowed_methods` | `["GET","POST","PUT","DELETE","OPTIONS"]` | Allowed CORS methods. |
| `http_cors_allowed_headers` | `["authorization","content-type","x-request-id"]` | Allowed CORS headers. |
| `http_cors_max_age_seconds` | `null` | Optional CORS preflight cache max-age. |
| `http_max_body_bytes` | `10485760` | Maximum accepted HTTP request body size. |
| `http_request_timeout_ms` | `30000` | Maximum HTTP request duration. |
| `http_rate_limit_enabled` | `false` | Enables a simple global HTTP token-bucket rate limit. |
| `http_rate_limit_requests_per_second` | `1000` | Global HTTP rate-limit refill rate. |
| `http_rate_limit_burst` | `1000` | Global HTTP rate-limit burst capacity. |
| `http_principal_rate_limit_enabled` | `false` | Enables per-principal HTTP token-bucket rate limiting. |
| `http_principal_rate_limit_requests_per_second` | `100` | Per-principal HTTP rate-limit refill rate. |
| `http_principal_rate_limit_burst` | `200` | Per-principal HTTP rate-limit burst capacity. |
| `logging_enabled` | `true` | Enables structured server logging. |
| `log_format` | `"compact"` | Log output format. Values: `compact`, `json`. |
| `log_level` | `"info"` | Tracing filter level. |
| `log_destination` | `"stderr"` | Log destination. Values: `stderr`, `stdout`, `file`, `none`. |
| `log_file_path` | `null` | Required when log destination is `file`. |
| `require_auth` | `false` | Rejects unauthenticated requests when `true`. |
| `bearer_token` | `null` | Static full-admin bearer token. |
| `disable_bearer_token` | `false` | Disables static bearer-token authentication even when configured. |
| `jwt_secret` | `null` | HMAC JWT verification secret. |
| `jwt_public_key_pem` | `null` | PEM public key for asymmetric JWT validation. |
| `jwt_issuer` | `null` | Expected JWT issuer. |
| `jwt_audience` | `null` | Expected JWT audience. |
| `jwt_algorithm` | `null` | JWT algorithm override. |
| `jwks_url` | `null` | JWKS endpoint URL. |
| `jwks_provider_id` | `null` | Provider ID for logs/docs. |
| `jwks_refresh_interval_seconds` | `300` | JWKS background refresh interval. |
| `jwks_cache_ttl_seconds` | `3600` | JWKS cache TTL. |
| `jwks_http_timeout_ms` | `3000` | JWKS HTTP request timeout. |
| `jwt_leeway_seconds` | `0` | JWT clock-skew leeway. |
Use latlng-server --print-config-reference or latlng-cli config-reference to inspect the machine-readable reference for the installed binary. Operational guidance and storage config shapes are documented in docs/config.md.
HTTP:
- implemented in `latlng-http`
- supports static bearer token auth, HMAC JWTs, PEM-configured asymmetric JWTs, and JWKS-backed asymmetric JWTs
- the static bearer token remains a full-admin service/dev token unless `disable_bearer_token` is enabled
- production guardrails can require an auth source with `require_auth`, `LATLNG_REQUIRE_AUTH=1`, or `--require-auth`
- claims-based authz is collection-scoped and uses the `latlng_permissions` claim (see the sketch after this list)
- `queries:read` and `subscriptions:read` are separate scopes
- `/metrics` returns Prometheus text exposition and `metrics:read` is separate from `admin:*`
- see metrics.md for the Prometheus metric contract
- `latlng-server` currently serves plain HTTP, WebSocket, and Cap'n Proto; production deployments should terminate TLS at an upstream reverse proxy, load balancer, ingress, or service mesh
- bearer/JWT credentials should only cross trusted networks or TLS-terminated paths
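As an illustration only, a collection-scoped token payload might look like the sketch below; the actual `latlng_permissions` layout is defined by the auth docs, so treat every field name and scope string here as a hypothetical example:

```rust
// Hypothetical claims shape; field names and scope strings are assumptions.
use serde_json::json;

fn main() {
    let claims = json!({
        "sub": "service-dispatch",
        "iss": "https://auth.example.com",
        "aud": "latlng",
        "latlng_permissions": {
            "fleet": ["queries:read", "subscriptions:read"]
        }
    });
    println!("{claims:#}");
}
```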
WebSocket:
- implemented in `latlng-ws`
- supports `auth`, `subscribe`, `psubscribe`, `ping`, and `quit` command envelopes
- accepts bearer/JWT auth during the upgrade path or through the first `auth` frame
- enforces `subscriptions:read` separately from request/response query access
- streams geofence events from the shared registry used by the other transports
Cap'n Proto:
- implemented in `latlng-capnp`
- uses generated schema bindings from `crates/latlng-schema/schema/latlng.capnp`
- runs on real async `capnp-rpc`, not the previous blocking framed transport
- uses session auth via the `auth(token)` RPC when bearer/JWT auth is enabled, then enforces the same action-level authz model as the native HTTP routes
- `timeout`, `configRewrite`, `readonly`, and the other shipped admin RPCs route into the same runtime config model as HTTP
- also exposes the internal native-only replication stream used by followers
- disabled by default; enable with `capnp_enabled = true`, `LATLNG_CAPNP_ENABLED=true`, or `--capnp-enabled=true` when Cap'n Proto clients or replication are needed
CLI:
- uses typed `clap` subcommands with `--help` output for command documentation
- automatically attaches `Authorization: Bearer ...` when `LATLNG_TOKEN` is set
- covers `ping`, `healthz`, `server`, `info`, `collections`, `metrics`, `bounds`, `stats`, `get`, `set-point`, `nearby`, `config-get`, `config-set`, `config-validate`, `config-reference`, `config-rewrite`, `readonly`, `timeout`, `aofshrink`, `aof-verify`, `aof-backup`, and `aof-restore`
Full auth/authz documentation, claim examples, and the config reference live in the repository's docs/ directory.
Native query execution:
- `latlng-server` now enables the internal `parallel` query feature by default
- large `NEARBY`, `WITHIN`, `INTERSECTS`, `SCAN`, and `SEARCH` queries snapshot only their prefiltered candidate set, then run native-only parallel candidate evaluation while preserving deterministic ordering and cursor behavior (see the sketch after this list)
- wasm builds stay on the serial path and do not depend on rayon
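A generic sketch of that snapshot-then-parallel-evaluate shape (illustrative candidate types, not the engine's actual code) shows why a final deterministic sort keeps ordering and cursors stable no matter how the parallel work is scheduled:

```rust
use rayon::prelude::*;

/// Evaluate a prefiltered candidate snapshot in parallel, then restore a
/// deterministic order so results are stable across runs and thread counts.
fn evaluate(candidates: Vec<(String, f64)>, max_meters: f64) -> Vec<(String, f64)> {
    let mut hits: Vec<(String, f64)> = candidates
        .into_par_iter()
        .filter(|(_, dist)| *dist <= max_meters)
        .collect();
    // Sort by distance, tie-broken by ID, independent of scheduling.
    hits.sort_by(|a, b| {
        a.1.partial_cmp(&b.1).unwrap().then_with(|| a.0.cmp(&b.0))
    });
    hits
}
```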
Storage modes:
- default: in-memory
- AOF server mode: set `LATLNG_AOF_PATH=/path/to/latlng.aof`
- SQLite: use `latlng-storage-sqlite` directly from embedded/native applications
- JSON/TOML config files can also select storage mode and auth/runtime settings
- `require_auth` fails startup when no bearer token or JWT verifier is configured
- `subscriber_queue_capacity` controls the bounded per-subscriber event mailbox size and defaults to `4096`
- `native_executor_threads` controls the number of dedicated native core worker threads and defaults to available CPU parallelism
- `native_executor_queue_limit` controls the bounded native core submission queue and defaults to `native_executor_threads * 64`
- `webhook_timeout_ms` controls the HTTP request timeout for outbound webhook delivery and defaults to `5000`
- `webhook_concurrency_limit` controls the maximum number of concurrent outbound webhook deliveries and defaults to `128`
- `webhook_queue_path` controls the SQLite materialized queue path used by the durable webhook outbox
- `webhook_retry_count` defaults to `8` retries after the initial attempt
- `webhook_retry_initial_backoff_ms` defaults to `200`
- `webhook_retry_max_backoff_ms` defaults to `30000`
- `webhook_lease_ms` defaults to `30000`
- `aof_writer_queue_limit` controls the bounded submission queue for the AOF writer thread and defaults to `4096`
- `aof_group_commit_delay_ms` controls how long the AOF writer waits to coalesce concurrent append requests and defaults to `1`
- `aof_group_commit_max_requests` caps how many logical append requests can share one durable sync cycle and defaults to `128`
- `server_id` uniquely identifies the node for replication self-checks and reconnect validation
- `follow_host`/`follow_port` configure follower mode at startup
- `replication_credential` configures the dedicated follower-to-leader authentication secret
- `replication_batch_size` controls how many storage entries are fetched per replication chunk and defaults to `512`
- `replication_reconnect_backoff_ms` controls follower reconnect delay and defaults to `1000`
- `http_cors_enabled` enables HTTP CORS; keep it disabled unless browsers need direct access
- `http_cors_allowed_origins`, `http_cors_allowed_methods`, `http_cors_allowed_headers`, and `http_cors_max_age_seconds` define the CORS policy
- `http_rate_limit_enabled`, `http_rate_limit_requests_per_second`, and `http_rate_limit_burst` configure a process-global limiter for accidental overload protection (see the token-bucket sketch after this list)
- `http_principal_rate_limit_enabled`, `http_principal_rate_limit_requests_per_second`, and `http_principal_rate_limit_burst` configure per-principal HTTP buckets for JWT subjects, static bearer service traffic, open access, and anonymous/invalid requests
- `logging_enabled`, `log_format`, `log_level`, `log_destination`, and `log_file_path` configure structured HTTP, WebSocket, and Cap'n Proto access logs
- durable webhook recovery across restart requires a durable primary log, so use AOF-backed server storage for that guarantee
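For intuition, the HTTP limiters above are token buckets: tokens refill at the configured requests-per-second rate up to the burst capacity, and each request spends one. A minimal sketch of those semantics, not the server's actual implementation:

```rust
use std::time::Instant;

/// Token bucket: refills at `rate` tokens/second, capped at `burst`.
struct TokenBucket {
    rate: f64,
    burst: f64,
    tokens: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        Self { rate, burst, tokens: burst, last: Instant::now() }
    }

    /// Returns true if the request may proceed, false if it is rate-limited.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        self.tokens = (self.tokens + elapsed * self.rate).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```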
You can set the AOF tuning values in all three configuration layers:
- config file fields: `aof_writer_queue_limit`, `aof_group_commit_delay_ms`, `aof_group_commit_max_requests`
- env vars: `LATLNG_AOF_WRITER_QUEUE_LIMIT`, `LATLNG_AOF_GROUP_COMMIT_DELAY_MS`, `LATLNG_AOF_GROUP_COMMIT_MAX_REQUESTS`
- CLI flags: `--aof-writer-queue-limit`, `--aof-group-commit-delay-ms`, `--aof-group-commit-max-requests`
Example config:
listen_addr = "127.0.0.1:7421"
capnp_enabled = false
capnp_listen_addr = "127.0.0.1:7422"
[storage]
type = "aof"
path = "/var/lib/latlng/appendonly.aof"
aof_writer_queue_limit = 4096
aof_group_commit_delay_ms = 1
aof_group_commit_max_requests = 128

Equivalent env/CLI overrides:
LATLNG_AOF_WRITER_QUEUE_LIMIT=4096 \
LATLNG_AOF_GROUP_COMMIT_DELAY_MS=1 \
LATLNG_AOF_GROUP_COMMIT_MAX_REQUESTS=128 \
cargo run -p latlng-server -- \
--aof /var/lib/latlng/appendonly.aof \
--aof-writer-queue-limit 4096 \
--aof-group-commit-delay-ms 1 \
--aof-group-commit-max-requests 128

Validate a config before deployment:
latlng-server --config /etc/latlng/server.toml --check-config
latlng-cli config-validate /etc/latlng/server.toml
latlng-cli config-reference

Enable browser CORS and JSON access logs:
http_cors_enabled = true
http_cors_allowed_origins = ["https://app.example.com"]
http_cors_allowed_methods = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
http_cors_allowed_headers = ["authorization", "content-type", "x-request-id"]
http_cors_max_age_seconds = 600
logging_enabled = true
log_format = "json"
log_level = "info"
log_destination = "file"
log_file_path = "/var/log/latlng/server.log"

Geofence and hook behavior:
- channel geofences are registered inside the core engine and exposed through WebSocket and Cap'n Proto streams
- hook and channel definitions are persisted in the primary log and replayed on restart
- webhook enqueue intents, retries, acknowledgements, and dead-letter transitions are recorded in the primary log
- mutating commands and their webhook enqueue intents are persisted in one atomic storage batch on durable backends
- startup recovery applies primary-log entries incrementally as they are replayed instead of buffering the whole log first
- `latlng-server` rebuilds a SQLite webhook queue from that log on startup and dispatches due jobs from the queue
- replication is driven from committed storage entries, not the ephemeral channel/pubsub path
- followers authenticate with a dedicated replication credential, verify leader identity, and resume from the local last sequence when checksum verification matches
- checksum mismatch triggers a full local reset/resync from sequence `0`
- followers are forced into read-only mode while following and reject normal reads until they have caught up once
- followers do not deliver durable webhooks from replicated log records; webhook dispatch stays leader-local
- WebSocket and Cap'n Proto subscription streams are wake-driven on native instead of using fixed poll intervals
- the native webhook outbox is wake-driven too: new work, queue rebuilds, and due retry deadlines wake the dispatcher instead of a steady idle poll
- request-style native HTTP, WebSocket, and Cap'n Proto core calls use the dedicated native executor; long-lived subscription bridges and background outbox work stay outside that pool
- outbound webhook requests use the configured `webhook_timeout_ms` timeout and are processed concurrently up to `webhook_concurrency_limit`
- failed deliveries use exponential backoff and become dead-lettered after `webhook_retry_count` retries
- webhook delivery is at-least-once; payloads and headers include stable event/job IDs for receiver-side deduplication (see the sketch after this list)
- roaming geofences, `ROAM`, and `NODWELL` logic are implemented in the portable geofence layer
- WebSocket and Cap'n Proto event streams are exercised against the same live server in integration tests to keep payload parity honest
- subscriber mailboxes are bounded, in-memory queues; when full they drop the oldest events
- `FLUSHDB` is a full reset: it clears collections, channel geofences, webhook geofences, geofence state, and the durable webhook queue
- live subscribers stay connected across `FLUSHDB`, but any buffered pre-flush events are discarded so post-flush streams only contain post-flush state
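Because delivery is at-least-once, receivers should deduplicate on the stable IDs. A minimal sketch of that receiver-side pattern (the `event_id` key is an assumed name, and a real receiver would persist seen IDs rather than hold them in memory):

```rust
use std::collections::HashSet;

/// Idempotent webhook receiver: process each event ID exactly once.
struct WebhookReceiver {
    seen: HashSet<String>,
}

impl WebhookReceiver {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    fn handle(&mut self, event_id: &str, payload: &str) {
        // insert() returns false if the ID was already present.
        if !self.seen.insert(event_id.to_string()) {
            return; // duplicate redelivery: acknowledge and skip
        }
        println!("processing {event_id}: {payload}");
    }
}
```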
AOF behavior:
- complete but corrupt entries fail startup with a codec error
- a truncated final AOF frame is ignored during replay so crash-tail recovery can still rebuild the valid prefix (see the sketch after this list)
- when the truncated frame was a batched write, the whole batch is discarded rather than replaying a partial command-plus-webhook tail
- concurrent AOF appends are funneled through a dedicated writer thread and can share one flush/sync cycle without changing the “success means durable” contract
- logical compaction preserves current objects, active hooks/channels, and unresolved webhook jobs
- offline integrity verification reports entry count, sequence range, durable prefix bytes, truncated-tail status, and checksum
- offline backups are inspectable JSON files with version, source path, timestamp, sequence, and checksum metadata
- restores refuse to overwrite an existing target unless `--force` is supplied
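A simplified model of the replay contract described above, assuming length-prefixed frames (the real codec and checksum format are internal to `latlng-storage-aof`):

```rust
use std::io::{self, Read};

/// Replay frames from an AOF-like stream: a truncated tail is ignored so
/// the valid prefix survives a crash, while a complete-but-undecodable
/// frame would be a hard codec error in the real implementation.
fn replay<R: Read>(mut log: R) -> io::Result<Vec<Vec<u8>>> {
    let mut entries = Vec::new();
    loop {
        let mut len_buf = [0u8; 4];
        match log.read_exact(&mut len_buf) {
            Ok(()) => {}
            // Clean end of log, or a truncated length prefix at the tail.
            Err(e) if e.kind() == io::ErrorKind::UnexpectedEof => break,
            Err(e) => return Err(e),
        }
        let len = u32::from_le_bytes(len_buf) as usize;
        let mut frame = vec![0u8; len];
        match log.read_exact(&mut frame) {
            // A real codec would checksum-verify here and fail on mismatch.
            Ok(()) => entries.push(frame),
            // Truncated final frame: discard it, keep the valid prefix.
            Err(e) if e.kind() == io::ErrorKind::UnexpectedEof => break,
            Err(e) => return Err(e),
        }
    }
    Ok(entries)
}
```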
Two benchmark layers now exist:
- `tools/latlng-benchmark`: in-process engine-level benchmarking for `latlng-core`
- `tools/latlng-server-benchmark`: black-box localhost benchmarking for the real `latlng-server` binary over HTTP
The server benchmark tool is manual/local only for now. It is intentionally not wired into standard CI or nightly automation. Benchmark JSON outputs are written into the local benchmark-results/ directory, which is intentionally gitignored.
Benchmark binaries are local engineering tools and are intentionally not included in GitHub release binary archives. Release archives contain only latlng-server and latlng-cli; openapi.json is attached separately to GitHub Releases.
Build and run it with the Makefile entry points:
make bench-server
make bench-server-capnp
make bench-server-aof
make bench-server-tile38
make bench-server-compare OLD=benchmark-results/bench-server-memory.json NEW=benchmark-results/bench-server-aof.json
make bench-server-compare-capnp OLD=benchmark-results/bench-server-memory.json NEW=benchmark-results/bench-server-capnp.json
make bench-server-compare-tile38 OLD=benchmark-results/bench-server-memory.json NEW=benchmark-results/bench-server-tile38.json

Useful overrides:
make bench-server BENCH_FLAGS="--warmup-secs 1 --measure-secs 2 --seed-objects 1000 --startup-records 1000"
make bench-server-capnp BENCH_FLAGS="--warmup-secs 1 --measure-secs 2 --seed-objects 1000"
make bench-server-aof BENCH_FLAGS="--concurrency-list 8,32 --measure-secs 10"
make bench-server-tile38 BENCH_FLAGS="--tile38-server-bin /usr/local/bin/tile38-server --scenario get_object_read"

The benchmark tool reports:
- throughput in ops/sec
- mean, `p50`, `p95`, and `p99` latency
- error counts
- AOF startup replay duration as a separate scenario
Latlng runs default to HTTP; pass --latlng-transport capnp or use make bench-server-capnp to isolate protocol overhead from core engine work.
Tile38 runs use tile38-server by default and write a separate JSON file with engine: "tile38".
The default Tile38 mode is in-memory-style --appendonly no; pass --tile38-appendonly yes when comparing Tile38 AOF behavior. The startup replay scenario is latlng-only and is skipped for Tile38 runs. The standard scenario set includes the geofence-heavy fenced_set_point_write case.
Interpret the results as local engineering signals for before/after comparison, not as product SLA numbers.
cargo fmt --all
cargo clippy --workspace --all-targets
cargo test --workspace
cargo test -p latlng-server --test server_smoke
cargo check --target wasm32-unknown-unknown -p latlng-core --features wasm-bindings
cd packages/sdk && npm ci && npm run typecheck && npm run build && npm run docs:api && npm run test
cd packages/wasm && npm ci && npm run typecheck && npm run build && npm run test
cd packages/example-wasm && npm ci && npm run typecheck && npm run build