Feather is a light full-RPC node for the Gonka inference blockchain. It runs the full network node (chain + API) without any ML node, and adds a ClickHouse-backed analytics and indexing layer on top.
Runs on a Raspberry Pi 5. Feather is light enough to run on commodity ARM64 hardware - a Pi 5 with 8 GB RAM and a 128 GB+ SD card is enough to serve the full RPC surface on your LAN. See `docs/raspberry-pi/README.md` for a step-by-step guide.
- Full RPC surface - every endpoint a real Gonka participant exposes, unified behind a single gateway port
- No ML node required - inference endpoints return clean errors; chain queries, participants, epochs, governance, stats all work
- ClickHouse indexer - every block, transaction, and event indexed into ClickHouse for fast analytical queries
- Enhanced analytics API - tx search by sender/type/time, token usage by model, developer stats, epoch summaries
- P2P node - syncs directly from Gonka mainnet peers, fully self-sovereign
The stack consists of five containers:
| Container | Image | Role |
|---|---|---|
| `feather-chain` | `ghcr.io/product-science/inferenced` | CometBFT chain node (P2P, RPC, REST, gRPC) |
| `feather-api` | `ghcr.io/product-science/api` | Gonka dAPI node (participants, models, epochs, bridge) |
| `feather-rpc-proxy` | Built from `Dockerfile.rpcproxy` | Upstream RPC proxy used during state-sync |
| `feather-clickhouse` | `clickhouse/clickhouse-server` | Columnar analytics database |
| `feather-gateway` | Built from `Dockerfile` | Single-port gateway, caching layer, ClickHouse analytics API |
- Docker and Docker Compose v2
- A public IP if you want inbound P2P connections (optional for syncing)
```bash
# 1. Configure
cd deploy
cp config.env .env
# Edit .env - at minimum set P2P_EXTERNAL_ADDRESS to your public IP:26656

# 2. Launch
docker compose up -d --build

# 3. Check status
docker compose logs -f feather
```

Once the chain node finishes syncing, the gateway is available at http://localhost:8080.
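While the node is catching up, sync progress can be tracked from the standard CometBFT `/status` response served by the gateway. A minimal sketch - the inline JSON is a truncated sample of the response shape; in practice pipe the real thing from `curl -s http://localhost:8080/status`:

```shell
# Truncated sample of a CometBFT /status response; fetch the real one with:
#   status_json=$(curl -s http://localhost:8080/status)
status_json='{"result":{"sync_info":{"latest_block_height":"123456","catching_up":false}}}'

# Extract the height with POSIX sed (no jq dependency).
height=$(printf '%s' "$status_json" | sed -n 's/.*"latest_block_height":"\([0-9]*\)".*/\1/p')
echo "height=$height"
```

When `catching_up` flips to `false`, the node is synced and the full RPC surface is live.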
All configuration is via environment variables set in `deploy/.env`. Copy `deploy/config.env` as a starting point.
These are passed to the inferenced container.
| Variable | Default | Description |
|---|---|---|
| `KEY_NAME` | `feather-rpc` | Cosmos keyring key name used by the chain node |
| `CHAIN_ID` | `gonka-mainnet` | Network chain ID |
| `FEATHER_MONIKER` | `feather-rpc` | P2P moniker advertised to peers |
| `SEED_NODE_RPC_URL` | `http://node2.gonka.ai:8000/chain-rpc/` | Bootstrap RPC for peer/genesis discovery |
| `SEED_NODE_P2P_URL` | `tcp://node2.gonka.ai:5000` | Bootstrap P2P address |
| `P2P_EXTERNAL_ADDRESS` | (empty) | Required. Your public IP:PORT for P2P (e.g. `203.0.113.50:26656`) |
| `P2P_PORT` | `26656` | Host port mapped to the chain P2P listener |
| `SYNC_WITH_SNAPSHOTS` | `false` | Enable CometBFT state-sync for fast initial catch-up |
| `RPC_SERVER_URL_1` | (empty) | First RPC peer for state-sync light-client verification |
| `RPC_SERVER_URL_2` | (empty) | Second RPC peer for state-sync light-client verification |
Seed node defaults will change. The current defaults for `SEED_NODE_RPC_URL` and `SEED_NODE_P2P_URL` point directly at a genesis node (node2.gonka.ai). These will be updated to bootstrap from `rpc.gonka.gg` instead, to avoid overloading genesis infrastructure. When that happens the defaults in `config.env` and `docker-compose.yml` will be updated accordingly.
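Putting the chain-node variables together, a minimal `deploy/.env` might look like this. The IP is a placeholder from RFC 5737 documentation space; every value other than `P2P_EXTERNAL_ADDRESS` simply restates a default from the table above:

```bash
# deploy/.env - chain node settings (illustrative values)
KEY_NAME=feather-rpc
CHAIN_ID=gonka-mainnet
FEATHER_MONIKER=feather-rpc
# Required: your public IP:PORT for inbound P2P
P2P_EXTERNAL_ADDRESS=203.0.113.50:26656
P2P_PORT=26656
```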
These are passed to the Feather gateway container.
| Variable | Default | Description |
|---|---|---|
| `LISTEN_ADDR` | `:8080` | Gateway bind address. Use `127.0.0.1:8080` to restrict to localhost only |
| `FEATHER_PORT` | `8080` | Host port mapped to the gateway |
| `CHAIN_RPC_URL` | `http://chain-node:26657` | Internal URL of the chain CometBFT RPC |
| `CHAIN_REST_URL` | `http://chain-node:1317` | Internal URL of the chain Cosmos REST (LCD) |
| `CHAIN_GRPC_URL` | `chain-node:9090` | Internal URL of the chain gRPC endpoint |
| `API_URL` | `http://api:9000` | Internal URL of the Gonka API public port |
| `API_ADMIN_URL` | `http://api:9200` | Internal URL of the Gonka API admin port |
| `CLICKHOUSE_DSN` | `clickhouse://feather:feather@clickhouse:9000/feather` | ClickHouse connection string |
| `CHAIN_LOG_PATH` | (empty) | Path to shared chain log file (enables state-sync progress on dashboard) |
| Variable | Default | Description |
|---|---|---|
| `INDEXER_ENABLED` | `true` | Enable block/tx/event indexing into ClickHouse |
| `INDEXER_START_HEIGHT` | `0` | Height to begin indexing from (0 = chain start or state-sync height) |
| `INDEXER_BATCH_SIZE` | `100` | Number of blocks to index per batch |
| `INDEXER_POLL_INTERVAL` | `2s` | How often the indexer polls for new blocks |
| `STATS_COLLECTOR_ENABLED` | `true` | Collect inference stats from the API node |
| `STATS_COLLECTOR_INTERVAL` | `30s` | How often inference stats are collected |
| `PAYLOAD_COLLECTOR_ENABLED` | `true` | Collect inference payload metadata from the API node |
| `PAYLOAD_COLLECTOR_INTERVAL` | `30s` | How often payload metadata is collected |
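For example, to index only recent history and poll less aggressively, the defaults above can be overridden in `deploy/.env`. The numbers here are illustrative, not recommendations:

```bash
# Start indexing at a recent height instead of chain start
INDEXER_START_HEIGHT=250000
# Larger batches and a slower poll reduce load while backfilling
INDEXER_BATCH_SIZE=500
INDEXER_POLL_INTERVAL=5s
```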
| Variable | Default | Description |
|---|---|---|
| `CH_PASSWORD` | `feather` | ClickHouse password (used by both ClickHouse server and gateway) |
| `CH_HTTP_PORT` | `8123` | Host port mapped to ClickHouse HTTP interface (useful for debugging) |
Syncing from genesis can take days. State-sync downloads a recent snapshot and starts from there:
```bash
SYNC_WITH_SNAPSHOTS=true
RPC_SERVER_URL_1=http://144.76.1.155:26657
RPC_SERVER_URL_2=http://46.4.52.158:26657
```

The indexer will automatically begin indexing from the snapshot height.
Feather wraps the stock `inferenced` image with `deploy/chain-entrypoint.sh`, which applies a config overlay after the node initializes. These settings are persisted in the `chain-data` volume and survive restarts.
| Setting | Value | Why |
|---|---|---|
| `indexer` | `"null"` | Disables built-in tx indexer - ClickHouse replaces it |
| `log_level` | `"warn"` | Reduces log noise and I/O overhead |
| `prometheus` | `true` | Enables the Prometheus metrics endpoint on :26660 |
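In `config.toml` terms, the overlay amounts to something like the following. Section placement follows the stock CometBFT config layout; the exact patching mechanism lives in `deploy/chain-entrypoint.sh`, so treat this as a sketch rather than its literal output:

```toml
# Top level
log_level = "warn"

[tx_index]
indexer = "null"

[instrumentation]
prometheus = true
```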
| Setting | Value | Why |
|---|---|---|
| `pruning` | `"custom"` | Aggressive pruning to minimize disk usage |
| `pruning-keep-recent` | `1000` | Keep only the last 1000 states |
| `pruning-interval` | `100` | Run pruning every 100 blocks |
| `min-retain-blocks` | `1000` | Minimum block retention for state-sync serving |
| `iavl-cache-size` | `2000000` | Larger IAVL tree cache for faster state reads |
| `iavl-disable-fastnode` | `false` | Keeps fast-node enabled for query performance |
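The equivalent `app.toml` fragment looks roughly like this. Value quoting follows the stock Cosmos SDK template (the pruning knobs are strings there); again, a sketch of the overlay, not its literal output:

```toml
pruning = "custom"
pruning-keep-recent = "1000"
pruning-interval = "100"
min-retain-blocks = 1000
iavl-cache-size = 2000000
iavl-disable-fastnode = false
```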
These are set in docker-compose.yml and are not user-configurable:
| Flag | Value | Why |
|---|---|---|
| `REST_API_ACTIVE` | `true` | Ensures the Cosmos REST API is exposed |
| `SNAPSHOT_INTERVAL` | `0` | Node does not produce local snapshots (saves disk I/O) |
| `SNAPSHOT_KEEP_RECENT` | `0` | No local snapshots retained |
The gateway includes an in-memory LRU response cache (10,000 entries) with endpoint-specific TTLs. Only GET/HEAD requests are cached; mutations (POST broadcasts) always pass through.
| Endpoint family | Cache TTL | Notes |
|---|---|---|
| `/status` | 1s | |
| `/block?height=N` | 24h | Height-pinned blocks are immutable |
| `/block` (latest) | 1s | |
| `/block_results?height=N` | 24h if height specified, 1s otherwise | |
| `/tx?hash=...` | 24h | Committed txs are immutable |
| `/validators` | 30s | Served from ClickHouse when snapshot available |
| `/genesis` | 1h | |
| `/net_info` | 10s | |
| `/consensus_state` | 2s | |
| `/cosmos/bank/*` | 3s | |
| `/cosmos/staking/*` | 15s | |
| `/cosmos/gov/*` | 30s | |
| `/cosmos/*` (other) | 5s | |
| `/ibc/*`, `/cosmwasm/*` | 10s | |
| `/productscience/*`, `/inference/*` | 10s | |
| `/swagger/*` | 1h | |
| `/tx_search` | Served from ClickHouse | Full historical search, Tendermint-compatible response |
A background cache warmer pre-fetches hot endpoints (`/status`, `/block`, `/block?height=H`, `/validators`, `/net_info`) each time a new block is indexed.
Cache stats are available at `GET /dashboard/api/cache`.
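Because `/tx_search` is served from ClickHouse with a Tendermint-compatible response, standard Tendermint query syntax works against it. A sketch of composing such a query from the shell - the sender address is a made-up placeholder, and only the three characters this query actually uses are percent-encoded (a general-purpose encoder would cover more):

```shell
# Standard Tendermint query syntax; gonka1example is a placeholder address.
query="tx.height>100 AND message.sender='gonka1example'"

# Percent-encode just the characters this query uses: space, ' and >.
encoded=$(printf '%s' "$query" | sed -e 's/ /%20/g' -e "s/'/%27/g" -e 's/>/%3E/g')

# Tendermint expects the query wrapped in double quotes (%22).
echo "http://localhost:8080/tx_search?query=%22${encoded}%22"
```

The printed URL can then be fetched with `curl` against a running gateway.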
Feather ships with a built-in web UI. All pages are served from the gateway port with no external dependencies.
| Page | Path | What it shows |
|---|---|---|
| Node Dashboard | `/dashboard` | Main overview - node identity, sync status, latest blocks, recent transactions, connected peers, validator set, registered endpoints, and a live request log with RPS sparkline |
| CometBFT Metrics | `/dashboard/metrics` | Prometheus instrumentation from the chain node - block rate, consensus rounds, mempool size, peer count, Go runtime stats (goroutines, heap, GC), and historical metric graphs |
| Network Status | `/dashboard/network` | Chain RPC status, network info, consensus state, and node connectivity details in one view |
| Live P2P Traffic | `/dashboard/traffic` | Real-time P2P bandwidth - bytes in/out per peer, send/receive rates, channel breakdowns, and a live-updating SSE stream with traffic history graphs |
Each page also exposes a JSON API under /dashboard/api/* for programmatic access (see RPC.md for the full list).
See RPC.md for the full endpoint reference. Summary of route families:
| Method | Path | Source |
|---|---|---|
| GET | `/health` | Gateway |
| GET | `/dashboard` | Gateway (web UI) |
| GET | `/dashboard/api/*` | Gateway (JSON APIs for dashboard) |
| GET | `/v1/analytics/txs` | ClickHouse |
| GET | `/v1/analytics/txs/:hash` | ClickHouse |
| GET | `/v1/analytics/blocks/:height/events` | ClickHouse |
| GET | `/v1/analytics/stats/summary` | ClickHouse |
| GET | `/v1/analytics/stats/tokens-by-model` | ClickHouse |
| GET | `/v1/analytics/stats/developer/:address` | ClickHouse |
| GET | `/v1/analytics/indexer/status` | ClickHouse |
| Prefix | Upstream | Description |
|---|---|---|
| `/v1/*` | Gonka API (:9000) | Participants, models, epochs, pricing, bridge, PoC, chat completions |
| `/v2/*` | Gonka API (:9000) | MLNode callback aliases |
| `/status`, `/block`, `/tx_search`, ... | Chain RPC (:26657) | Full Tendermint JSON-RPC |
| `/rpc/*` | Chain RPC (:26657) | Alternative Tendermint prefix |
| `/chain-rpc/*` | Chain RPC (:26657) | Production-style alias |
| `/cosmos/*`, `/ibc/*`, `/cosmwasm/*` | Chain REST (:1317) | Cosmos LCD endpoints |
| `/productscience/*`, `/inference/*` | Chain REST (:1317) | Gonka module REST queries |
| `/chain-api/*` | Chain REST (:1317) | Production-style alias (strips prefix) |
| Port | Service | Exposed to Host | Configurable Via |
|---|---|---|---|
| 8080 | Feather gateway | Yes | FEATHER_PORT |
| 26656 | Chain P2P | Yes | P2P_PORT |
| 8123 | ClickHouse HTTP | Yes | CH_HTTP_PORT |
| 26657 | Chain CometBFT RPC | Internal only | - |
| 1317 | Chain Cosmos REST | Internal only | - |
| 9090 | Chain gRPC | Internal only | - |
| 9000 | Gonka API public | Internal only | - |
| 9200 | Gonka API admin | Internal only | - |
To expose an internal port to the host, add a ports: mapping in docker-compose.yml.
To restrict a host-exposed port to localhost, prefix the mapping with 127.0.0.1: (e.g. 127.0.0.1:8080:8080).
```bash
# From your laptop - rsyncs code, rebuilds only the gateway container
./deploy/sync-gateway-only.sh

# Override target:
FEATHER_DEPLOY_REMOTE=root@host FEATHER_DEPLOY_PATH=/opt/feather ./deploy/sync-gateway-only.sh
```

This does not restart chain-node, api, or clickhouse.
```bash
make docker-up             # build and start all services
make docker-down           # stop all services
make docker-ps             # check container status
make docker-logs           # tail all container logs
make docker-logs-feather   # tail gateway logs only
make docker-logs-chain     # tail chain node logs only
make docker-logs-api       # tail API node logs only
```

```bash
# Probe every route family (read-only, does not mutate chain state)
./scripts/test-feather-endpoints.sh

# Against a remote instance
FEATHER_BASE_URL=http://host:8080 ./scripts/test-feather-endpoints.sh
```

```bash
# Build locally (requires Go 1.23+)
make build

# Run locally (needs chain-node, api, clickhouse accessible)
CHAIN_RPC_URL=http://localhost:26657 \
API_URL=http://localhost:9000 \
CLICKHOUSE_DSN=clickhouse://feather:feather@localhost:9000/feather \
make run

# Run tests
make test

# Tidy modules
make tidy
```

- No per-surface toggle. All API surfaces (Tendermint RPC, Cosmos REST, Gonka API, Dashboard, Analytics) are registered unconditionally. You cannot selectively disable individual surfaces via config today.
- State-sync `tx_search` gap. CometBFT's built-in `tx_search` only indexes events from blocks processed after the node started. The ClickHouse-backed `/tx_search` and `/v1/analytics/txs` endpoints fill this gap with full historical coverage.
- Config overlay timing. The chain entrypoint patches `config.toml`/`app.toml` after `init-docker.sh` starts. On the very first launch, some settings take full effect only after the container is restarted once (the config volume persists, so subsequent starts apply everything immediately).
Feather is open source and we actively encourage the community to help make it better. Run your own Feather node, test it, break it, and report what you find. Bug reports, performance observations, and pull requests are all welcome.
A few ways you can help:
- Run a Feather node and report any sync issues, crashes, or unexpected behavior.
- Point your tools at your own node (or at `rpc.gonka.gg`) instead of hitting genesis nodes directly. Every self-hosted node reduces pressure on core infrastructure and makes the network more resilient.
- File issues for anything that doesn't work as documented - especially around state-sync, indexer gaps, or cache correctness.
- Submit PRs for fixes, new analytics queries, or missing config knobs.
The healthier the RPC layer, the healthier the chain. Every Feather instance running is one less client hammering genesis nodes.
MIT

