A Celestia Data Availability (DA) proxy that exposes the canonical JSON RPC, intercepting and verifiably encrypting sensitive data before submission to the public DA network, and decrypting it on retrieval. Non-sensitive calls pass through unmodified.
Verifiable encryption is presently implemented via an SP1 Zero-Knowledge Proof (ZKP), with additional proof systems planned.
Jump to a section:
- Send requests to this service: Interact
- Spin up an instance of the service: Operate
- Build & troubleshoot: Develop
Presently, all HTTP requests to the proxy are transparently proxied to an upstream Celestia node. Interception logic handles these JSON RPC methods:
- `blob.Submit`: encrypts the data before the proxy submits a signed transaction to the upstream gRPC app endpoint.
- `blob.Get` and `blob.GetAll`: the proxy verifies the Verifiable Encryption proof on the result and decrypts it before forwarding to the client.
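The routing decision above can be sketched as follows. This is an illustration only: the real implementation is in Rust, and `is_intercepted` is a hypothetical helper name.

```python
# Hypothetical sketch of the proxy's routing decision (not the actual Rust code).
INTERCEPTED_METHODS = {"blob.Submit", "blob.Get", "blob.GetAll"}

def is_intercepted(method: str) -> bool:
    """True if the JSON RPC method gets encryption/decryption handling."""
    return method in INTERCEPTED_METHODS

# Any other method is transparently proxied to the upstream Celestia node.
```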
At time of writing, the following limitations apply. It's possible to change them, but doing so requires upstream involvement:
- Max blob size on Celestia is presently ~2MB
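A client can guard against the ~2MB limit before submitting. The constant and helper below are illustrative, not part of the proxy's API, and encryption/proof metadata adds overhead on top of the raw payload:

```python
MAX_BLOB_BYTES = 2 * 1024 * 1024  # approximate Celestia max blob size noted above

def fits_in_blob(data: bytes) -> bool:
    # Encryption and proof metadata add overhead, so stay comfortably under the cap.
    return len(data) <= MAX_BLOB_BYTES
```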
Please open an issue if you have any requests!
First you need to configure your environment and nodes.
The PDA proxy depends on a connection to:
- A (self-)hosted Celestia Data Availability (DA) Node and Consensus App Node to submit and retrieve (verifiably encrypted) blob data.
  - QuickNode offers easy integration for both nodes at one endpoint, with token auth supported.
- (Optional) The Succinct prover network as a provider to generate Zero-Knowledge Proofs (ZKPs) of data existing on Celestia. See the ZKP program for details on what is proven.

Then any HTTP/1 client works to send Celestia JSON RPC calls to the proxy:
# Proxy running on 127.0.0.1:26657
# See: <https://mocha.celenium.io/blob?commitment=S2iIifIPdAjQ33KPeyfAga26FSF3IL11WsCGtJKSOTA=&hash=AAAAAAAAAAAAAAAAAAAAAAAAAFHMGnPWX5X2veY=&height=4499999>
source .env
# blob.Get
curl -H "Content-Type: application/json" -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" -X POST \
--data '{ "id": 1, "jsonrpc": "2.0", "method": "blob.Get", "params": [ 4499999, "AAAAAAAAAAAAAAAAAAAAAAAAAFHMGnPWX5X2veY=", "S2iIifIPdAjQ33KPeyfAga26FSF3IL11WsCGtJKSOTA="] }' \
$PDA_SOCKET
# blob.GetAll
curl -H "Content-Type: application/json" -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" -X POST \
--data '{ "id": 1, "jsonrpc": "2.0", "method": "blob.GetAll", "params": [ 4499999, [ "AAAAAAAAAAAAAAAAAAAAAAAAAFHMGnPWX5X2veY=" ] ] }' \
$PDA_SOCKET
# blob.Submit (dummy data)
# Note: send "{}" as an empty `tx_config` object, so the node uses its default key to sign & submit to Celestia
curl -H "Content-Type: application/json" -H "Authorization: Bearer $CELESTIA_NODE_WRITE_TOKEN" -X POST \
--data '{ "id": 1, "jsonrpc": "2.0", "method": "blob.Submit", "params": [ [ { "namespace": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAMJ/xGlNMdE=", "data": "DEADB33F", "share_version": 0, "commitment": "aHlbp+J9yub6hw/uhK6dP8hBLR2mFy78XNRRdLf2794=", "index": -1 } ], { } ] }' \
https://$PDA_SOCKET \
--verbose \
--insecure
# ^^^^ DO NOT use insecure TLS in real scenarios!
# blob.Submit (example input ~1.5MB)
cd scripts
./test_example_data_file_via_curl.sh
Celestia has many API client libraries to build around a PDA proxy.
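For example, the curl payloads above can be built programmatically. This minimal helper only mirrors the JSON RPC 2.0 envelope; any client library provides its own equivalent:

```python
import json

def rpc_payload(method: str, params: list, req_id: int = 1) -> str:
    """Build the JSON RPC 2.0 envelope shown in the curl examples above."""
    return json.dumps(
        {"id": req_id, "jsonrpc": "2.0", "method": method, "params": params}
    )

# Same blob.Get call as the curl example:
payload = rpc_payload(
    "blob.Get",
    [4499999,
     "AAAAAAAAAAAAAAAAAAAAAAAAAFHMGnPWX5X2veY=",
     "S2iIifIPdAjQ33KPeyfAga26FSF3IL11WsCGtJKSOTA="],
)
```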
sequenceDiagram
participant JSON RPC Client
participant PDA Proxy
participant Celestia Node
JSON RPC Client->>+PDA Proxy: blob.Submit(blobs, options)<br>{AUTH_TOKEN in header}
PDA Proxy->>PDA Proxy: Job Processing...<br>{If no DB entry, start new zkVM Job}
PDA Proxy->>-JSON RPC Client: Response{"Call back"}
PDA Proxy->>PDA Proxy: ...Job runs to completion...
JSON RPC Client->>+PDA Proxy: blob.Submit(blobs, options)<br>{AUTH_TOKEN in header}
PDA Proxy->>PDA Proxy: Query Job DB<br>Done!<br>{Job Result cached}
PDA Proxy->>Celestia Node: blob.Submit(V. Encrypt. blobs, options)
Celestia Node->>PDA Proxy: Response{Inclusion Block Height}
PDA Proxy->>-JSON RPC Client: Response{Inclusion Block Height}
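Per the diagram above, a client may need to re-send `blob.Submit` until the proving job completes. A rough sketch of that loop, where the `submit` callable and the response shapes are hypothetical stand-ins:

```python
def submit_until_included(submit, max_tries=10):
    """Re-send blob.Submit until the proxy returns an inclusion height."""
    for _ in range(max_tries):
        resp = submit()
        if "height" in resp:          # job done: proxy forwarded to Celestia
            return resp["height"]
        # otherwise the proxy answered with a "call back later" response
    raise TimeoutError("proving job did not complete in time")
```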
sequenceDiagram
participant JSON RPC Client
participant PDA Proxy
participant Celestia Node
JSON RPC Client->>+PDA Proxy: blob.Get(height, namespace, commitment)
PDA Proxy->>Celestia Node: <Passthrough>
Celestia Node->>PDA Proxy: Response{namespace,data,<br>share_version,commitment,index}
PDA Proxy->>PDA Proxy: *Try* deserialize & decrypt
PDA Proxy->>-JSON RPC Client: *Success* -> Response{...,decrypted bytes,...}
PDA Proxy->>JSON RPC Client: *Failure* -> <Passthrough>
sequenceDiagram
participant JSON RPC Client
participant PDA Proxy
participant Celestia Node
JSON RPC Client->>+PDA Proxy: Request{<Anything else>}<br>{AUTH_TOKEN in header}
PDA Proxy->>Celestia Node: <Passthrough>
Celestia Node->>PDA Proxy: <Passthrough>
PDA Proxy->>-JSON RPC Client: Response{<Normal API response}
Note: the service processes a single proving job at a time:
- a single GPU is 100% utilized per job
- there is presently no way to scale across multiple GPUs
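The single-job constraint can be pictured as a lock around the GPU. This is an illustration of the behavior, not the service's actual code:

```python
import threading

_gpu = threading.Lock()  # stands in for the one GPU a proving job saturates

def run_proving_job(job):
    with _gpu:  # only one proving job holds the GPU at a time
        return job()
```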
Most users will want to pull and run this service using Docker or Podman via a container registry; see running containers.
To build and run from source, see the developing instructions.
While you can depend fully on providers for proving and DA nodes and run this proxy on a "potato" (any minimal cloud instance should do; the service itself is extremely lightweight), you likely want to self-host. With providers you must:
- Fully trust the prover with all plaintext data - thus no privacy is provided, and if using a prover marketplace, you are likely revealing that plaintext to the public, which defeats the purpose of this product.
- Fully trust the DA node to tell you the truth about DA data, as you are not validating consensus with a light node. You likely also need fail-over in case a DA node provider is unresponsive, blocking your interactions with the DA upstream.
To run a fully trustless, self-hosted set of services such that you operate your own prover and Celestia node, you need:

- A machine with a minimum of:
  - NVIDIA GPU with 20GB+ of VRAM (tested on L4), with CUDA 12+ support
  - 4+ CPU cores
  - 16GB+ RAM
  - Ports accessible (by default):
    - service listening at TODO
    - Light client (local or remote) over `26658`
    - (Optional) Succinct prover network over `443`
  - Example AWS instance: g6.xlarge (single L4 GPU + 8 vCPU)
- A Celestia Light Node installed & running, accessible on `localhost` or elsewhere. Alternatively, use an RPC provider you trust.
  - Configure and fund a Celestia wallet for the node to sign and send transactions with.
  - Generate a node JWT with `write` permissions and set it in `.env` for the proxy to use.
Required and optional settings are best configured via a `.env` file. See `example.env` for configurable items.
cp example.env .env
# edit .env
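The `.env` file is plain `KEY=VALUE` lines. A rough sketch of how such files are read (tools like docker's `--env-file` and shell `source` behave similarly, modulo quoting rules):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        env[key.strip()] = value.strip()
    return env
```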
The images are available:
# ghcr:
docker pull ghcr.io/celestiaorg/pda-proxy
# Docker hub:
docker pull celestiaorg/pda-proxy
Don't forget you need to configure your environment.
Note: the GPU host setup below is only required for self-hosting the ZK prover.
As we don't want to embed huge files, secrets, and dev-only example static files in the image, you will need to place them on the host machine in the following paths:

- Set up DNS to point to your instance, with email and domain.
- Create and update an `.env` (see configuration).
- Select a base OS image for the host that includes the CUDA Container Toolkit, or install it manually.
  - See: CUDA Container Toolkit install instructions and AWS NVIDIA docs (or your cloud host's docs for GPU base OS images).
- Run `./scripts/setup_remote_host.sh`, or see the scripts to configure the host manually in a similar way.
- ONLY for development & testing! Copy the unsafe example TLS files from `./service/static` to `/app/static` on the host.
  - You should use: `TLS_CERTS_PATH=/app/static/sample.pem TLS_KEY_PATH=/app/static/sample.rsa`

Note that scripts run on the host update the `/app/.env` file with settings specific to the Celestia node.

Logs print very important information; please read them carefully.
With a correct setup of the host, you can start up both the proxy and a local Celestia node with:
docker compose --env-file /app/.env up -d
Or manually just the proxy itself:
# if you are developing from this repo:
just docker-run
# If you are only running:
source .env
mkdir -p $PDA_DB_PATH
# Note socket assumes running "normally" with docker managed by root
docker run --rm -it \
--user $(id -u):$(id -g) \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PDA_DB_PATH:$PDA_DB_PATH \
--env-file .env \
--env RUST_LOG=pda_proxy=debug \
--network=host \
-p $PDA_PORT:$PDA_PORT \
"$DOCKER_CONTAINER_NAME"
First, some tooling is required:
- Rust & Cargo - install instructions
- SP1 zkVM Toolchain - install instructions
- Protocol Buffers (Protobuf) compiler - official examples contain install instructions
- (Optional) Just - a modern alternative to `make`
- NVIDIA compiler & container toolkit - see the SP1 hardware acceleration software requirements: https://docs.succinct.xyz/docs/sp1/generating-proofs/hardware-acceleration#software-requirements
Then:

- Clone the repo:

  git clone https://github.com/your-repo-name/pda-proxy.git
  cd pda-proxy

- Choose a Celestia Node
  - See the How-to-guides on nodes to run one yourself, or choose a provider & set it in `env`.
  - NOTE: You must have the node synced back to the oldest height you may encounter when calling this service, for it to fulfill such requests.

- Build and run the service:

  # NOT optimized; default includes debug logs printed
  just run-debug
  # Optimized build, to test realistic performance w/ INFO logs
  just run-release
There are many other helper scripts exposed in the justfile, get a list with:
# Print just recipes
just
Docker and Podman are configured in the Dockerfile to build an image that includes a few caching layers, minimizing development time & final image size; images are published where possible. To build and run in a container:
# Using just
just docker-build
just docker-run
# Manually
## Build
[docker|podman] build -t pda_proxy .
## Setup
source .env
mkdir -p $PDA_DB_PATH
## Run (example)
[docker|podman] run --rm -it -v $PDA_DB_PATH:$PDA_DB_PATH --env-file .env --env RUST_LOG=pda_proxy=debug --network=host -p $PDA_PORT:$PDA_PORT pda_proxy
Importantly, the DB should persist, and the container must have access to connect to the DA light client (likely port 26658) and Succinct network ports (HTTPS over 443).
The images are built and published for releases - see running containers for how to pull them.
Based heavily on https://github.com/celestiaorg/eq-service.