
Solana RPC Proxy

A high-performance reverse proxy for Solana RPC nodes with load balancing, rate limiting, health monitoring, and failover handling.

Features

  • HTTP and WebSocket proxying
  • Multiple load balancing strategies (round-robin, weighted)
  • Global and per-node rate limiting
  • Health monitoring for backend nodes
  • Automatic failover for unhealthy nodes
  • Metrics endpoint (Prometheus format)
  • Structured JSON logging
  • IP whitelisting and API key authentication

Quick Start

Prerequisites

  • Rust 1.73+
  • Docker (optional)

Building from Source

cargo build --release

Running

./target/release/solana-rpc-proxy config.yaml

Docker

docker-compose up --build
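The compose file in the repository defines how the container is built and which ports it exposes. As a rough sketch only (the actual docker-compose.yml in the repo is authoritative; the ports and mount path below are assumptions matching the default endpoints):

```yaml
# Illustrative sketch -- see the repository's docker-compose.yml for the real definition.
services:
  solana-rpc-proxy:
    build: .
    ports:
      - "8080:8080"   # HTTP proxy (assumed default)
      - "8081:8081"   # WebSocket proxy (assumed default)
    volumes:
      - ./config.yaml:/app/config.yaml:ro   # mount your configuration; the container path is an assumption
```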

Configuration

See config.example.yaml for a sample configuration file. Key sections:

  • server: HTTP/WebSocket binding addresses
  • backends: List of Solana RPC nodes
  • rate_limiting: Global and burst rate limits
  • security: IP restrictions and API keys
  • logging: Log level and format
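A rough sketch of how these sections fit together (the `server`, `backends`, `rate_limiting`, and `security` fields follow the mainnet example below; the `logging` keys are assumptions, so treat config.example.yaml as authoritative):

```yaml
server:
  http_addr: "0.0.0.0:8080"   # HTTP proxy listener
  ws_addr: "0.0.0.0:8081"     # WebSocket proxy listener

backends:
  - url: "https://api.mainnet-beta.solana.com"
    weight: 1
    max_qps: 50

rate_limiting:
  global_max_qps: 100
  burst_size: 20

security:
  require_authentication: false

logging:
  level: "info"    # assumed key
  format: "json"   # assumed key
```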

Endpoints

  • HTTP Proxy: http://localhost:8080
  • WebSocket Proxy: ws://localhost:8081
  • Metrics: http://localhost:8080/metrics

Mainnet Configuration Example

server:
  http_addr: "0.0.0.0:80"
  ws_addr: "0.0.0.0:81"

backends:
  - url: "https://api.mainnet-beta.solana.com"
    weight: 3
    max_qps: 50
  - url: "https://solana-api.projectserum.com"
    weight: 2
    max_qps: 75
  - url: "https://ssc-dao.genesysgo.net"
    weight: 2
    max_qps: 100

rate_limiting:
  global_max_qps: 500
  burst_size: 50

security:
  allowed_ips:
    - "192.168.1.0/24"
    - "10.0.0.0/8"
  api_keys:
    - "your-secure-api-key"
  require_authentication: true
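When `require_authentication` is enabled, each request must carry one of the configured API keys. How the key is presented (header name, query parameter) is defined by the proxy itself; the `X-API-Key` header below is only a placeholder to show the shape of an authenticated request against the example config above:

```bash
# "X-API-Key" is a placeholder header name -- check the proxy's configuration/source for the real one.
curl -X POST \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-secure-api-key" \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' \
  http://localhost:80   # port 80 matches http_addr in the example above
```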
## Testing with Mainnet

To test the proxy against real Solana mainnet nodes:

1. Update your configuration with real mainnet RPC endpoints.
2. Start the proxy:

   ```bash
   cargo run --release -- config.yaml
   ```

3. Send test requests:

   ```bash
   # HTTP request
   curl -X POST -H "Content-Type: application/json" \
     -d '{"jsonrpc":"2.0","id":1,"method":"getVersion"}' \
     http://localhost:8080

   # WebSocket connection
   wscat -c ws://localhost:8081
   ```

4. Monitor the logs for request handling and backend selection.
5. Check metrics at `http://localhost:8080/metrics`.
6. Simulate failures by taking backends offline to verify failover.

For production use:

  • Use proper TLS termination
  • Set appropriate rate limits
  • Monitor performance metrics
  • Rotate API keys regularly
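Because the metrics endpoint serves standard Prometheus exposition format at `/metrics` (Prometheus's default path), a plain scrape job pointed at the proxy covers the monitoring item above; the job name and target are placeholders:

```yaml
scrape_configs:
  - job_name: "solana-rpc-proxy"
    static_configs:
      - targets: ["localhost:8080"]   # adjust to wherever the HTTP listener is bound
```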
    • "192.168.1.0/24"
    • "10.0.0.0/8" api_keys:
    • "your-secure-api-key" require_authentication: true
- Metrics: `http://localhost:8080/metrics`

## Load Balancing Strategies
- `RoundRobin`: cycles through backends, distributing requests evenly
- `Weighted`: distributes requests in proportion to each backend's configured weight
- `LeastConnections`: routes each request to the backend with the fewest in-flight requests
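To illustrate how weighted selection behaves, here is a minimal, self-contained sketch of smooth weighted round-robin (the algorithm popularized by nginx); it is not this proxy's actual implementation, and the struct fields and URLs are illustrative:

```rust
struct Backend {
    url: &'static str,
    weight: i64,
    current: i64, // running score used by the selection algorithm
}

/// Pick the next backend: heavier weights win proportionally more often,
/// while picks stay evenly interleaved rather than bursting on one node.
fn pick(backends: &mut [Backend]) -> Option<usize> {
    if backends.is_empty() {
        return None;
    }
    let total: i64 = backends.iter().map(|b| b.weight).sum();
    // Every backend's score grows by its own weight each round...
    for b in backends.iter_mut() {
        b.current += b.weight;
    }
    // ...the backend with the highest score is chosen...
    let best = backends
        .iter()
        .enumerate()
        .max_by_key(|(_, b)| b.current)
        .map(|(i, _)| i)?;
    // ...and "pays back" the total weight so the others catch up over time.
    backends[best].current -= total;
    Some(best)
}

fn main() {
    let mut backends = vec![
        Backend { url: "https://api.mainnet-beta.solana.com", weight: 3, current: 0 },
        Backend { url: "https://backup-rpc.example.com", weight: 1, current: 0 },
    ];
    // Over four picks, the weight-3 backend is selected three times and the weight-1 backend once.
    for _ in 0..4 {
        let i = pick(&mut backends).unwrap();
        println!("{}", backends[i].url);
    }
}
```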

## Health Monitoring
The proxy continuously checks node health:
- HTTP health checks at `/health`
- Latency measurements
- Automatic removal of unhealthy nodes
- Periodic rechecking of failed nodes
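A minimal sketch of what such a background check loop can look like (assuming `tokio` and `reqwest` as dependencies; the probe path, interval, and timeout are illustrative, not the proxy's actual values):

```rust
use std::time::{Duration, Instant};

// Periodically probe each backend, recording health and latency.
// A real implementation would update shared state so the load balancer
// skips unhealthy nodes and keeps rechecking them until they recover.
async fn health_check_loop(backends: Vec<String>) {
    let client = reqwest::Client::new();
    let mut ticker = tokio::time::interval(Duration::from_secs(10));
    loop {
        ticker.tick().await;
        for url in &backends {
            let started = Instant::now();
            let healthy = match client
                .get(format!("{url}/health")) // probe path is an assumption
                .timeout(Duration::from_secs(2))
                .send()
                .await
            {
                Ok(resp) => resp.status().is_success(),
                Err(_) => false,
            };
            println!("{url}: healthy={healthy} latency={:?}", started.elapsed());
        }
    }
}

#[tokio::main]
async fn main() {
    health_check_loop(vec!["https://api.mainnet-beta.solana.com".to_string()]).await;
}
```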

## Rate Limiting
- Global rate limit applied per client IP
- Per-node rate limits
- Burst capacity support
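The "steady rate plus burst" behaviour is what a token bucket gives you. Below is a minimal std-only sketch of the idea (not the proxy's actual limiter); in the proxy, one bucket per client IP plus one per backend would yield the global and per-node limits described above:

```rust
use std::time::Instant;

struct TokenBucket {
    capacity: f64,       // burst_size: the most tokens that can accumulate
    tokens: f64,         // tokens currently available
    refill_per_sec: f64, // steady-state rate, e.g. global_max_qps
    last_refill: Instant,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        Self { capacity: burst, tokens: burst, refill_per_sec: rate, last_refill: Instant::now() }
    }

    /// Returns true if the request may proceed, false if it should be rejected (e.g. HTTP 429).
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.last_refill = now;
        // Refill according to elapsed time, capped at the burst capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Mirrors the example config: 500 requests/second steady rate, bursts of up to 50.
    let mut limiter = TokenBucket::new(500.0, 50.0);
    let allowed = (0..100).filter(|_| limiter.try_acquire()).count();
    println!("{allowed} of 100 back-to-back requests allowed"); // roughly the burst size
}
```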

## License
Apache 2.0
