Distributed GPU compute node for the Gelotto AI image generation network.
Power Node lets you contribute GPU compute power to generate AI images and earn rewards. It connects to the Gelotto network, claims generation jobs, and uses your NVIDIA GPU to create images.
- NVIDIA GPU with 8GB+ VRAM (RTX 3060, RTX 4060, or newer)
- Linux with NVIDIA drivers installed
- Python 3.10+
- CUDA toolkit (for GGUF mode, to build stable-diffusion.cpp)
- Disk space:
  - GGUF mode: ~15GB (base) + ~500MB (face-swap)
  - PyTorch mode: ~40GB (base) + ~29GB (video, optional) + ~500MB (face-swap)
| GPU | VRAM | Mode | Video | Disk Space | Performance |
|---|---|---|---|---|---|
| RTX 3060 | 12GB | GGUF | No | ~15GB | Good |
| RTX 4060 | 8GB | GGUF | No | ~10GB | Good |
| RTX 4070/Ti | 12GB | GGUF | No | ~15GB | Better |
| RTX 4080 | 16GB | GGUF | No | ~15GB | Better |
| RTX 4090 | 24GB | GGUF | No | ~15GB | Best |
| RTX 5070 Ti | 16GB | PyTorch | Yes | ~70GB | Best |
| RTX 5080 | 16GB | PyTorch | Yes | ~70GB | Best |
| RTX 5090 | 32GB | PyTorch | Yes | ~70GB | Best |
Notes:
- RTX 50-series (Blackwell) GPUs require PyTorch mode due to CUDA compute capability 12.0
- Video generation requires PyTorch mode with 12GB+ VRAM (automatically detected)
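The mode selection described in these notes can be sketched as a small decision function (the function name and return shape are hypothetical; the installer's real detection logic may differ):

```python
def pick_mode(vram_gb: int, compute_capability: float) -> dict:
    """Illustrative mode/feature selection based on the table above."""
    # RTX 50-series (Blackwell, compute capability 12.0) requires PyTorch mode
    mode = "pytorch" if compute_capability >= 12.0 else "gguf"
    return {
        "mode": mode,
        # Video generation: PyTorch mode with 12GB+ VRAM
        "video": mode == "pytorch" and vram_gb >= 12,
        # Face-swap: any mode with 6GB+ VRAM
        "faceswap": vram_gb >= 6,
    }

print(pick_mode(12, 8.6))   # e.g. RTX 3060: GGUF, no video
print(pick_mode(32, 12.0))  # e.g. RTX 5090: PyTorch, video enabled
```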
Visit https://gelotto.io/workers and register with a unique hostname.
You'll receive:
- Worker ID - Your unique identifier
- API Key - Your authentication token (save this securely!)
Option A: One-line installer (recommended)

```bash
curl -sSL https://raw.githubusercontent.com/Gelotto/power-node/main/install.sh | bash
```

Option B: Clone and install

```bash
git clone https://github.com/Gelotto/power-node.git
cd power-node
./install.sh
```

The installer will:
- Detect your GPU and VRAM
- Download the appropriate model (~9GB for GGUF, ~31GB for PyTorch)
- Auto-download optional models based on GPU capability:
  - Video generation (Wan2.1, ~29GB) - PyTorch mode with 12GB+ VRAM
  - Face-swap (~500MB) - Any mode with 6GB+ VRAM
- Set up Python environment and dependencies
- Create configuration files
Installation time: 10-30 minutes depending on internet speed (longer if video model is downloaded)
Skip optional models with these flags:
```bash
# Skip video model only
curl -sSL ... | bash -s -- --no-video

# Skip face-swap only
curl -sSL ... | bash -s -- --no-faceswap

# Skip all optional models (minimal install)
curl -sSL ... | bash -s -- --minimal
```

The installer will warn if free disk space is below 50GB when downloading the video model.
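The 50GB free-space check can be sketched like this (a hypothetical helper for illustration, not the installer's actual code):

```python
import shutil

def enough_disk_for_video(path: str = ".", required_gb: int = 50) -> bool:
    """Return True if the filesystem at `path` has at least `required_gb` free."""
    free_gb = shutil.disk_usage(path).free / 10**9
    return free_gb >= required_gb
```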
Edit `~/.power-node/config/config.yaml` and add your credentials:

```yaml
api:
  key: "YOUR_API_KEY_HERE"
worker:
  id: "YOUR_WORKER_ID_HERE"
```

Then start the node:

```bash
~/.power-node/start.sh
```

You should see:

```
Starting worker...
Using credentials - Worker ID: abc123...
API Key: wk_xxxxxxxxxx...
Starting Python inference service...
Worker fully initialized!
Starting job loop (polling every 5s)...
```
To start automatically on boot:

```bash
sudo cp ~/.power-node/power-node.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable power-node
sudo systemctl start power-node
```

Check status:

```bash
sudo systemctl status power-node
sudo journalctl -u power-node -f
```

- Register - Create an account at gelotto.io/workers
- Connect - Power Node connects to the Gelotto API
- Claim Jobs - The node polls for available generation jobs
- Generate - Your GPU generates images using the Z-Image model
- Upload - Results are uploaded back to the network
- Earn - Receive rewards for completed jobs
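The steps above form the node's core loop. A minimal sketch of the claim → generate → upload cycle (the helper names, and the idea of passing the API and GPU calls in as functions, are illustrative — this is not the actual Go implementation):

```python
import time

POLL_INTERVAL = 5  # seconds, matching the default poll_interval

def run_once(claim, generate, upload) -> bool:
    """One pass of the job loop: claim a job, generate, upload the result.

    claim/generate/upload stand in for the node's real API and GPU calls.
    """
    job = claim()           # poll the network for an available job
    if job is None:
        return False        # nothing to do this poll
    result = generate(job)  # run inference on the local GPU
    upload(job, result)     # send the image back to the network
    return True

def job_loop(claim, generate, upload):
    while True:
        run_once(claim, generate, upload)
        time.sleep(POLL_INTERVAL)
```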
```yaml
api:
  url: https://api.gelotto.io   # API endpoint
  key: ""                       # Your API key
model:
  service_mode: gguf            # gguf or pytorch (auto-detected)
  vram_gb: 8                    # Your GPU VRAM
worker:
  id: ""                        # Your worker ID
  hostname: ""                  # Auto-detected if empty
  gpu_info: ""                  # Auto-detected if empty
  poll_interval: 5s             # Job polling interval
  heartbeat_interval: 30s       # Heartbeat interval
python:
  executable: ~/.power-node/venv/bin/python3
  script_path: ~/.power-node/scripts/inference.py
  script_args: []               # Model paths (auto-configured)
  env:
    PYTORCH_CUDA_ALLOC_CONF: "expandable_segments:True"
```

Ensure NVIDIA drivers are installed:
```bash
nvidia-smi
```

The node automatically configures itself for your GPU's VRAM. For 8GB GPUs it:
- Uses quantized Q4_0 models
- Enables VAE tiling
- Offloads components to CPU
If you still get OOM errors, try reducing resolution or closing other GPU applications.
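The 8GB fallbacks above amount to a memory-budget policy. A simplified sketch of how such a policy might look (the function and field names are hypothetical, for illustration only):

```python
def low_vram_settings(vram_gb: int) -> dict:
    """Illustrative mapping from VRAM to the memory-saving measures above."""
    tight = vram_gb <= 8
    return {
        "quantization": "Q4_0" if tight else None,  # quantized model weights
        "vae_tiling": tight,    # decode the VAE in tiles to cap peak VRAM
        "cpu_offload": tight,   # keep idle components in system RAM
    }
```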
- Check that your API key and worker ID in config.yaml are correct
- Verify network connectivity:

```bash
# Check that the API is reachable from your network
curl https://api.gelotto.io/health
```
If the model download was interrupted:

```bash
rm -rf ~/.power-node/models
~/.power-node/install.sh
```

Check the Python environment:

```bash
source ~/.power-node/venv/bin/activate
python3 -c "import torch; print(torch.cuda.is_available())"
```

Building from source requires Go 1.21+:
```bash
git clone https://github.com/Gelotto/power-node.git
cd power-node
go build -o bin/power-node ./cmd/power-node
```

| Variable | Description | Default |
|---|---|---|
| `POWER_NODE_DIR` | Installation directory | `~/.power-node` |
| `API_URL` | Backend API URL | https://api.gelotto.io |
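Resolving these overrides follows the usual environment-with-defaults pattern; for example (illustrative, not the node's actual code):

```python
import os

def node_config(env=None) -> dict:
    """Resolve the overrides from the table above, falling back to defaults."""
    env = os.environ if env is None else env
    return {
        "dir": env.get("POWER_NODE_DIR", os.path.expanduser("~/.power-node")),
        "api_url": env.get("API_URL", "https://api.gelotto.io"),
    }
```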
MIT License - see LICENSE for details.