theoddden/terradev-gpu-cloud

🚀 Terradev GPU Cloud — OpenClaw Skill

Cross-cloud GPU provisioning, K8s clusters, and inference overflow for OpenClaw agents.

Local GPU maxed out? One command to burst to the cloud. Need a K8s cluster with H100s? Done. Want real-time pricing across 11+ providers? Instant.

What This Skill Does

| Capability | Example |
| --- | --- |
| GPU Price Quotes | "Find me the cheapest H100 right now" |
| Multi-Cloud Provisioning | "Spin up 4 A100s across the cheapest clouds" |
| K8s GPU Clusters | "Create a Kubernetes cluster with 8 H100 nodes" |
| Inference Deployment | "Deploy Llama 2 to a serverless endpoint" |
| HuggingFace Spaces | "Share my model on HuggingFace with one click" |
| GPU Overflow | "My local GPU is full, burst this job to cloud" |
| Instance Management | "Show me all running instances and costs" |
| Cost Optimization | "Find cheaper alternatives for my running GPUs" |

Install

Via ClawHub

clawhub install terradev-gpu-cloud

Manual

# 1. Install Terradev CLI
pip install terradev-cli

# 2. Configure at least one provider
terradev setup runpod --quick
terradev configure --provider runpod

# 3. Copy the skill folder to your OpenClaw skills directory
cp -r terradev-gpu-cloud ~/.openclaw/skills/

Demo

You: "Find me the cheapest H100 right now"

πŸ” Querying 11 providers in parallel...

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Provider     β”‚ GPU      β”‚ Price/hr β”‚ Region β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ RunPod       β”‚ H100 80G β”‚ $1.89    β”‚ US-TX  β”‚
β”‚ Lambda Labs  β”‚ H100 80G β”‚ $1.99    β”‚ US-TX  β”‚
β”‚ Vast.ai      β”‚ H100 80G β”‚ $2.15    β”‚ US-OR  β”‚
β”‚ CoreWeave    β”‚ H100 80G β”‚ $2.49    β”‚ US-NJ  β”‚
β”‚ TensorDock   β”‚ H100 80G β”‚ $2.79    β”‚ US-TX  β”‚
β”‚ AWS (spot)   β”‚ p5.xlg   β”‚ $3.21    β”‚ us-e-1 β”‚
β”‚ GCP (spot)   β”‚ a3-high  β”‚ $3.89    β”‚ us-c-1 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ’° Best price: RunPod H100 @ $1.89/hr (47% cheaper than AWS)
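The parallel fan-out above can be sketched as follows. This is an illustrative stand-in, not Terradev internals: the provider names and prices are hard-coded sample data in place of live pricing API calls.

```python
from concurrent.futures import ThreadPoolExecutor

# Sample quotes standing in for real provider pricing APIs (illustrative only).
PRICES = {
    "RunPod": 1.89,
    "Lambda Labs": 1.99,
    "Vast.ai": 2.15,
    "CoreWeave": 2.49,
}

def quote(provider):
    """Stand-in for one provider's pricing API call."""
    return provider, PRICES[provider]

def cheapest_h100():
    # Fan out all quote requests concurrently, so total latency is bounded
    # by the slowest provider rather than the sum of all round-trips.
    with ThreadPoolExecutor(max_workers=len(PRICES)) as pool:
        quotes = list(pool.map(quote, PRICES))
    return min(quotes, key=lambda q: q[1])

provider, price = cheapest_h100()
print(f"Best price: {provider} H100 @ ${price:.2f}/hr")  # Best price: RunPod H100 @ $1.89/hr
```

The same pattern scales to all 11 providers; slow or failing providers only delay (or drop out of) their own quote, not the whole query.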
You: "Create a K8s cluster with 4 H100s for training"

🔧 Creating multi-cloud GPU cluster...
   ├── Karpenter NodeClass: spot-first H100 scheduling
   ├── KEDA autoscaling: 90% GPU utilization trigger
   ├── CNI ordering: EKS v21 race condition handled
   └── Node pools: RunPod + Lambda (cheapest spots)

✅ Cluster 'training-cluster' ready in 47 seconds
   4x H100 80GB @ $1.89/hr avg = $7.56/hr total

You: "My RTX 4090 is maxed out running inference, overflow to cloud"

📊 Local GPU: RTX 4090 24GB — 98% utilized
🌊 Overflow strategy: Burst to cloud A10G (similar VRAM, $0.76/hr)

terradev provision -g A10G -n 2 --parallel 6

✅ 2x A10G provisioned on Vast.ai @ $0.76/hr
   Endpoint: ssh root@<ip> -p 22
   Run: terradev execute -i <id> -c "python serve.py"
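The overflow decision in this demo can be sketched as a simple policy: once local GPU utilization crosses a threshold, pick the cheapest cloud GPU with at least the required VRAM. The catalog, prices, and threshold here are assumptions for illustration, not Terradev's actual selection logic.

```python
# Illustrative cloud GPU catalog: (name, VRAM in GB, $/hr). Assumed values.
CLOUD_GPUS = [
    ("A10G", 24, 0.76),
    ("A100", 80, 1.29),
    ("H100", 80, 1.89),
]

def overflow_target(local_util, needed_vram_gb, threshold=0.95):
    """Return (gpu, price) to burst to, or None if local capacity suffices."""
    if local_util < threshold:
        return None  # local GPU still has headroom, no burst needed
    # Keep only GPUs with enough VRAM, then take the cheapest.
    fits = [g for g in CLOUD_GPUS if g[1] >= needed_vram_gb]
    name, _, price = min(fits, key=lambda g: g[2])
    return name, price

# A 98%-utilized 24 GB RTX 4090 bursts to the cheapest 24 GB-class GPU:
print(overflow_target(0.98, 24))  # ('A10G', 0.76)
```

Matching on VRAM rather than GPU model is what lets a maxed-out RTX 4090 (24 GB) overflow to a much cheaper A10G instead of an H100.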

BYOAPI — Your Keys Stay Local

This skill never sees or stores your API keys. All credentials remain on your machine and are passed directly to cloud providers. This is the secure way to do cloud automation.
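A minimal sketch of this pattern, assuming each provider's key lives in a local environment variable read at call time (the variable naming convention is an assumption, not documented Terradev behavior):

```python
import os

def provider_auth_header(provider):
    """Build an auth header from a key read from the local environment.
    The key is sent only to that provider; nothing is logged or persisted."""
    # Hypothetical naming scheme, e.g. RUNPOD_API_KEY, VAST_AI_API_KEY.
    var = f"{provider.upper().replace('.', '_')}_API_KEY"
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(f"Set {var} in your shell before provisioning")
    return {"Authorization": f"Bearer {key}"}
```

Because the key is resolved from your shell environment at request time, no intermediate service (including this skill) ever holds a copy of it.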

Supported Providers

RunPod · Vast.ai · AWS · GCP · Azure · Lambda Labs · CoreWeave · TensorDock · Oracle Cloud · Crusoe Cloud · DigitalOcean · HyperStack


Built with ❤️ by the Terradev team: https://github.com/theoddden/Terradev

License

MIT License — see LICENSE file for details.
