Local compute and endpoint manager for private LLM inference - local llama.cpp, Vast.ai GPUs, managed providers (Together AI), transparent proxy.
Updated May 5, 2026 - Python
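As a rough illustration of the "transparent proxy" idea in the listing above, here is a minimal Python sketch that sends the same OpenAI-compatible chat request to a local llama.cpp server, a rented Vast.ai instance, or a managed provider (Together AI). The backend URLs, model names, environment variable, and fallback logic are assumptions for illustration, not the project's actual implementation.

```python
# Hypothetical sketch: route one OpenAI-compatible chat request to one of
# several backends (local llama.cpp server, Vast.ai instance, Together AI).
# All addresses, model names, and the env-var name are illustrative only.
import os
import requests

BACKENDS = {
    # llama.cpp's bundled server exposes an OpenAI-compatible API under /v1
    "local": {"base_url": "http://127.0.0.1:8080/v1", "api_key": None},
    # A rented Vast.ai GPU instance running the same server (placeholder address)
    "vast": {"base_url": "http://<vast-instance-ip>:8080/v1", "api_key": None},
    # Managed provider (Together AI) with its hosted OpenAI-compatible endpoint
    "together": {
        "base_url": "https://api.together.xyz/v1",
        "api_key": os.environ.get("TOGETHER_API_KEY"),
    },
}

def chat(backend: str, model: str, prompt: str) -> str:
    """Send a single chat completion to the chosen backend and return the reply text."""
    cfg = BACKENDS[backend]
    headers = {"Content-Type": "application/json"}
    if cfg["api_key"]:
        headers["Authorization"] = f"Bearer {cfg['api_key']}"
    resp = requests.post(
        f"{cfg['base_url']}/chat/completions",
        headers=headers,
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Example: prefer the local llama.cpp server, fall back to the managed provider.
    question = "Summarize what a transparent proxy does."
    try:
        print(chat("local", "llama-3-8b-instruct", question))
    except requests.RequestException:
        print(chat("together", "meta-llama/Llama-3-8b-chat-hf", question))
```

Because every backend speaks the same OpenAI-compatible API, client code only changes the base URL and credentials; a proxy of this kind can make that switch invisible to the caller.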
GhostNexus provider node — share your GPU and earn on the decentralized GPU cloud