Skills for AI agents to manage GPU workloads on Runpod.
Complete knowledge of the runpod-flash framework: SDK, CLI, architecture, deployment, and codebase. Use when working with runpod-flash code, writing @remote functions, configuring resources, debugging deployments, or understanding the framework's internals.
Manage GPU pods, serverless endpoints, templates, volumes, and models.
npx skills add runpod/skills

Works with Claude Code, Cursor, GitHub Copilot, Windsurf, Cline, and 17+ other AI agents.
Verify your setup:

runpodctl doctor

Ask your AI agent:
- "Create a pod with an RTX 4090"
- "List my pods"
- "What GPUs are available?"
- "Show my account balance"
- "Deploy a serverless endpoint"
Access exposed ports on your pod:
https://<pod-id>-<port>.proxy.runpod.net
Example: https://abc123xyz-8888.proxy.runpod.net
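The proxy URL can be assembled from the pod ID and the exposed port; a minimal sketch using the hypothetical values from the example above:

```shell
# Hypothetical pod ID and port (8888 is a common Jupyter port).
POD_ID="abc123xyz"
PORT=8888

# Build the Runpod proxy URL for this pod/port pair.
PROXY_URL="https://${POD_ID}-${PORT}.proxy.runpod.net"
echo "$PROXY_URL"
# → https://abc123xyz-8888.proxy.runpod.net
```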
https://api.runpod.ai/v2/<endpoint-id>/run # Async request
https://api.runpod.ai/v2/<endpoint-id>/runsync # Sync request
https://api.runpod.ai/v2/<endpoint-id>/health # Health check
https://api.runpod.ai/v2/<endpoint-id>/status/<job-id> # Job status
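Calling these endpoints with curl might look like the sketch below. The endpoint ID and input payload are placeholders; requests authenticate with a Bearer API key, and serverless jobs take their arguments under an "input" key:

```shell
# Hypothetical endpoint ID; substitute your own. Requires RUNPOD_API_KEY.
ENDPOINT_ID="abc123xyz"
BASE_URL="https://api.runpod.ai/v2/${ENDPOINT_ID}"

if [ -n "${RUNPOD_API_KEY:-}" ]; then
  # Submit an async job; the JSON response includes the job "id",
  # which you can poll at ${BASE_URL}/status/<job-id>.
  curl -s -X POST "${BASE_URL}/run" \
    -H "Authorization: Bearer ${RUNPOD_API_KEY}" \
    -H "Content-Type: application/json" \
    -d '{"input": {"prompt": "hello"}}'

  # Check endpoint health (worker and queue state).
  curl -s "${BASE_URL}/health" \
    -H "Authorization: Bearer ${RUNPOD_API_KEY}"
fi
```

Use /runsync instead of /run when you want the call to block until the job finishes.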
flash/
└── SKILL.md
runpodctl/
└── SKILL.md
Apache-2.0