@gpustack

GPUStack

Simple, scalable AI model deployment on GPU clusters

Pinned

  1. gpustack Public

    Simple, scalable AI model deployment on GPU clusters

    Python · 3k stars · 301 forks

  2. gguf-parser-go Public

    Review and check GGUF files, and estimate memory usage and maximum tokens per second.

    Go · 176 stars · 18 forks

  3. llama-box Public

    LM inference server implementation based on *.cpp.

    C++ · 226 stars · 20 forks

  4. vox-box Public

    A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends.

    Python · 127 stars · 18 forks
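
Because vox-box exposes OpenAI-compatible audio endpoints, standard OpenAI client libraries can be pointed at a local instance. Below is a minimal sketch using the openai Python SDK; the base URL, API key, model names, and voice id are illustrative assumptions, not values documented on this page.

```python
# Minimal sketch of calling an OpenAI-compatible audio server such as vox-box.
# ASSUMPTIONS: the endpoint, API key, model names, and voice id below are
# placeholders; substitute the values from your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local vox-box endpoint
    api_key="not-needed-locally",         # assumed: local server without auth
)

# Speech-to-text: transcribe an audio file through the OpenAI-style endpoint.
with open("sample.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-large-v3",  # assumed Whisper model name
        file=audio_file,
    )
print(transcript.text)

# Text-to-speech: synthesize speech and write the audio to disk.
speech = client.audio.speech.create(
    model="cosyvoice",  # assumed CosyVoice model name
    voice="alloy",      # assumed voice id
    input="Hello from GPUStack!",
)
speech.write_to_file("hello.mp3")
```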

Repositories

Showing 10 of 10 repositories
  • gpustack-ui Public
    TypeScript · 39 stars · Apache-2.0 · 28 forks · 0 issues · 0 pull requests · Updated Jun 24, 2025
  • llama-box Public

    LM inference server implementation based on *.cpp.

    C++ · 226 stars · MIT · 20 forks · 4 issues · 0 pull requests · Updated Jun 24, 2025
  • gpustack Public

    Simple, scalable AI model deployment on GPU clusters

    Python · 2,969 stars · Apache-2.0 · 301 forks · 422 issues (2 need help) · 12 pull requests · Updated Jun 24, 2025
  • Python · 0 stars · Apache-2.0 · 1 fork · 5 issues · 0 pull requests · Updated Jun 24, 2025
  • gguf-parser-go Public

    Review and check GGUF files, and estimate memory usage and maximum tokens per second.

    Go · 176 stars · MIT · 18 forks · 0 issues · 0 pull requests · Updated Jun 18, 2025
  • .github Public

    Meta GitHub repository for all GPUStack repositories.

    Dockerfile · 0 stars · Apache-2.0 · 0 forks · 0 issues · 0 pull requests · Updated Jun 11, 2025
  • HTML · 0 stars · 1 fork · 0 issues · 0 pull requests · Updated Jun 10, 2025
  • vox-box Public

    A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends.

    Python · 127 stars · Apache-2.0 · 18 forks · 11 issues · 0 pull requests · Updated Jun 8, 2025
  • fastfetch Public Forked from fastfetch-cli/fastfetch

    Like neofetch, but much faster because it is written mostly in C.

    C · 1 star · MIT · 533 forks · 0 issues · 0 pull requests · Updated Oct 24, 2024
  • gguf-packer-go Public

    Deliver LLMs of GGUF format via Dockerfile.

    Go · 13 stars · MIT · 3 forks · 0 issues · 0 pull requests · Updated Oct 24, 2024

People

This organization has no public members. You must be a member to see who’s a part of this organization.