
Pinned

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch after this list)

    Python · 45.6k stars · 7k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1.2k stars · 120 forks
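
As a quick illustration of the first pinned project, here is a minimal offline-inference sketch following vLLM's documented quickstart; the model name is only an example:

```python
from vllm import LLM, SamplingParams

# Load a model and run batched offline generation; vLLM handles
# continuous batching and paged KV-cache management internally.
llm = LLM(model="facebook/opt-125m")  # example checkpoint, any supported model works
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```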

Repositories

Showing 10 of 16 repositories
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 45,599 stars · Apache-2.0 license · 7,027 forks · 1,715 issues (16 need help) · 596 PRs · Updated Apr 23, 2025
  • flash-attention Public (forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention (see the attention sketch after this list)

    Python · 63 stars · BSD-3-Clause license · 1,637 forks · 0 issues · 11 PRs · Updated Apr 23, 2025
  • vllm-ascend Public

    Community maintained hardware plugin for vLLM on Ascend

    Python · 513 stars · Apache-2.0 license · 99 forks · 101 issues · 36 PRs · Updated Apr 23, 2025
  • vllm-spyre Public

    Community maintained hardware plugin for vLLM on Spyre

    Python · 21 stars · Apache-2.0 license · 11 forks · 23 issues (3 need help) · 2 PRs · Updated Apr 23, 2025
  • ci-infra Public

    This repo hosts the code for vLLM's CI and performance benchmark infrastructure.

    HCL · 8 stars · 22 forks · 0 issues · 6 PRs · Updated Apr 23, 2025
  • FlashMLA Public (forked from deepseek-ai/FlashMLA)

    Cuda · 5 stars · MIT license · 830 forks · 0 issues · 0 PRs · Updated Apr 23, 2025
  • production-stack Public

    vLLM’s reference system for K8s-native cluster-wide deployment with community-driven performance optimization

    Python · 1,099 stars · Apache-2.0 license · 157 forks · 43 issues (2 need help) · 19 PRs · Updated Apr 23, 2025
  • aibrix Public

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Jupyter Notebook · 3,480 stars · Apache-2.0 license · 331 forks · 150 issues (11 need help) · 8 PRs · Updated Apr 22, 2025
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (see the quantization sketch after this list)

    Python · 1,248 stars · Apache-2.0 license · 120 forks · 39 issues (9 need help) · 40 PRs · Updated Apr 23, 2025
  • HTML · 7 stars · 16 forks · 0 issues · 2 PRs · Updated Apr 20, 2025
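
For the flash-attention fork above, a minimal sketch of the flash_attn_func interface (shapes and dtype follow the upstream Dao-AILab documentation; a CUDA GPU is required, and the tensor sizes here are arbitrary examples):

```python
import torch
from flash_attn import flash_attn_func

# q, k, v have shape (batch, seqlen, nheads, headdim) and must be
# fp16/bf16 tensors on a CUDA device.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact (not approximate) attention, computed without materializing the
# full seqlen x seqlen score matrix -- hence the memory efficiency.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # (2, 1024, 8, 64)
```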
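
And for llm-compressor, a one-shot quantization sketch, assuming the library's oneshot entry point and GPTQModifier recipe API; the model, dataset, and argument values are illustrative, and exact import paths and signatures vary across releases:

```python
from llmcompressor import oneshot  # older releases expose this under llmcompressor.transformers
from llmcompressor.modifiers.quantization import GPTQModifier

# One-shot post-training quantization: calibrate on a small dataset,
# then save a compressed checkpoint that vLLM can serve directly.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example model
    dataset="open_platypus",                     # example calibration set
    recipe=GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]),
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="TinyLlama-1.1B-W4A16",
)
```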