@neuralmagic

Neural Magic

Neural Magic (acquired by Red Hat) empowers developers to optimize and deploy LLMs at scale. Our model compression and acceleration techniques enable top performance with vLLM.

Pinned

  1. deepsparse Public archive

    Sparsity-aware deep learning inference runtime for CPUs

    Python · 3.2k stars · 189 forks
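The deepsparse entry above hinges on one idea: when most weights are zero, storing and multiplying only the non-zeros cuts both memory traffic and compute. A minimal stdlib-only sketch of a sparsity-aware matrix-vector product (illustrative only, not DeepSparse's actual API or kernel design):

```python
def sparse_matvec(rows, x):
    """Multiply a sparse matrix by a dense vector x.

    Each row is stored as (indices, values), holding only non-zero
    weights, so the multiply-accumulate work scales with the number
    of non-zeros rather than the full matrix size.
    """
    return [
        sum(v * x[i] for i, v in zip(indices, values))
        for indices, values in rows
    ]

# A 2x3 matrix [[1, 0, 2], [0, 3, 0]] stored sparsely:
rows = [([0, 2], [1.0, 2.0]), ([1], [3.0])]
result = sparse_matvec(rows, [1.0, 2.0, 3.0])  # [7.0, 6.0]
```

The real runtime goes much further (vectorized kernels, cache-aware scheduling), but the storage-and-skip principle is the same.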

Repositories

Showing 10 of 78 repositories
  • speculators Public
    Python · 13 stars · Apache-2.0 · 1 fork · 12 issues (3 need help) · 7 pull requests · Updated Jul 23, 2025
  • vllm Public Forked from vllm-project/vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 13 stars · Apache-2.0 · 8,976 forks · 0 issues · 8 pull requests · Updated Jul 23, 2025
  • research Public

    Repository to enable research flows

    Python · 1 star · 0 forks · 0 issues · 3 pull requests · Updated Jul 23, 2025
  • compressed-tensors Public

    A safetensors extension to efficiently store sparse quantized tensors on disk

    Python · 138 stars · Apache-2.0 · 18 forks · 5 issues · 25 pull requests · Updated Jul 22, 2025
  • nm-actions Public

    Neural Magic GitHub Actions (GHA) workflows

    Python · 0 stars · Apache-2.0 · 0 forks · 0 issues · 4 pull requests · Updated Jul 21, 2025
  • axolotl Public Forked from axolotl-ai-cloud/axolotl

    Go ahead and axolotl questions

    Python · 0 stars · Apache-2.0 · 1,100 forks · 0 issues · 5 pull requests · Updated Jul 20, 2025
  • flashinfer Public Forked from flashinfer-ai/flashinfer

    FlashInfer: Kernel Library for LLM Serving

    Cuda · 0 stars · Apache-2.0 · 397 forks · 0 issues · 0 pull requests · Updated Jul 18, 2025
  • DeepGEMM Public Forked from deepseek-ai/DeepGEMM

    DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling

    Python · 0 stars · MIT · 654 forks · 0 issues · 0 pull requests · Updated Jul 18, 2025
  • arena-hard-auto Public Forked from lmarena/arena-hard-auto

    Arena-Hard-Auto: An automatic LLM benchmark.

    Python · 0 stars · Apache-2.0 · 113 forks · 0 issues · 1 pull request · Updated Jul 16, 2025
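To illustrate the idea behind compressed-tensors in the list above: a sparse quantized tensor can be stored as just its non-zero quantized values plus their indices, so zeros cost no space on disk. A stdlib-only sketch of that storage scheme (function names and the COO-style layout are illustrative, not the library's actual API or format):

```python
def compress(tensor, scale):
    """Quantize a dense list of floats to int8 and keep only non-zeros.

    Returns (indices, qvalues, scale): a simple COO-style sparse
    representation. The real compressed-tensors format is richer
    (safetensors-based, with per-tensor compression metadata).
    """
    indices, qvalues = [], []
    for i, x in enumerate(tensor):
        q = max(-128, min(127, round(x / scale)))  # clamp to int8 range
        if q != 0:
            indices.append(i)
            qvalues.append(q)
    return indices, qvalues, scale


def decompress(indices, qvalues, scale, length):
    """Reconstruct the dense, dequantized tensor."""
    out = [0.0] * length
    for i, q in zip(indices, qvalues):
        out[i] = q * scale
    return out


dense = [0.0, 0.5, 0.0, -1.0, 0.0, 0.25]
idx, vals, s = compress(dense, scale=0.25)   # idx=[1, 3, 5], vals=[2, -4, 1]
restored = decompress(idx, vals, s, len(dense))
```

With 50% or more sparsity, storing (index, int8 value) pairs is already far smaller than dense float32, which is the payoff the repository description refers to.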
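The "fine-grained scaling" in the DeepGEMM entry means each small block of a tensor carries its own scale factor, so a single outlier cannot wreck quantization precision for the whole tensor. A hedged stdlib sketch of per-block scaling (shown with int8-style rounding for simplicity; DeepGEMM applies the idea inside FP8 GEMM kernels):

```python
def quantize_blocks(vec, block=4):
    """Per-block scaling: each block of `block` values gets its own
    scale, so an outlier in one block does not crush precision in
    the others. Returns a list of (scale, quantized_values) pairs.
    """
    out = []
    for start in range(0, len(vec), block):
        chunk = vec[start:start + block]
        scale = max(abs(v) for v in chunk) / 127 or 1.0  # avoid /0 on all-zero blocks
        out.append((scale, [round(v / scale) for v in chunk]))
    return out


def dequantize_blocks(blocks):
    """Undo per-block quantization back to a flat list of floats."""
    return [q * scale for scale, qs in blocks for q in qs]


# Small values and large outliers coexist without a shared scale:
vec = [0.1, 0.2, -0.4, 0.3, 100.0, 50.0, 25.0, 12.5]
roundtrip = dequantize_blocks(quantize_blocks(vec))
```

A single global scale here would be dominated by 100.0, flattening the first block to near-zero; per-block scales keep both regions accurate.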