Neural Magic
Neural Magic empowers developers to optimize and deploy LLMs at scale. Our model compression and acceleration techniques enable top performance with vLLM.
Repositories
- gateway-api-inference-extension Public (forked from kubernetes-sigs/gateway-api-inference-extension): Gateway API Inference Extension
- model-validation-configs Public
- speculators Public
- compressed-tensors Public: a safetensors extension for efficiently storing sparse quantized tensors on disk
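The idea behind storing sparse quantized tensors compactly can be sketched in a few lines: quantize float values to int8 with a scale factor, then record only the nonzero entries as (index, value) pairs. This toy sketch is purely illustrative and is not the compressed-tensors API; all function names here are hypothetical.

```python
# Toy illustration of sparse + quantized storage (NOT the compressed-tensors API).
# Dense floats -> int8 quantization -> keep only nonzero (index, value) pairs.

def compress(values, scale):
    """Quantize to int8 with the given scale and keep only nonzero entries."""
    entries = []
    for i, v in enumerate(values):
        q = round(v / scale)
        if q != 0:
            entries.append((i, max(-128, min(127, q))))  # clamp to int8 range
    return {"length": len(values), "scale": scale, "entries": entries}

def decompress(blob):
    """Reconstruct the dense float list from the sparse quantized form."""
    out = [0.0] * blob["length"]
    for i, q in blob["entries"]:
        out[i] = q * blob["scale"]
    return out

dense = [0.0, 0.5, 0.0, -1.0, 0.0, 0.25]
blob = compress(dense, scale=0.25)
restored = decompress(blob)
# Only 3 of 6 entries are stored, and the values round-trip exactly here
# because each is an integer multiple of the scale.
```

In practice the library targets on-disk storage via the safetensors format rather than Python lists, but the space saving comes from the same two levers shown above: narrower dtypes and skipping zeros.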