Repositories
Showing 10 of 44 repositories
- vllm (Public, fork of vllm-project/vllm)
A high-throughput and memory-efficient inference and serving engine for LLMs.
- axs2kiss (Public)
Automated [KRAI X](https://github.com/krai/axs) workflows for dedicated inference engines on selected backends: vLLM and SGLang on CUDA and ROCm, and NIM on CUDA, using the OpenAI-API-compatible LoadGen client.
- kilt-mlperf (Public)
KILT (KRAI Inference Library Technology): proudly powering some of the fastest and most energy-efficient submissions in the history of MLPerf Inference.
- axs2qaic-docker (Public)
Building Docker images for reproducing MLPerf Inference submissions with Qualcomm Cloud AI 100 accelerators.