Forked from vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
megaease/vllm
Releases
No releases published
Packages
No packages published
Languages
- Python 84.8%
- Cuda 10.4%
- C++ 3.1%
- C 0.6%
- Shell 0.6%
- CMake 0.4%
- Dockerfile 0.1%