forked from Dao-AILab/flash-attention
vllm-project/flash-attention
About
Fast and memory-efficient exact attention
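As a rough illustration of what the library provides, below is a minimal sketch of calling the FlashAttention kernel through the upstream `flash_attn` package's `flash_attn_func`. It assumes the `flash-attn` package is installed, a CUDA GPU is available, and that this fork keeps a compatible interface (the vLLM build may instead be importable as `vllm_flash_attn`).

```python
# Minimal usage sketch, assuming the upstream flash-attn package and a CUDA GPU.
# The vLLM fork may expose the same function under the vllm_flash_attn module name.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 16, 64

# FlashAttention expects (batch, seqlen, nheads, headdim) tensors in fp16/bf16 on GPU.
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact (non-approximate) attention, computed without materializing the full
# seqlen x seqlen score matrix -- hence "fast and memory-efficient".
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```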
Languages
- Python 46.1%
- C++ 40.0%
- Cuda 13.0%
- CMake 0.7%
- C 0.1%
- Dockerfile 0.1%