From ac9d2aae8ad18908badbc84af150782493d6667a Mon Sep 17 00:00:00 2001
From: Loser Cheems
Date: Tue, 5 Aug 2025 10:15:49 +0800
Subject: [PATCH] Updates citation format and adds acknowledgment

Converts BibTeX citation to proper arXiv format with eprint ID and classification.
Adds OpenSeek to acknowledgments for kernel development support.
Includes additional authors Liangdong Wang and Guang Liu in the citation.
---
 README.md | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index a182bbf..f66c42c 100644
--- a/README.md
+++ b/README.md
@@ -235,11 +235,14 @@ This project is licensed under the BSD 3-Clause License. See [LICENSE](LICENSE)
 If you use Flash-DMA in your research, please cite:
 
 ```bibtex
-@misc{flash_dma_2025,
-  title={Trainable Dynamic Mask Sparse Attention},
-  author={Jingze Shi and Yifan Wu and Bingheng Wu and Yiran Peng and Yuyu Luo},
-  year={2025},
-  url={https://github.com/SmallDoges/flash-dmattn}
+@misc{shi2025trainabledynamicmasksparse,
+  title={Trainable Dynamic Mask Sparse Attention},
+  author={Jingze Shi and Yifan Wu and Bingheng Wu and Yiran Peng and Liangdong Wang and Guang Liu and Yuyu Luo},
+  year={2025},
+  eprint={2508.02124},
+  archivePrefix={arXiv},
+  primaryClass={cs.AI},
+  url={https://arxiv.org/abs/2508.02124},
 }
 ```
 
@@ -247,6 +250,7 @@ If you use Flash-DMA in your research, please cite:
 This project builds upon and integrates several excellent works:
 
+- **[OpenSeek](https://github.com/FlagAI-Open/OpenSeek)** - Kernel development support
 - **[Flash-Attention](https://github.com/Dao-AILab/flash-attention)** - Memory-efficient attention computation
 - **[NVIDIA CUTLASS](https://github.com/NVIDIA/cutlass)** - High-performance matrix operations library