From 022f965a783540f228124760da7f7eb75a848aca Mon Sep 17 00:00:00 2001
From: Anne Ouyang
Date: Tue, 3 Dec 2024 18:27:21 -0800
Subject: [PATCH] Add blog citation

---
 _blogs/kernelbench.md | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/_blogs/kernelbench.md b/_blogs/kernelbench.md
index 9fef8bd1..324f67dc 100644
--- a/_blogs/kernelbench.md
+++ b/_blogs/kernelbench.md
@@ -64,7 +64,7 @@ As models grow larger and become more embedded into our daily lives, having fine
 ## Big O is not all you need.
 
 In algorithm classes we are taught to view Big O as the gold standard for measuring the efficiency of algorithms. In ML research, new model architectures may have better theoretical complexity, implying they should outperform traditional architectures in speed or efficiency, but when it comes down to real-world performance, these newer models can struggle to keep up with established architectures.
-
+
 
 *(Meme credit to Michael Zhang)*
@@ -360,4 +360,14 @@ Many design patterns and optimizations are reusable across GPU kernels –– fu
 
 # Acknowledgements
 
-We would like to thank Aaryan Singhal, AJ Root, Allen Nie, Anjiang Wei, Benjamin Spector, Bilal Khan, Bradley Brown, Dylan Patel, Genghan Zhang, Hieu Pham, Hugh Leather, John Yang, Jon Saad-Falcon, Jordan Juravsky, Mark Saroufim, Michael Zhang, Ryan Ehrlich, Sahan Paliskara, Sahil Jain, Shicheng (George) Liu, Simran Arora, Suhas Kotha, Vikram Sharma Mailthody, and Yangjun Ruan for insightful discussions and constructive feedback in shaping this work. We would also like to thank SWEBench for its inspiration and reference, which greatly contributed to the development of this work.
\ No newline at end of file
+We would like to thank Aaryan Singhal, AJ Root, Allen Nie, Anjiang Wei, Benjamin Spector, Bilal Khan, Bradley Brown, Dylan Patel, Genghan Zhang, Hieu Pham, Hugh Leather, John Yang, Jon Saad-Falcon, Jordan Juravsky, Mark Saroufim, Michael Zhang, Ryan Ehrlich, Sahan Paliskara, Sahil Jain, Shicheng (George) Liu, Simran Arora, Suhas Kotha, Vikram Sharma Mailthody, and Yangjun Ruan for insightful discussions and constructive feedback in shaping this work. We would also like to thank SWEBench for its inspiration and reference, which greatly contributed to the development of this work.
+
+# Citing
+```bibtex
+@misc{ouyang2024kernelbench,
+    title={KernelBench: Can LLMs Write GPU Kernels?},
+    author={Anne Ouyang and Simon Guo and Azalia Mirhoseini},
+    year={2024},
+    url={https://scalingintelligence.stanford.edu/blogs/kernelbench/},
+}
+```
\ No newline at end of file