# ring-attention

Ring Attention leverages blockwise computation of self-attention across multiple GPUs, enabling training and inference on sequences that would be too long for a single device.
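To illustrate the idea, here is a minimal single-process NumPy sketch (not the repository's implementation): the sequence is split into blocks, each "device" keeps its query block, and the key/value blocks rotate around the ring while a streaming softmax accumulates partial results. The device count, block size, and helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 4 simulated devices, each holding one block of the sequence.
num_devices, block_len, d = 4, 8, 16
seq_len = num_devices * block_len

Q = rng.normal(size=(seq_len, d))
K = rng.normal(size=(seq_len, d))
V = rng.normal(size=(seq_len, d))

def full_attention(Q, K, V):
    """Reference: standard softmax attention over the full sequence."""
    s = Q @ K.T / np.sqrt(d)
    p = np.exp(s - s.max(axis=1, keepdims=True))
    return (p / p.sum(axis=1, keepdims=True)) @ V

def ring_attention(Q, K, V):
    q_blocks = Q.reshape(num_devices, block_len, d)
    k_blocks = K.reshape(num_devices, block_len, d)
    v_blocks = V.reshape(num_devices, block_len, d)

    out = np.zeros((num_devices, block_len, d))
    # Per-device running softmax statistics (online/streaming softmax).
    m = np.full((num_devices, block_len, 1), -np.inf)  # running row max
    l = np.zeros((num_devices, block_len, 1))          # running exp sum

    for step in range(num_devices):
        for dev in range(num_devices):
            # Which KV block this device sees at this step; indexing with
            # (dev + step) % num_devices simulates passing KV blocks to
            # the next device in the ring each step.
            kb = k_blocks[(dev + step) % num_devices]
            vb = v_blocks[(dev + step) % num_devices]
            s = q_blocks[dev] @ kb.T / np.sqrt(d)
            m_new = np.maximum(m[dev], s.max(axis=1, keepdims=True))
            scale = np.exp(m[dev] - m_new)  # rescale previous partials
            p = np.exp(s - m_new)
            l[dev] = l[dev] * scale + p.sum(axis=1, keepdims=True)
            out[dev] = out[dev] * scale + p @ vb
            m[dev] = m_new

    return (out / l).reshape(seq_len, d)

print(np.allclose(full_attention(Q, K, V), ring_attention(Q, K, V)))  # True
```

No single step ever materializes the full attention matrix; each device only ever holds one query block and one KV block at a time, which is what makes the long-context scaling possible.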

This repository contains notebooks, experiments and a collection of links to papers and other material related to Ring Attention.

## Weekly Meeting

Every Sunday at 5 PM UTC we meet in the "General" voice channel of the CUDA MODE Discord server. You can contact us asynchronously at any time in the #ring-attention channel.

## Research / Material

## Notebooks

## Development References

## How to contribute

Contact us on the CUDA MODE Discord server: https://discord.gg/cudamode. PRs are welcome (please create an issue first).