# Graph Kernel Attention Transformers (GKAT)

This repo contains an implementation of the GKAT algorithm introduced in the ICML 2022 paper *From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers*.

*(Figure: GKAT overview; image `GKAT_description`.)*
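
For context, GKAT replaces the hard binary mask of a standard masked Transformer with a graph kernel: the unnormalized attention matrix is modulated entry-wise by a kernel matrix defined over the nodes of an input graph (e.g., built from random walks), and the result is row-normalized. The sketch below is a minimal, self-contained illustration of that idea only; the function names, the specific truncated random-walk kernel, and all parameters are assumptions for exposition, not this repository's actual API.

```python
import numpy as np

def random_walk_kernel_mask(adj: np.ndarray, discount: float = 0.5, steps: int = 3) -> np.ndarray:
    """Toy graph-kernel mask (illustrative, not the repo's kernel): a truncated,
    discounted sum of random-walk transition powers, row-normalized."""
    deg = adj.sum(axis=1, keepdims=True)
    trans = adj / np.maximum(deg, 1.0)       # row-stochastic random-walk transition matrix
    mask = np.eye(adj.shape[0])
    walk, coeff = np.eye(adj.shape[0]), 1.0
    for _ in range(steps):
        walk = walk @ trans                  # k-step walk probabilities
        coeff *= discount
        mask = mask + coeff * walk
    return mask / mask.sum(axis=1, keepdims=True)

def gkat_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Kernel-masked attention: exponentiated scaled dot-product scores are
    modulated entry-wise (Hadamard product) by the graph-kernel mask, then
    row-normalized before aggregating the values."""
    d = Q.shape[-1]
    scores = np.exp(Q @ K.T / np.sqrt(d)) * mask
    return (scores / scores.sum(axis=-1, keepdims=True)) @ V

# Toy usage on a 4-node path graph (hypothetical data).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = gkat_attention(Q, K, V, random_walk_kernel_mask(adj))  # -> shape (4, 8)
```

Because the mask comes from a graph kernel rather than a binary adjacency pattern, attention can decay smoothly with graph distance, which is the setting the paper's scalability analysis targets.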

## Citation

If you find this project helpful, please consider giving it a ⭐️ and citing our paper:

```bibtex
@inproceedings{choromanski2022block,
  title={From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers},
  author={Choromanski, Krzysztof and Lin, Han and Chen, Haoxian and Zhang, Tianyi and Sehanobish, Arijit and Likhosherstov, Valerii and Parker-Holder, Jack and Sarlos, Tamas and Weller, Adrian and Weingarten, Thomas},
  booktitle={International Conference on Machine Learning},
  pages={3962--3983},
  year={2022},
  organization={PMLR}
}
```
