
fnet-google-pytorch

PyTorch implementation of FNet: Mixing Tokens with Fourier Transforms by Google Research.

This paper replaces the self-attention block in the Transformer architecture with a Fourier Transform.

Basic idea

  • Attention is a mechanism for facilitating interaction between tokens in a sequence (mixing tokens).
  • This paper instead mixes tokens using the Discrete Fourier Transform (DFT); a minimal sketch of such a mixing layer follows this list.
  • Advantages:
    • The DFT is unparameterized (its basis is fixed), which drastically reduces the number of parameters.
    • Computing the DFT with the Fast Fourier Transform (FFT) is extremely fast.
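
As a rough illustration, here is a minimal sketch of such a Fourier mixing layer in PyTorch: a DFT is applied along the hidden dimension and then along the sequence dimension, and only the real part is kept, as the paper describes. The class name FourierMixing is illustrative and not part of this repo's API.

 import torch
 import torch.nn as nn

 class FourierMixing(nn.Module):
     """Hypothetical token-mixing layer: 2D DFT, keeping only the real part."""
     def forward(self, x):
         # x: (batch, seq_len, hidden). FFT along hidden dim, then along sequence dim.
         return torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real

Note that this layer has no learnable parameters, which is where the parameter savings listed above come from.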

Usage

 import torch
 from fnet import Fnet

 # N = number of layers, dhidden = input embedding size
 model = Fnet(N=2, dhidden=32)
 model.eval()  # inference mode, equivalent to model.train(False)

 # Input embeddings (not produced by the model): batch_size=2, sequence_length=8, dhidden=32
 x = torch.randn((2, 8, 32))
 y = model(x)  # y.shape == (2, 8, 32): the model's representation, without an output projection

Citation

@misc{leethorp2021fnet,
     title={FNet: Mixing Tokens with Fourier Transforms}, 
     author={James Lee-Thorp and Joshua Ainslie and Ilya Eckstein and Santiago Ontanon},
     year={2021},
     eprint={2105.03824},
     archivePrefix={arXiv},
     primaryClass={cs.CL}
}
