A collection of attention and convolution operators implemented in PyTorch.
Here is a blog post introducing them.
Work in progress (WIP).
git clone https://github.com/Renovamen/torchop.git
cd torchop
python setup.py install
or install directly from GitHub with pip:
pip install git+https://github.com/Renovamen/torchop.git --upgrade
- Vanilla Attention (sketch below)
  - Neural Machine Translation by Jointly Learning to Align and Translate. ICLR 2015.
  - Effective Approaches to Attention-based Neural Machine Translation. EMNLP 2015.
- Self-Attention, Simplified Self-Attention (sketch below)
  - Attention Is All You Need. NIPS 2017.
- SAGAN Attention (sketch below)
  - Self-Attention Generative Adversarial Networks. ICML 2019.
- External Attention (sketch below)
  - Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks. arXiv 2021.
- Fast Attention (proposed in Fastformer) (sketch below)
  - Fastformer: Additive Attention Can Be All You Need. arXiv 2021.
- Halo Attention (or Blocked Local Self-Attention) (sketch below)
  - Scaling Local Self-Attention for Parameter Efficient Visual Backbones. CVPR 2021.
- LinAttention (proposed in Linformer) (sketch below)
  - Linformer: Self-Attention with Linear Complexity. arXiv 2020.
- Selective Kernel (SK) Convolution (sketch below)
  - Selective Kernel Networks. CVPR 2019.
- Involution (sketch below)
  - Involution: Inverting the Inherence of Convolution for Visual Recognition. CVPR 2021.
- Squeeze-and-Excitation (SE) Block (sketch below)
  - Squeeze-and-Excitation Networks. CVPR 2018.
- CBAM (sketch below)
  - CBAM: Convolutional Block Attention Module. ECCV 2018.
- BAM (sketch below)
  - BAM: Bottleneck Attention Module. BMVC 2018.
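Below are minimal, self-contained PyTorch sketches of the operators listed above. Class names, default hyperparameters, and simplifications (such as single-head variants) are illustrative assumptions, not this package's actual API. First, vanilla additive (Bahdanau-style) attention, where a single query vector attends over a sequence of keys:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention (illustrative sketch): score(q, k_i) = v^T tanh(W_q q + W_k k_i)."""
    def __init__(self, query_dim, key_dim, hidden_dim):
        super().__init__()
        self.w_query = nn.Linear(query_dim, hidden_dim, bias=False)
        self.w_key = nn.Linear(key_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, query, keys):
        # query: (batch, query_dim), e.g. a decoder state; keys: (batch, seq_len, key_dim)
        scores = self.v(torch.tanh(self.w_query(query).unsqueeze(1) + self.w_key(keys)))
        weights = scores.softmax(dim=1)           # (batch, seq_len, 1)
        context = (weights * keys).sum(dim=1)     # (batch, key_dim)
        return context, weights.squeeze(-1)
```

The Luong-style variant from the second paper replaces the tanh scoring with a (bilinear or plain) dot product between query and keys.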
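A single-head sketch of scaled dot-product self-attention; the multi-head version in the paper splits the projections into several subspaces and concatenates the results. A "simplified" variant typically drops the learned projections and attends over the raw inputs directly:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention over (batch, seq_len, dim) inputs (illustrative sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (batch, seq_len, seq_len)
        out = attn.softmax(dim=-1) @ v
        return self.proj(out)
```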
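A sketch of the SAGAN attention block over an image feature map: 1x1 convolutions form queries, keys, and values, and a learned scalar gamma (initialized to zero) gates the attention output that is added back to the input. The channel reduction factor of 8 is an assumed default:

```python
import torch
import torch.nn as nn

class SAGANAttention(nn.Module):
    """SAGAN-style self-attention over a (batch, channels, h, w) feature map (illustrative sketch)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, h*w, c//r)
        k = self.key(x).flatten(2)                     # (b, c//r, h*w)
        v = self.value(x).flatten(2)                   # (b, c, h*w)
        attn = torch.softmax(q @ k, dim=-1)            # (b, h*w, h*w)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return self.gamma * out + x
```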
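A sketch of external attention: instead of attending token-to-token, each token attends to two small learnable memories shared across the whole dataset, with a softmax over the token dimension followed by L1 normalization over the memory slots (the paper's double normalization). The memory size of 64 is an assumed default:

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """External attention with two shared linear memories M_k and M_v (illustrative sketch)."""
    def __init__(self, dim, memory_size=64):
        super().__init__()
        self.m_k = nn.Linear(dim, memory_size, bias=False)
        self.m_v = nn.Linear(memory_size, dim, bias=False)

    def forward(self, x):
        # x: (batch, n, dim)
        attn = self.m_k(x)                                       # (batch, n, memory_size)
        attn = attn.softmax(dim=1)                               # softmax over the token dimension
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)    # L1-normalize over the memory slots
        return self.m_v(attn)                                    # (batch, n, dim)
```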
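A single-head sketch of Fastformer's additive attention: the queries are pooled into one global query, which element-wise modulates the keys; the modulated keys are pooled into one global key, which modulates the values. Multi-head splitting and other details are omitted:

```python
import torch
import torch.nn as nn

class FastAttention(nn.Module):
    """Single-head Fastformer-style additive attention (illustrative sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1, bias=False)   # additive attention over queries
        self.w_k = nn.Linear(dim, 1, bias=False)   # additive attention over keys
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, n, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # pool the queries into a single global query vector
        alpha = (self.w_q(q) * self.scale).softmax(dim=1)    # (batch, n, 1)
        global_q = (alpha * q).sum(dim=1, keepdim=True)      # (batch, 1, dim)

        # modulate the keys by the global query, then pool them into a global key
        p = k * global_q
        beta = (self.w_k(p) * self.scale).softmax(dim=1)
        global_k = (beta * p).sum(dim=1, keepdim=True)       # (batch, 1, dim)

        # modulate the values by the global key; residual connection with the queries
        u = v * global_k
        return self.proj(u) + q
```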
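A single-head sketch of halo (blocked local) attention: queries are partitioned into non-overlapping blocks, and each block attends to its own region plus a surrounding halo of pixels gathered with F.unfold. Block and halo sizes are assumed defaults, and relative position embeddings are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaloAttention(nn.Module):
    """Single-head blocked local self-attention with haloing (illustrative sketch)."""
    def __init__(self, dim, block_size=8, halo_size=3):
        super().__init__()
        self.block = block_size
        self.halo = halo_size
        self.scale = dim ** -0.5
        self.to_q = nn.Conv2d(dim, dim, kernel_size=1, bias=False)
        self.to_kv = nn.Conv2d(dim, dim * 2, kernel_size=1, bias=False)

    def forward(self, x):
        b, c, h, w = x.shape
        blk, halo = self.block, self.halo
        assert h % blk == 0 and w % blk == 0, "feature map must be divisible by the block size"

        q = self.to_q(x)
        kv = self.to_kv(x)  # keys and values stacked along the channel dimension

        # queries: partition into non-overlapping blocks -> (b, n_blocks, blk*blk, c)
        q = q.reshape(b, c, h // blk, blk, w // blk, blk)
        q = q.permute(0, 2, 4, 3, 5, 1).reshape(b, -1, blk * blk, c)

        # keys / values: each block sees its own region plus a halo of neighbouring pixels
        kv = F.unfold(kv, kernel_size=blk + 2 * halo, stride=blk, padding=halo)
        kv = kv.reshape(b, 2 * c, (blk + 2 * halo) ** 2, -1).permute(0, 3, 2, 1)
        k, v = kv.split(c, dim=-1)  # each (b, n_blocks, (blk + 2*halo)^2, c)

        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v  # (b, n_blocks, blk*blk, c)

        # merge the blocks back into the spatial map
        out = out.reshape(b, h // blk, w // blk, blk, blk, c)
        return out.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
```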
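A sketch of Linformer attention: two learned projections compress the keys and values from seq_len tokens down to proj_len, so the attention map is (seq_len x proj_len) rather than quadratic in sequence length. Single head, fixed sequence length:

```python
import torch
import torch.nn as nn

class LinAttention(nn.Module):
    """Linformer-style attention with low-rank key/value projections (illustrative sketch)."""
    def __init__(self, dim, seq_len, proj_len=128):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.e = nn.Linear(seq_len, proj_len, bias=False)  # projects keys along the token dimension
        self.f = nn.Linear(seq_len, proj_len, bias=False)  # projects values along the token dimension
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, seq_len, dim); seq_len must match the length given at construction time
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        k = self.e(k.transpose(1, 2)).transpose(1, 2)   # (batch, proj_len, dim)
        v = self.f(v.transpose(1, 2)).transpose(1, 2)   # (batch, proj_len, dim)
        attn = ((q @ k.transpose(-2, -1)) * self.scale).softmax(dim=-1)  # (batch, seq_len, proj_len)
        return self.proj(attn @ v)
```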
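A sketch of a Selective Kernel unit with two branches: a 3x3 convolution and a dilated 3x3 convolution standing in for a 5x5. The branch outputs are fused by global pooling and a bottleneck FC, then recombined with a per-channel softmax over the branches. The grouped convolutions used in the paper are omitted:

```python
import torch
import torch.nn as nn

class SKConv(nn.Module):
    """Two-branch Selective Kernel convolution (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 32)
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.branch5 = nn.Sequential(  # dilated 3x3 with a 5x5 receptive field
            nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.select = nn.Linear(hidden, channels * 2)  # one score per channel per branch

    def forward(self, x):
        b, c, _, _ = x.shape
        u3, u5 = self.branch3(x), self.branch5(x)

        # fuse: global pooling of the summed branches, then a bottleneck FC
        z = self.fuse((u3 + u5).mean(dim=(2, 3)))      # (b, hidden)

        # select: per-channel softmax over the two branches
        weights = self.select(z).reshape(b, 2, c).softmax(dim=1)
        w3 = weights[:, 0].reshape(b, c, 1, 1)
        w5 = weights[:, 1].reshape(b, c, 1, 1)
        return w3 * u3 + w5 * u5
```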
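A sketch of a stride-1 involution layer: a small bottleneck predicts a k x k spatial kernel for every pixel, shared across channel groups, and that kernel is applied to the unfolded neighborhood of the pixel. Kernel size, group count, and reduction ratio are assumed defaults:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Involution(nn.Module):
    """Stride-1 involution with per-pixel, group-shared kernels (illustrative sketch)."""
    def __init__(self, channels, kernel_size=7, groups=4, reduction=4):
        super().__init__()
        self.kernel_size = kernel_size
        self.groups = groups
        # kernel generation: a per-pixel kernel of shape (groups, k*k) predicted from the input
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.BatchNorm2d(channels // reduction),
            nn.ReLU(inplace=True),
        )
        self.span = nn.Conv2d(channels // reduction, groups * kernel_size * kernel_size, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        k, g = self.kernel_size, self.groups

        # (b, g, k*k, h, w): one spatial kernel per pixel and per group
        weight = self.span(self.reduce(x)).reshape(b, g, k * k, h, w)

        # unfold the input into k*k neighborhoods: (b, g, c//g, k*k, h, w)
        patches = F.unfold(x, kernel_size=k, padding=k // 2)
        patches = patches.reshape(b, g, c // g, k * k, h, w)

        # multiply-accumulate over the neighborhood dimension
        out = (weight.unsqueeze(2) * patches).sum(dim=3)
        return out.reshape(b, c, h, w)
```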
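A sketch of a Squeeze-and-Excitation block: global average pooling squeezes each channel to a scalar, and a two-layer bottleneck gate rescales the channels:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel gate (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        scale = self.gate(x.mean(dim=(2, 3)))   # squeeze: (b, c); excite: (b, c) in [0, 1]
        return x * scale.reshape(b, c, 1, 1)
```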
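A sketch of CBAM: a channel attention gate (a shared MLP over average- and max-pooled channel descriptors) followed by a spatial attention gate (a 7x7 convolution over average- and max-pooled channel maps), both applied multiplicatively:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """CBAM channel gate: shared MLP over avg- and max-pooled descriptors (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        return torch.sigmoid(avg + mx).reshape(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """CBAM spatial gate: 7x7 conv over concatenated avg/max channel maps (illustrative sketch)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention()

    def forward(self, x):
        x = x * self.channel(x)
        return x * self.spatial(x)
```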
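A sketch of BAM: a channel branch (global pooling plus an MLP) and a spatial branch (dilated convolutions) are summed into a single attention map, and the input is scaled residually by (1 + sigmoid(map)). The reduction ratio and dilation are assumed defaults:

```python
import torch
import torch.nn as nn

class BAM(nn.Module):
    """Bottleneck Attention Module with parallel channel and spatial branches (illustrative sketch)."""
    def __init__(self, channels, reduction=16, dilation=4):
        super().__init__()
        hidden = channels // reduction
        self.channel = nn.Sequential(  # channel branch on the pooled descriptor
            nn.Linear(channels, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )
        self.spatial = nn.Sequential(  # spatial branch with dilated 3x3 convolutions
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        ch = self.channel(x.mean(dim=(2, 3))).reshape(b, c, 1, 1)  # (b, c, 1, 1)
        sp = self.spatial(x)                                       # (b, 1, h, w)
        attn = torch.sigmoid(ch + sp)                              # broadcast to (b, c, h, w)
        return x * (1 + attn)
```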