
Substitute regular attention module with softmax-free attention module #9

Open

Capchenxi opened this issue Nov 15, 2022 · 0 comments

@Capchenxi
Hello,

Some background: due to limitations of the computation platform I'm using, where the softmax operator is very time-consuming, I'm trying to replace the regular attention modules with softmax-free attention modules.

I have one question about the structure of SOFT. The core of the softmax-free attention module runs like this:

    def forward(self, X, H, W):
        # Q and V are both projected from the same input X (there is no separate K)
        Q = self.split_heads(self.W_q(X))
        V = self.split_heads(self.W_v(X))
        # softmax-free attention over the H x W spatial layout
        attn_out = self.attn(Q, V, H, W)
        attn_out = self.combine_heads(attn_out)
        # feed-forward on the recombined heads
        out = self.ff(attn_out)
        return out

Since Q and V are both generated from X, does that mean this attention module is akin to a self-attention module rather than a cross-attention module, where Q, K, and V come from different domains? If that is the case, do you have any suggestions for replacing a regular cross-attention module with softmax-free attention? Thanks.
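
For reference, below is a rough sketch of the kind of regular (softmax-based) cross-attention module I would like to replace; the class and argument names are just placeholders of my own, not taken from SOFT:

    import torch
    import torch.nn as nn

    class RegularCrossAttention(nn.Module):
        """Plain softmax cross-attention (illustrative only):
        queries come from x, while keys/values come from a separate context."""

        def __init__(self, dim, num_heads=8):
            super().__init__()
            assert dim % num_heads == 0
            self.num_heads = num_heads
            self.head_dim = dim // num_heads
            self.W_q = nn.Linear(dim, dim)
            self.W_k = nn.Linear(dim, dim)
            self.W_v = nn.Linear(dim, dim)
            self.proj = nn.Linear(dim, dim)

        def split_heads(self, t):
            # (B, N, dim) -> (B, heads, N, head_dim)
            B, N, _ = t.shape
            return t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        def forward(self, x, context):
            Q = self.split_heads(self.W_q(x))        # queries from one domain
            K = self.split_heads(self.W_k(context))  # keys from the other domain
            V = self.split_heads(self.W_v(context))  # values from the other domain
            attn = (Q @ K.transpose(-2, -1)) / self.head_dim ** 0.5
            attn = attn.softmax(dim=-1)              # the softmax I would like to avoid
            out = (attn @ V).transpose(1, 2).flatten(2)
            return self.proj(out)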

Best,
Chenxi
