This repository has been archived by the owner on Apr 25, 2023. It is now read-only.

Non linearity after attention #6

Merged (3 commits, Aug 28, 2018)
Changes from all commits
model.py: 4 changes (2 additions & 2 deletions)
@@ -40,11 +40,11 @@ def forward(self, hidden, encoder_outputs):
         h = hidden.repeat(timestep, 1, 1).transpose(0, 1)
         encoder_outputs = encoder_outputs.transpose(0, 1)  # [B*T*H]
         attn_energies = self.score(h, encoder_outputs)
-        return F.softmax(attn_energies, dim=1).unsqueeze(1)
+        return F.relu(attn_energies, dim=1).unsqueeze(1)


Review comment on the change above: "And to leave this line alone."


     def score(self, hidden, encoder_outputs):
         # [B*T*2H]->[B*T*H]
-        energy = self.attn(torch.cat([hidden, encoder_outputs], 2))
+        energy = F.softmax(self.attn(torch.cat([hidden, encoder_outputs], 2)))


Review comment on the change above: "I believe you need a ReLU here."

         energy = energy.transpose(1, 2)  # [B*H*T]
         v = self.v.repeat(encoder_outputs.size(0), 1).unsqueeze(1)  # [B*1*H]
         energy = torch.bmm(v, energy)  # [B*1*T]
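For reference, below is a minimal sketch of the full Attention module with the activations placed the way the review comments suggest: ReLU as the non-linearity after the linear projection inside score, and softmax kept in forward so the returned weights sum to one over the time dimension. Only forward and score appear in this diff; the __init__ (hidden size, the self.attn linear layer, the self.v vector) and the exact input shapes are assumptions filled in for completeness.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    # Sketch only: __init__ is assumed, since the diff shows forward/score alone.
    def __init__(self, hidden_size):
        super().__init__()
        self.attn = nn.Linear(hidden_size * 2, hidden_size)
        self.v = nn.Parameter(torch.rand(hidden_size))

    def forward(self, hidden, encoder_outputs):
        # hidden: [1, B, H] (decoder state); encoder_outputs: [T, B, H] (assumed from the transposes in the diff)
        timestep = encoder_outputs.size(0)
        h = hidden.repeat(timestep, 1, 1).transpose(0, 1)    # [B, T, H]
        encoder_outputs = encoder_outputs.transpose(0, 1)     # [B, T, H]
        attn_energies = self.score(h, encoder_outputs)        # [B, T]
        # Softmax stays here ("leave this line alone"): normalizes energies into weights over T.
        return F.softmax(attn_energies, dim=1).unsqueeze(1)   # [B, 1, T]

    def score(self, hidden, encoder_outputs):
        # ReLU as the non-linearity after the linear projection,
        # per the review comment "I believe you need a ReLU here."
        energy = F.relu(self.attn(torch.cat([hidden, encoder_outputs], 2)))  # [B, T, H]
        energy = energy.transpose(1, 2)                                       # [B, H, T]
        v = self.v.repeat(encoder_outputs.size(0), 1).unsqueeze(1)            # [B, 1, H]
        energy = torch.bmm(v, energy)                                         # [B, 1, T]
        return energy.squeeze(1)                                              # [B, T]

With this split, score acts as the small MLP that produces unnormalized alignment energies, and forward turns them into a [B, 1, T] weight tensor that can then be bmm-ed with the encoder outputs to form a context vector.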