This repository has been archived by the owner on Mar 31, 2019. It is now read-only.

Commit

better docs for memory
justheuristic committed Aug 3, 2017
1 parent 4d8c86d commit c5803ba
Showing 3 changed files with 5 additions and 3 deletions.
2 changes: 1 addition & 1 deletion agentnet/memory/__init__.py
@@ -35,5 +35,5 @@

 from .gru import GRUMemoryLayer
 from .logical import CounterLayer,SwitchLayer
-from .attention import AttentionLayer
+from .attention import AttentionLayer,DotAttentionLayer
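
With this re-export, the new dot-product attention layer can be imported straight from the package, just like AttentionLayer (a minimal usage sketch, not part of the commit itself):

    # both attention layers are now exposed at package level
    from agentnet.memory import AttentionLayer, DotAttentionLayer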

4 changes: 2 additions & 2 deletions agentnet/memory/attention.py
@@ -23,8 +23,8 @@ class AttentionLayer(DictLayer):
 - rnn/emb format [batch,seq_len,units] works out of the box
 - 1d convolution format [batch,units,seq_len] needs dimshuffle(conv,[0,2,1])
 - 2d convolution format [batch,units,dim1,dim2] needs two-step procedure
-  - step1 = dimshuffle(conv,[0,2,3,1])
-  - step2 = reshape(step1,[-1,dim1*dim2,units])
+  - step1 = dimshuffle(conv,[0,2,3,1])
+  - step2 = reshape(step1,[-1,dim1*dim2,units])
 - higher dimensionality follows the same principle as 2d example above
 - reshape and dimshuffle can both be found in lasagne.layers (aliases to ReshapeLayer and DimshuffleLayer)
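
For concreteness, the two-step procedure from that docstring can be written as a small Lasagne graph. This is an illustrative sketch only: the shapes, layer names (image, conv, step1, step2), and the attention hook-up comment are assumptions, not code from this repository.

    # flatten a 2d-conv feature map into the [batch, seq_len, units] format
    from lasagne.layers import InputLayer, Conv2DLayer, DimshuffleLayer, ReshapeLayer

    image = InputLayer((None, 3, 16, 16))                     # [batch, channels, h, w]
    conv = Conv2DLayer(image, num_filters=32, filter_size=3)  # [batch, units=32, dim1, dim2]

    step1 = DimshuffleLayer(conv, (0, 2, 3, 1))               # [batch, dim1, dim2, units]
    step2 = ReshapeLayer(step1, ([0], -1, [3]))               # [batch, dim1*dim2, units]
    # step2 is now in the rnn/emb format and can serve as the sequence
    # input to AttentionLayer / DotAttentionLayer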
2 changes: 2 additions & 0 deletions docs/modules/memory.rst
@@ -20,6 +20,8 @@ Augmentations

 .. autofunction:: AttentionLayer
 
+.. autofunction:: DotAttentionLayer
+
 .. autofunction:: StackAugmentation
 
 .. autofunction:: WindowAugmentation
