
Corrected the width of the circular convolution adjustment #4

Merged 1 commit into master from patch-1 on Mar 26, 2018

Conversation

@JulesGM (Contributor) commented Mar 26, 2018

The code concatenates 2 elements to each side, but only needs to concatenate 1 to each side.

Tested with the following code:

import torch
import torch.nn
from torch.nn import functional as F
from torch.autograd import Variable
from random import randint

def _convolve_original(w, s):
    """Circular convolution: pads 2 elements per side, then trims the extra outputs."""
    assert s.size(0) == 3
    t = torch.cat([w[-2:], w, w[:2]])
    c = F.conv1d(t.view(1, 1, -1), s.view(1, 1, -1)).view(-1)
    return c[1:-1]


def _convolve_new(w, s):
    """Circular convolution: pads only 1 element per side, so no trimming is needed."""
    assert s.size(0) == 3
    t = torch.cat([w[-1:], w, w[:1]])
    c = F.conv1d(t.view(1, 1, -1), s.view(1, 1, -1)).view(-1)
    return c


for i in range(10000):
    N = randint(10, 1000)
    w = Variable(torch.zeros([N]))
    torch.nn.init.uniform(w)

    s = Variable(torch.zeros([3]))
    torch.nn.init.uniform(s)


    assert (_convolve_original(w, s) == _convolve_new(w, s)).all()
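For readers without the old (pre-0.4, `Variable`-era) PyTorch that the test above assumes, the same equivalence can be sketched in plain Python. The function names below are illustrative, not from the PR: both the 2-per-side padding with trimming and the 1-per-side padding reduce to circular convolution by modular indexing when the shift kernel has width 3. Note that, like `F.conv1d`, the sliding-window sums here are cross-correlations (no kernel flip).

```python
def circular_conv_direct(w, s):
    """Reference definition: circular convolution via modular indexing."""
    assert len(s) == 3
    n = len(w)
    return [sum(s[j] * w[(i + j - 1) % n] for j in range(3)) for i in range(n)]

def circular_conv_pad1(w, s):
    """Pad one element on each side, then slide the 3-tap kernel ('valid' mode)."""
    assert len(s) == 3
    t = [w[-1]] + list(w) + [w[0]]  # length n + 2, so 'valid' output has length n
    return [sum(t[i + j] * s[j] for j in range(3)) for i in range(len(w))]

def circular_conv_pad2(w, s):
    """Pad two elements per side, convolve, then trim one output from each end."""
    assert len(s) == 3
    t = list(w[-2:]) + list(w) + list(w[:2])  # length n + 4, output length n + 2
    c = [sum(t[i + j] * s[j] for j in range(3)) for i in range(len(t) - 2)]
    return c[1:-1]  # drop the two surplus outputs, as _convolve_original does

# All three agree (up to float round-off) on any input:
w = [1.0, 2.0, 3.0, 4.0]
s = [0.1, 0.2, 0.7]
```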

@loudinthecloud (Owner) left a comment:

Looks wonderful, thanks.

Can you please rename the commit's title to something more meaningful?

@JulesGM JulesGM changed the title Update memory.py Corrected the width of the circular convolution adjustment Mar 26, 2018
@JulesGM (Contributor, Author) commented Mar 26, 2018

Ok, I fixed the title. I will do the same with the other PR.

@loudinthecloud loudinthecloud merged commit 7ae872d into loudinthecloud:master Mar 26, 2018
@JulesGM JulesGM deleted the patch-1 branch March 26, 2018 17:40