FactorizationMachine implementation and paper are different #22

Closed
JavanTang opened this issue Jun 9, 2020 · 2 comments

@JavanTang

import torch


class FactorizationMachine(torch.nn.Module):

    def __init__(self, reduce_sum=True):
        super().__init__()
        self.reduce_sum = reduce_sum

    def forward(self, x):
        """
        :param x: Float tensor of size ``(batch_size, num_fields, embed_dim)``
        """
        square_of_sum = torch.sum(x, dim=1) ** 2
        sum_of_square = torch.sum(x ** 2, dim=1)
        ix = square_of_sum - sum_of_square
        if self.reduce_sum:
            ix = torch.sum(ix, dim=1, keepdim=True)
        return 0.5 * ix

[screenshot of the FM model equation from the paper]

What happened to the v parameters from the paper?

yfreedomliTHU (Contributor) commented Jun 9, 2020

class FactorizationMachineModel(torch.nn.Module):
    """
    A pytorch implementation of Factorization Machine.

    Reference:
        S Rendle, Factorization Machines, 2010.
    """

    def __init__(self, field_dims, embed_dim):
        super().__init__()
        self.embedding = FeaturesEmbedding(field_dims, embed_dim)
        self.linear = FeaturesLinear(field_dims)
        self.fm = FactorizationMachine(reduce_sum=True)

    def forward(self, x):
        """
        :param x: Long tensor of size ``(batch_size, num_fields)``
        """
        x = self.linear(x) + self.fm(self.embedding(x))
        return torch.sigmoid(x.squeeze(1))

@JavanTang, the full FM model is implemented in fm.py as the class FactorizationMachineModel shown above. The V parameters are implemented by FeaturesEmbedding: each feature's embedding vector is its latent vector v_i, and FactorizationMachine only computes the pairwise interactions from those embeddings (see the sketch below).
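
For completeness, here is a small self-contained sketch (not code from this repository; names such as pairwise_term and the random tensors are illustrative) showing that the square-of-sum minus sum-of-squares trick in FactorizationMachine.forward computes exactly the paper's pairwise term sum_{i<j} <v_i, v_j> x_i x_j. With one-hot fields, the x_i of the active features are all 1, and the rows returned by the embedding lookup (the role FeaturesEmbedding plays) are exactly the latent vectors v_i:

import torch

torch.manual_seed(0)
num_features, num_fields, embed_dim, batch_size = 20, 5, 3, 4

# The embedding table plays the role of the V matrix in the paper:
# one latent vector per feature.
V = torch.nn.Embedding(num_features, embed_dim)
x = torch.randint(0, num_features, (batch_size, num_fields))  # feature ids per field
v = V(x)  # (batch_size, num_fields, embed_dim): the v_i of the active features


def pairwise_term(v):
    """Explicit O(n^2 k) double sum over feature pairs, straight from the paper."""
    out = torch.zeros(v.size(0), 1)
    for i in range(v.size(1)):
        for j in range(i + 1, v.size(1)):
            out += (v[:, i] * v[:, j]).sum(dim=1, keepdim=True)
    return out


# O(n k) reformulation used in FactorizationMachine.forward.
square_of_sum = v.sum(dim=1) ** 2
sum_of_square = (v ** 2).sum(dim=1)
fast = 0.5 * (square_of_sum - sum_of_square).sum(dim=1, keepdim=True)

print(torch.allclose(pairwise_term(v), fast, atol=1e-5))  # True

Both paths give the same result; the reformulation just avoids the explicit double loop over feature pairs.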

@JavanTang (Author)

I see, thank you.
