Score decreases when feature normalization is enabled #413

dmirecki opened this issue Feb 4, 2019 · 2 comments

Hi,

I train my LightFM model with no features for either users or products (a pure interaction matrix) built from implicit interactions. I score the model using mean average precision. In one of my experiments, I tried adding a single feature to every user (one-hot encoded, so it adds ~10 columns to the user features matrix, which I build with the Dataset class). Unfortunately, after this change the score dropped by roughly a factor of two. Interestingly, when I set normalize=False in the build_user_features method, the score returned (approximately) to its previous value.

Am I doing something wrong, or is it a known effect that the score drops drastically after normalization? Why?

Model parameters: no_components=150, learning_rate=0.06, loss='warp'. I trained on ~750,000 users and ~200,000 products.
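For context, here is a minimal sketch of the setup described above (not the original code): the IDs, the single one-hot feature, and the interaction data are hypothetical placeholders; only the Dataset / build_user_features / LightFM calls and the listed hyperparameters come from this issue.

```python
import numpy as np
from lightfm import LightFM
from lightfm.data import Dataset

# Hypothetical IDs and a single one-hot user feature with ~10 possible values.
user_ids = [f"user_{i}" for i in range(1000)]
item_ids = [f"item_{i}" for i in range(500)]
feature_values = [f"segment_{i}" for i in range(10)]

dataset = Dataset()
dataset.fit(
    users=user_ids,
    items=item_ids,
    user_features=feature_values,  # extra columns appended to the per-user identity features
)

# Implicit interactions as (user_id, item_id) pairs; random placeholder data here.
rng = np.random.default_rng(0)
pairs = [(user_ids[rng.integers(1000)], item_ids[rng.integers(500)]) for _ in range(10_000)]
interactions, _ = dataset.build_interactions(pairs)

# One feature per user; normalize=False leaves each row un-scaled,
# which is the setting that restored the original score in this issue.
user_features = dataset.build_user_features(
    ((u, [feature_values[rng.integers(10)]]) for u in user_ids),
    normalize=False,
)

model = LightFM(no_components=150, learning_rate=0.06, loss="warp")
model.fit(interactions, user_features=user_features, epochs=10, num_threads=4)
```

With normalize=True, build_user_features scales each user row to sum to 1, so adding one extra feature cuts the weight of the per-user identity column from 1.0 to 0.5; that dilution of the per-user embedding relative to the feature-free baseline may be what changes the score here.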

@maciejkula (Collaborator) commented

This is a little unusual, but nothing I would worry about as long as you can find a solution through other settings.

@dbalabka commented

This is probably related to #486.
