I train my LightFM model with no features for either users or products (a pure interaction matrix built from implicit interactions) and score it using mean average precision. In one of my experiments, I tried adding a single one-hot-encoded feature for every user, which added ~10 columns to the user features matrix (built with the `Dataset` class). Unfortunately, after this change my score dropped by roughly half. Interestingly, when I set `normalize=False` in the `build_user_features` method, the score returned approximately to its previous value.
Am I doing something wrong, or is it a well-known effect that the score drastically decreases after normalization? If so, why?
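For reference, a minimal sketch of the feature-building step described above (the toy IDs and the "segment" values are illustrative assumptions; only the `Dataset` / `build_user_features` usage and the `normalize` flag come from the issue):

```python
from lightfm.data import Dataset

# Toy stand-ins for the real data (assumptions, not from the issue).
interactions_data = [("u1", "i1"), ("u1", "i2"), ("u2", "i2")]  # implicit events
user_segments = {"u1": "segment_a", "u2": "segment_b"}          # one categorical feature per user

dataset = Dataset()
dataset.fit(
    users=user_segments.keys(),
    items={item for _, item in interactions_data},
    user_features=set(user_segments.values()),  # ~10 one-hot columns in the real data
)

(interactions, _) = dataset.build_interactions(interactions_data)

# With normalize=True (the default) each user's feature row is rescaled to
# sum to 1, so the per-user identity feature and the segment feature each
# end up with weight 0.5. With normalize=False both keep weight 1.0 -- the
# variant that restored the score.
user_features = dataset.build_user_features(
    ((user, [segment]) for user, segment in user_segments.items()),
    normalize=False,
)
```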
Parameters for my model: `no_components=150`, `learning_rate=0.06`, `loss='warp'`. I trained the model with ~750,000 users and ~200,000 products.
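The corresponding training step with those parameters would look like this (`epochs` and `num_threads` are illustrative assumptions; the issue does not state them):

```python
from lightfm import LightFM

model = LightFM(no_components=150, learning_rate=0.06, loss="warp")
model.fit(
    interactions,                 # the ~750k-user x ~200k-item implicit matrix
    user_features=user_features,  # omit this argument for the features-free baseline
    epochs=10,
    num_threads=4,
)
```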