
Why do I get better results with libfm? #28

Open · ibayer opened this issue Jan 14, 2016 · 8 comments
@ibayer (Owner) commented Jan 14, 2016

Why do I get better results with libfm?

Be careful if you use a regression model with a discrete, bounded target, such as the 1-5 star ratings of the MovieLens dataset.

libFM automatically clips the prediction values to the highest / lowest value in the training data.
This makes sense if you predict ratings with a regression model and evaluate with RMSE.

For example, if the regression score is > 5 it's certainly better to predict a 5 star rating than to use the raw regression value.
With fastFM you have to do the clipping yourself, because clipping is not always a good idea.

But it's easy to do if you need it:

    # clip values: bound predictions to the target range seen in training
    y_pred[y_pred > y_true.max()] = y_true.max()
    y_pred[y_pred < y_true.min()] = y_true.min()
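
Equivalently, in one call (assuming y_pred and y_true are NumPy arrays; np.clip bounds an array from both sides):

    import numpy as np

    # one-liner equivalent: bound predictions to the training target range
    y_pred = np.clip(y_pred, y_true.min(), y_true.max())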

Why do I not get exactly the same results with fastFM as with libFM?

FMs are non-linear models that use random initialization. This means that the solver might end up in a different local optimum if the initialization changes. We can use a random seed in fastFM to make individual runs comparable, but that doesn't help when comparing results between different implementations. You should therefore always expect small differences between fastFM and libFM predictions.
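
For example, fixing the seed makes individual fastFM runs reproducible. A minimal sketch (I'm assuming the random_state parameter as fastFM's seed here, with the MovieLens files used later in this thread):

    from sklearn.datasets import load_svmlight_file
    from fastFM import mcmc

    X_train, y_train = load_svmlight_file("ml1m_train.svml")
    X_test, y_test = load_svmlight_file("ml1m_test.svml")

    # same random_state -> same initialization -> identical predictions across runs
    fm = mcmc.FMRegression(n_iter=100, rank=8, random_state=123)
    y_pred = fm.fit_predict(X_train, y_train, X_test)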

@merrellb commented

I'm seeing some discrepancies between libFM and fastFM with MovieLens. Before diving into my observations, can you confirm that the following are equivalent:

    ./libFM -train ml1m_train.svml -test ml1m_test.svml -task r -dim '1,1,10' -iter 1000 -method mcmc

and

    import numpy as np
    from sklearn.datasets import load_svmlight_file
    from sklearn.metrics import mean_squared_error
    from fastFM import mcmc

    X_train, y_train = load_svmlight_file("ml1m_train.svml")
    X_test, y_test = load_svmlight_file("ml1m_test.svml")

    # warm-start pattern: initialize with n_iter=0, then draw one MCMC sample per call
    fm = mcmc.FMRegression(n_iter=0, rank=10)
    fm.fit_predict(X_train, y_train, X_test)
    for i in range(1000):
        y_pred = fm.fit_predict(X_train, y_train, X_test, n_more_iter=1)
        # clip predictions to the 1-5 star range
        y_pred[y_pred > 5] = 5
        y_pred[y_pred < 1] = 1
        print(i, np.sqrt(mean_squared_error(y_test, y_pred)))

@ibayer (Owner) commented Jan 29, 2016

I don't see a difference, but please check that the init_stdev parameter is the same.
Please also have a look at my second comment above; it could explain the small differences you observe.

@arogozhnikov commented

Hi, Immanuel!
I was comparing different libFM implementations (I was testing MCMC for libFM and fastFM in particular).

Unfortunately, the results for fastFM are not super optimistic:
http://arogozhnikov.github.io/2016/02/15/TestingLibFM.html

I only found this topic afterwards, so honestly I wasn't thinking about clipping values.
That trick may give some improvement in regression, but libFM also easily wins in classification.

Do you know a possible reason? Does fastFM use different priors or something else?

@chezou (Contributor) commented Feb 20, 2016

@arogozhnikov BTW, you need standardization, especially for pyFM: coreylynch/pyFM#3 (comment)

@arogozhnikov commented

@chezou All the features are dummy (0-1) and the table should stay sparse, so for the tests I am running that step is neither needed nor possible.

@ibayer (Owner) commented Feb 20, 2016

@arogozhnikov
Great comparison, I have a few suggestions that could make the evaluation even more useful for other people.

  1. Provide the exact version of the software that you are testing.
  2. You find that libFM is faster than fastFM; I fixed a runtime regression bug in ibayer/fastFM-core@d57a866. Is this still true for the most recent release?
  3. Use clipping to make the performance comparison more meaningful (it makes quite a difference in some cases).
  4. You state for fastFM "supports linux, mac os (though some issues with mac os)". Is this still true with the binaries that we now have?
  5. Make multiple runs with different seeds to give the reader an idea of the randomness in the results (see the sketch below).
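
A minimal sketch of point 5, reusing the fastFM API from this thread (data loading as in the earlier comments; the seed values are arbitrary):

    import numpy as np
    from sklearn.datasets import load_svmlight_file
    from sklearn.metrics import mean_squared_error
    from fastFM import mcmc

    X_train, y_train = load_svmlight_file("ml1m_train.svml")
    X_test, y_test = load_svmlight_file("ml1m_test.svml")

    # repeat the same experiment with several seeds to expose run-to-run variance
    rmses = []
    for seed in [1, 2, 3, 4, 5]:
        fm = mcmc.FMRegression(n_iter=100, rank=10, random_state=seed)
        y_pred = fm.fit_predict(X_train, y_train, X_test)
        rmses.append(np.sqrt(mean_squared_error(y_test, y_pred)))
    print("RMSE: %.4f +/- %.4f" % (np.mean(rmses), np.std(rmses)))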

As is, I'm not convinced that libFM is faster and performs better than fastFM for MCMC regression. I have done less comprehensive comparisons for MCMC classification, but the algorithm / prior should be the same in both libraries. I would be interested in looking into it if you can clearly show that libFM systematically dominates fastFM for MCMC classification.

@arogozhnikov commented

@ibayer
Thanks for the comments.

  1. Yup, you're right.
  2./3. OK, I'll give it a try.
  4. I don't have a clean Mac OS (it was unfortunately not trivial to install), but I asked a friend to try and it seems pip install works fine on Mac OS. (Also, I see now that Travis tests Mac OS, so I'll remove this remark.)
  5. This is the hard part; it will take forever, and I don't see a random seed in libFM.
     For smaller tests I can just take different random subsets of the data for training. Would that be convincing enough?

@ibayer (Owner) commented Feb 28, 2016

@arogozhnikov It's possible to use a random seed with libFM.
"seed", "integer value, default=None"
https://github.com/srendle/libfm/blob/master/src/libfm/libfm.cpp#L93
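
So something like this should make libFM runs comparable too (a sketch reusing the command from earlier in this thread; -seed is the parameter quoted above):

    ./libFM -train ml1m_train.svml -test ml1m_test.svml -task r \
            -dim '1,1,10' -iter 1000 -method mcmc -seed 123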
