When fitting als.FMRegression with l2_reg_w=0 or l2_reg_V=0 I get the following assertion failure:
Assertion failed: (isfinite(new_V_fl) && "V not finite"), function sparse_fit, file ffm_als_mcmc.c, line 218.
Abort trap: 6
I note that older documentation actually lists these as the defaults, but the current default values are 0.1. Regardless, is this expected to fail? I am using fairly basic MovieLens training data (ml1m).
There are a few situations where this is expected, for example if you have a feature that is zero for all examples. One could add input checks for these situations, or just always use some regularization (it could be very small).
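The all-zero-feature case mentioned above can be detected before fitting with a simple input check. A minimal sketch in plain Python (the `find_zero_columns` helper is hypothetical, not part of fastFM):

```python
def find_zero_columns(X):
    """Return indices of feature columns that are zero for every example.

    X is a dense row-major matrix (a list of rows). An all-zero column
    is one of the situations where an unregularized ALS update can
    produce non-finite values.
    """
    n_cols = len(X[0])
    zero_cols = []
    for j in range(n_cols):
        if all(row[j] == 0 for row in X):
            zero_cols.append(j)
    return zero_cols


# Example: column 1 is zero for all examples.
X = [
    [1.0, 0.0, 2.0],
    [0.5, 0.0, 0.0],
    [3.0, 0.0, 1.5],
]
print(find_zero_columns(X))  # → [1]
```

A check like this could raise a clear `ValueError` before the data ever reaches the C solver.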
Maybe adding a warning that using an FM without regularization is not recommended would be a sensible way to deal with this issue.
I think that regularization is (nearly) always necessary for good performance.
I agree it may not be a great idea, but given that it fails at an assertion, I wondered whether it could fail more gracefully (or at least less cryptically). Tracking down the cause is particularly challenging given that these values appear in one of the website examples, where the cause is much less obvious than when tuning parameters one by one.
The assertion is in the C code, which makes error handling quite a bit more difficult. I also don't like that the assert crashes Python, and I'm open to suggestions on how to do this better (ideally with a PR 😄).
Thanks for pointing out the example, this definitely needs to be fixed #34