I have encountered an issue where normalization decreases model precision given the same number of epochs. The same problem has already been reported in the LightFM repository: lyst/lightfm#413
As I understand it, there is no need to normalize one-hot encoded features (e.g., category).
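To illustrate the concern: when an item's feature row combines an identity feature with one-hot category indicators, L1 row normalization shrinks every indicator, which can dilute the per-feature learning signal. A minimal sketch (the feature layout here is illustrative, not the actual dataset):

```python
import numpy as np
from sklearn.preprocessing import normalize

# Hypothetical item-feature rows: per-item identity features
# (first 3 columns) concatenated with one-hot category
# indicators (last 2 columns).
item_features = np.array([
    [1, 0, 0, 1, 0],   # item 0, category A
    [0, 1, 0, 1, 0],   # item 1, category A
    [0, 0, 1, 0, 1],   # item 2, category B
], dtype=float)

# L1 row normalization rescales each indicator: every 1.0
# becomes 0.5 here, because each row sums to 2.
normalized = normalize(item_features, norm="l1", axis=1)
print(normalized[0])  # each 1.0 indicator is now 0.5
```

With more concatenated feature groups per row, the shrinkage gets stronger, which may explain the precision drop at a fixed epoch count.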
@dbalabka hi, thanks for the report. I'd say it really depends on the data, as I have observed the opposite effect in some cases. But I fully agree that it should be configurable. I have updated the LightFM wrapper with 2 new attributes for normalization:
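The actual attribute names live in the polara wrapper; a minimal sketch of how such toggles might look (class and attribute names here are hypothetical, not polara's real API):

```python
from sklearn.preprocessing import normalize

class LightFMWrapperSketch:
    """Illustrative only: shows configurable feature normalization,
    not polara's actual LightFMWrapper implementation."""

    def __init__(self, normalize_item_features=True,
                 normalize_user_features=True):
        # Defaults preserve the previous behavior (normalize);
        # set to False to pass raw one-hot features through.
        self.normalize_item_features = normalize_item_features
        self.normalize_user_features = normalize_user_features

    def _prepare_features(self, item_features, user_features):
        # L1 row normalization, applied only when enabled.
        if item_features is not None and self.normalize_item_features:
            item_features = normalize(item_features, norm="l1", axis=1)
        if user_features is not None and self.normalize_user_features:
            user_features = normalize(user_features, norm="l1", axis=1)
        return item_features, user_features
```

Keeping normalization on by default avoids silently changing results for existing users while letting the reporter opt out.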
@evfro thanks for the response. It has been a while since I created this ticket. Our recommendation system is already implemented using the original LightFM code, and we are running another phase of A/B testing, so this issue has become less relevant. Initially we had the idea of using polara, but decided to use LightFM directly.
We are going to perform more experiments. I suspect the normalization issue is related to the one-hot encoding approach. There are some discussions about this topic in the Scikit-learn repository.
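For reference, scikit-learn's OneHotEncoder already produces indicator columns on a common {0, 1} scale, which is the basis for the argument that further scaling of such columns is unnecessary:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Toy category column (values are illustrative).
categories = np.array([["books"], ["music"], ["books"]])

# Categories are sorted alphabetically, so the columns are
# (books, music); each row is a unit-scale indicator.
encoder = OneHotEncoder()
encoded = encoder.fit_transform(categories).toarray()
print(encoded)
```

Row-normalizing these columns afterwards only rescales the indicators without adding information, which matches the observation in lyst/lightfm#413.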
According to this code, LightFM features are normalized by default:
polara/polara/recommender/external/lightfm/lightfmwrapper.py, line 75 at commit 75ece1d