User / item embeddings NaN with large training set #130

Apologies if this is user error, but I appear to be getting NaN embeddings from LightFM and I'm unsure what I could have done wrong. I followed the documentation and have raised the issue on SO:
http://stackoverflow.com/questions/40967226/lightfm-user-item-producing-nan-embeddings
Basically, I have a large dataset where collaborative filtering works fine, but when user / item features are provided, the model produces NaN embeddings.

Comments
What are the values in the feature matrices? Have you tried normalizing them (for example, to be between 0 and 1)?
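For anyone landing here later, a minimal sketch of one way to do that scaling on a scipy CSR feature matrix. It assumes all feature values are non-negative and simply divides by the global maximum; the names are illustrative, not from the original code:

```python
import numpy as np
import scipy.sparse as sp

def scale_to_unit(features):
    """Return a copy of a CSR matrix with nonzero values scaled into [0, 1].

    Assumes non-negative values; divides everything by the global maximum.
    """
    scaled = features.copy().astype(np.float32)
    if scaled.nnz:
        scaled.data /= scaled.data.max()
    return scaled

# Toy example: random non-negative features in [0, 5).
items_features = sp.random(100, 50, density=0.1, format="csr") * 5.0
print(scale_to_unit(items_features).data.max())  # -> 1.0
```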
Thanks for the response! I have, yes, using the learn function. Here are the first rows from the item and user feature CSR matrices:

```
items_features[0, items_features[0].nonzero()[1]].todense()
matrix([[ 0.2189845 ,  0.18301879,  0.29823944,  0.19250721,  0.62113589,
          0.28761694,  0.4387733 ,  0.15228976,  0.32908452]], dtype=float32)

members_features[0, members_features[0].nonzero()[1]].todense()
matrix([[ 0.01500955,  0.00687691,  0.00488463,  0.03807613,  0.01714612,
          0.06524359,  0.01370857,  0.0203032 ,  0.0091073 ,  0.01899276,
          0.0170573 ,  0.03180252,  0.03951597,  0.03765749,  0.02067481,
          0.00863998,  0.03003284,  0.010614  ,  0.01699004,  0.02135187,
          0.02568188,  0.02606232,  0.01938645,  0.06161183,  0.0126634 ,
          0.01294042,  0.00720311,  0.030777  ,  0.01884086,  0.01178526,
          0.05592889,  0.02763181,  0.00907691,  0.01116292,  0.01343661,
          0.01717991,  0.01464464,  0.00726902,  0.01353738,  0.00541887,
          0.01728139,  0.01083446,  0.04138919,  0.01978991,  0.05642271,
          0.00835726]], dtype=float32)
```
Just for my understanding: what does your indexing do? Are you expecting these matrices to be dense?
That's right. What I'm trying to do with that indexing is go from items_features (a CSR matrix of shape (n_items, n_features)) to a dense matrix of the values in the first row, just to show that the non-zero values are normalized between 0 and 1. items_features[0] gives: <1x2790 sparse matrix of type '<type 'numpy.float32'>'
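To spell out that indexing for anyone following along, a small self-contained sketch with a toy stand-in matrix:

```python
import numpy as np
import scipy.sparse as sp

# Toy stand-in for the real items_features matrix.
items_features = sp.random(10, 2790, density=0.01, format="csr", dtype=np.float32)

# items_features[0] is a 1 x n_features sparse row; .nonzero()[1] returns the
# column indices of its nonzero entries, and indexing the row with them pulls
# out just those values as a small dense matrix.
nonzero_cols = items_features[0].nonzero()[1]
print(items_features[0, nonzero_cols].todense())
```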
Cool, I understand. Can you try reducing the learning rate and/or reducing the scale of the nonzero items even further?
You could try turning off regularization as well to try to narrow the problem down.
Cool, will try those and come back to you. Cheers.
You're also using a lot of parallelism: this may cause problems if a lot of your users or items have the same features. Let me know what you find!
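In LightFM terms, those three suggestions map onto the `learning_rate` constructor argument, the `user_alpha` / `item_alpha` L2 penalties, and the `num_threads` argument of `fit`. A rough sketch with illustrative values and toy stand-ins shaped like the data described in this thread:

```python
import numpy as np
import scipy.sparse as sp
from lightfm import LightFM

# Toy stand-ins, roughly matching the thread: ~10x more users than items.
n_users, n_items = 1000, 100
interaction_matrix = sp.random(n_users, n_items, density=0.01, format="coo")
interaction_matrix.data[:] = 1.0  # implicit positive interactions
members_features = sp.random(n_users, 46, density=0.5, format="csr", dtype=np.float32)
items_features = sp.random(n_items, 9, density=0.5, format="csr", dtype=np.float32)

model = LightFM(
    no_components=30,    # embedding dimensionality (illustrative)
    learning_rate=0.01,  # default is 0.05; try progressively smaller values
    user_alpha=0.0,      # L2 penalty on user features; 0.0 disables it
    item_alpha=0.0,      # L2 penalty on item features; 0.0 disables it
)
model.fit(
    interaction_matrix,
    user_features=members_features,
    item_features=items_features,
    epochs=10,
    num_threads=4,       # less parallelism, per the advice above
)
```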
Any luck?
Unfortunately not. I tried reducing the scale further (0 to 0.1), reducing the learning rate (several values), removing regularisation, and running with only 4 threads. Strangely, I can get a result with either only user or only item features, but not both. I'm not sure if this is a factor, but I have many more users than items (around 10x).
Can you try the newest version (1.12)? It has numerical stability improvements which may resolve your problem. |
In the new version I get:
ValueError: Not all estimated parameters are finite, your model may have diverged. Try decreasing the learning rate.
The learning rate is 0.001 and I have tried down to 0.00001. I have normalized the features between 0 and 1, but also tried 0 to 0.1 and 0 to 0.01. My datasets look like: [screenshots of items_features, members_features, and interaction_matrix attached]
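As context, that error appears to come from a finiteness check on the fitted parameters. You can run the same kind of check yourself after fitting, since the fitted latent factors are exposed as `model.user_embeddings` and `model.item_embeddings`:

```python
import numpy as np

# `model` is a fitted LightFM instance (e.g. from the sketch above).
# Check whether any latent factors diverged to NaN or infinity,
# which is roughly what the library's own check does.
print(np.isfinite(model.user_embeddings).all())
print(np.isfinite(model.item_embeddings).all())
```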
Hmm. I may have to have a look at your code and data. Can you email me at lightfm@zoho.com?
It would be best if you could reproduce the problem using synthetic data (or a subset of your data that you don't mind sharing).
Is this still a problem? I'd really like to help if it is!
Just had a chance to revisit this. When I recreated my matrices with random floats and ints, with the same value scale and the same sparseness / shapes, I didn't encounter the same problem. After investigating, I discovered a bunch of empty rows in my member / item features. It seems the model can handle a few, but in my case there were 700 or so, and that was enough to push the parameters to infinity. Is this expected behavior?
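For reference, a small sketch of how to find such empty rows, plus one common workaround: guaranteeing every row at least one nonzero entry by stacking an identity (per-user / per-item indicator) block next to the metadata features. The helper names are made up for illustration:

```python
import numpy as np
import scipy.sparse as sp

def empty_rows(features):
    """Indices of CSR rows that have no nonzero entries."""
    return np.where(features.getnnz(axis=1) == 0)[0]

def with_identity(features):
    """Prepend an identity block so every row has at least one feature."""
    eye = sp.identity(features.shape[0], format="csr", dtype=features.dtype)
    return sp.hstack([eye, features], format="csr")

# Usage sketch:
# print(len(empty_rows(members_features)))  # e.g. the ~700 rows mentioned above
# members_features = with_identity(members_features)
```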
No, this shouldn't be the case. My first suspicion was that I don't zero the representation buffers, but they are zeroed.
If you can construct a minimal test case that manifests this problem, I would be happy to have a look and solve this.
Hi, sorry I was so slow on this. I've done a bunch more testing and found that when using very sparse features for users and items, the learning rate needs to be very small to prevent divergence. This was an issue previously, possibly because of the numerical stability issues you mentioned? Anyway, after upgrading and retesting, I can get the model to fit by adjusting the learning rate. Thanks!
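One further knob that may help here (an inference, not something confirmed in this thread): LightFM's adadelta learning schedule adapts per-parameter step sizes and tends to be less sensitive to the initial learning rate than the default adagrad schedule:

```python
from lightfm import LightFM

# adadelta adapts step sizes automatically, which can make training less
# sensitive to the initial learning rate than the default 'adagrad'.
model = LightFM(learning_schedule='adadelta')
```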
No worries, glad to see you found a solution.
Hi Maciej, I have tried all of these values for the learning rate: [0.05, 0.025, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001], but it still gives the same error. Please help!
Why did I get all zeros in both of the embedding matrices?
Me too. Even with an extremely small learning rate, the error still pops up. I can't figure out why.