han_my_functions.py --> TypeError: float() argument must be a string or a number, not 'NoneType' #4

Open
vedtam opened this issue Jan 26, 2021 · 4 comments

vedtam commented Jan 26, 2021

Hi,

First, let me thank you for the detailed and really well-explained HAN example! I had been looking for days for such a resource to get up and running with attention visualisation in NLP.

I have prepared my data as in the description, and everything runs smoothly until I get to training with han.fit_generator(...), which stops and throws:

[Screenshot: traceback ending in "TypeError: float() argument must be a string or a number, not 'NoneType'"]

I've noticed that it has something to do with the metrics, but I couldn't figure out where to go from there.
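For what it's worth, the message itself is just what Python raises when a None value ends up being cast to float; my rough guess (not the actual code path) is that some metric value never gets filled in and then gets converted, roughly like this:

```python
# Rough guess, not the repo's actual code path: a metric value that is never
# filled in stays None, and casting it to float raises exactly this error.
logs = {}                        # hypothetical logs dict missing the expected key
val = logs.get('some_metric')    # -> None
float(val)                       # TypeError: float() argument must be a string or a number, not 'NoneType'
```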

Btw, is there a specific version of Keras and TensorFlow I should run this example with? Currently I'm on tensorflow 2.4.1 and keras 2.4.3 (both probably the latest).

Thanks!!

Tixierae (Owner) commented Jan 26, 2021

Hi, thank you for your interest! The code was tested with Python 3.5.5 and 3.6.1, tensorflow-gpu 1.5.0, Keras 2.2.0, and gensim 3.2.0. I guess the version mismatch is probably the problem (Keras has since been integrated into TF).
I should update the code regularly, but I don't have the time. If you end up updating it, feel free to make a pull request.
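If it helps, here's a quick way to check what you have installed against the versions above (a minimal sketch, assuming the standard imports):

```python
# Minimal version check, to compare against the versions the code was tested with.
import sys
import tensorflow as tf
import keras
import gensim

print("python    :", sys.version.split()[0])   # tested with 3.5.5 / 3.6.1
print("tensorflow:", tf.__version__)           # tested with 1.5.0 (tensorflow-gpu)
print("keras     :", keras.__version__)        # tested with 2.2.0
print("gensim    :", gensim.__version__)       # tested with 3.2.0
```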

vedtam commented Jan 26, 2021

Thanks so much for the details. I've created an env with the above dependencies and now things work as expected. I've been trying to add my own data, which consists of 8 categories (instead of the default 5). My dataset contains 4000 training and ~380 test samples.

After preprocessing my data (using your preprocessing script), I can load the word vectors and train a model, but when analysing the results another error pops up: max() arg is an empty sequence, and all the acc and loss plots are blank:

[Screenshot: traceback ending in "max() arg is an empty sequence", alongside blank acc and loss plots]
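As far as I can tell, that error is just max() being called on an empty sequence, which would fit with nothing having been recorded for the plots; a trivial illustration (my assumption, not your code):

```python
# My assumption, not the repo's code: if nothing gets logged during training,
# taking max() of the empty history raises exactly this error, and the
# acc/loss plots stay blank.
recorded_accs = []           # hypothetical: no accuracy values were recorded
best = max(recorded_accs)    # ValueError: max() arg is an empty sequence
```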

If I proceed with re-initialising and training a model to get a visualisation of the document embeddings, I hit an error again: operands could not be broadcast together with shapes (8,8) (7,7):

[Screenshot: traceback ending in "operands could not be broadcast together with shapes (8,8) (7,7)"]

I updated n_cats=8 at the start and have restarted the notebook several times, but it still complains about incompatible shapes (8,8) (7,7). I'm wondering, is this because of how the batches are created programmatically? Maybe some documents in a batch don't have the same size? Pff, I can't figure it out.

Tixierae (Owner) commented

Did you find out what the problem was? It's difficult to troubleshoot this issue without a reproducible example, and I am very busy these days anyway, but could it possibly be due to your labels following a zero-based index? By default, they are assumed to follow a one-based index. Change this parameter if not:

one_based_labels = True  # Remember to change this when changing dataset! E.g., should be False for IMDB.
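Just to illustrate the off-by-one this can cause (a minimal sketch, not the exact code in the repo): if your labels are already zero-based but one_based_labels is left at True, one class effectively disappears, which would be consistent with an (8,8) vs (7,7) mismatch:

```python
import numpy as np

# Minimal sketch, not the exact code in the repo: zero-based labels (0..7)
# shifted down by one leave only 7 valid classes, while the model expects 8.
raw_labels = np.arange(8)                      # 0..7, i.e. zero-based labels
one_based_labels = True                        # wrong setting for zero-based labels
labels = raw_labels - 1 if one_based_labels else raw_labels

n_cats = 8
valid = labels[(labels >= 0) & (labels < n_cats)]
print(sorted(set(valid.tolist())))             # [0, 1, 2, 3, 4, 5, 6] -> 7 classes, not 8
```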

vedtam commented Feb 9, 2021

@Tixierae thanks! I figured it out and got the notebook working. I'm wondering, why is there so little information about this approach for getting the weights over words and thus being able to explain an NLP deep learning model's behaviour (after days of searching I found only yours and one more)? Is it really so obvious that anyone (but me) can implement it? Or is it already outdated, along with these deep learning NLP models (maybe there's a better way now, like transformers or something)?

Thanks!
