ValueError related to nlp.max_length: wordwise 0.0.4 #4
Comments
Hey @gsalfourn, thanks for reporting this issue. You can access the spaCy model via

extractor = Extractor()
extractor.nlp.max_length = 5000000  # or some big number, as long as you have RAM

Let me know if this fixes the error for you. I'll try to think of a way to remedy this warning in the next patch release.
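The error and the effect of the suggested fix can be illustrated with a self-contained sketch. `FakePipeline` below is a hypothetical stand-in that mimics spaCy's length guard; the real fix is just the two lines above on the actual `extractor.nlp` object.

```python
# Hypothetical stand-in for spacy.Language's length check, for
# illustration only; the real attribute is nlp.max_length.
class FakePipeline:
    def __init__(self, max_length=1_000_000):  # spaCy's default cap
        self.max_length = max_length

    def __call__(self, text):
        # spaCy raises a ValueError like this when the text is too long
        if len(text) > self.max_length:
            raise ValueError(
                f"Text of length {len(text)} exceeds maximum of {self.max_length}."
            )
        return text  # real spaCy would return a Doc here

nlp = FakePipeline()
long_text = "x" * 2_000_000

try:
    nlp(long_text)           # reproduces the reported ValueError
except ValueError:
    pass

nlp.max_length = 5_000_000   # the suggested fix: raise the cap
doc = nlp(long_text)         # now succeeds (given enough RAM with real spaCy)
```

Raising the cap only lifts spaCy's safety check; memory use still grows with document length, so the number should match available RAM.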
Thanks so much, that worked. Sorry to be a bother, but I have another question: how would one go about handling the token-length limit? In the library's code, that part may raise an error relating to the length of tokens. I don't know if it's possible to change that, or to swap in something that can take variable-length tokens, based on the tokenizer in use.
Hey @gsalfourn, thanks for the detailed discussion. Glad that the spaCy issue got sorted out. Here is my take on the discussion on the tokenizer:

The last solution seems like it would take time to implement, so I'll issue a quicker fix for now.
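As a hedged sketch of the stopgap idea (truncating inputs to the encoder's fixed window): 512 is the usual input cap for BERT-style encoders, and the whitespace split below is only a stand-in for a real subword tokenizer.

```python
# Hypothetical truncation helper; a real implementation would use the
# model's own tokenizer rather than str.split().
MAX_TOKENS = 512  # typical input cap for BERT-style encoders


def truncate_tokens(text, max_tokens=MAX_TOKENS):
    tokens = text.split()       # stand-in for subword tokenization
    return tokens[:max_tokens]  # drop anything past the model's limit


tokens = truncate_tokens("word " * 1000)
print(len(tokens))  # 512
```

Truncation silently discards the tail of long documents, which is why the thread also discusses tokenizer-aware alternatives as the longer-term solution.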
Closed via #7 for now.
Original issue: Was trying out the library, and ran into the following error. Just wondering where in the code to fix nlp.max_length.