Failed copying input tensor #20
Comments
Hi Carlos, from the output it looks as though you are not using the latest version of Coreferee. Please update your Python version to 3.9 and your spaCy and coreferee packages as well as the spaCy and coreferee models, then try again and let me know if the problem still occurs. Best wishes, Richard
I upgraded spaCy to version 3.1.0 and Coreferee, but I have Python 3.8.1 and can't upgrade Python in the container because I don't have admin access, and the error continues. Could you tell me what machine specification (GPU) you used to train Coreferee, and how many files you used for training?
Hi Carlos, unfortunately only Python 3.9 is supported and there is no way the latest version of Coreferee will work with Python 3.8. I believe the problem you are having with tensorflow is specific to the previous version of Coreferee. Perhaps you can download Python 3.9 and install it as a local user? There are no specific hardware requirements for training: the more hardware you have, the quicker training will be. Equally, the more training examples you have the better, although there is no specific minimum.
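Since a too-old interpreter was the root cause here, it can save time to verify the running Python version before installing. This is a minimal stdlib sketch; `meets_minimum_python` is a hypothetical helper for illustration, not part of Coreferee:

```python
import sys


def meets_minimum_python(minimum=(3, 9)):
    """Return True if the running interpreter is at least `minimum` (major, minor)."""
    return sys.version_info[:2] >= minimum


if __name__ == "__main__":
    if not meets_minimum_python():
        # The latest Coreferee discussed in this thread requires Python 3.9
        print("Python 3.9+ is required; found", sys.version.split()[0])
```

When admin access is unavailable, a per-user interpreter (e.g. installed via pyenv or a user-local build) sidesteps the container restriction mentioned above.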
Thanks, I upgraded Python (3.9) and TensorFlow (2.5.0) and training works now. How many sentences did you use to train the English model? I know the English model was trained on 393,564 words, but how many sentences does that represent?
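The thread does not give the English model's sentence count, but a rough back-of-the-envelope estimate can be derived from the word count. The average of 20 words per sentence below is an assumption for illustration, not a figure from Coreferee:

```python
def estimate_sentences(word_count, avg_words_per_sentence=20.0):
    """Rough sentence-count estimate from a corpus word count.

    `avg_words_per_sentence` is an assumed average; real corpora vary
    widely by genre and language.
    """
    return round(word_count / avg_words_per_sentence)


# English training corpus size mentioned in this thread
print(estimate_sentences(393_564))
```

With the assumed average of 20 words per sentence, 393,564 words would correspond to roughly 20,000 sentences; the true figure depends on the corpus.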
Hi, I obtained Spanish-language data as *.conll files, converted it to the *.ann format, and tried to train Coreferee on it after adapting the rules for my language. The data comprises roughly 3,000 files, but when I try to train the model I get an error:
I think it is a memory problem. I have a GPU with 8 GB.
Note: when I train Coreferee with 100 files from my data it works very well, but with 3,000 or 200 files I get the error.
Does anyone know about this error and its solution? Thanks.
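Since training succeeds with 100 files but fails with the full set, one way to narrow down whether the failure is data-size-related is to feed the corpus through in fixed-size batches and find the size at which the error first appears. This is a hypothetical sketch; `batch_paths` is not a Coreferee API, and Coreferee's trainer may not support incremental runs:

```python
def batch_paths(paths, batch_size=100):
    """Split a list of training-file paths into fixed-size batches.

    Useful for bisecting a large corpus to find the size at which
    an out-of-memory error first occurs.
    """
    return [paths[i:i + batch_size] for i in range(0, len(paths), batch_size)]


# Hypothetical file names standing in for the ~3,000 *.ann files above
corpus = [f"doc_{n}.ann" for n in range(3000)]
batches = batch_paths(corpus, batch_size=100)
print(len(batches))  # 3,000 files in batches of 100 -> 30 batches
```

If each batch loads cleanly on its own, the problem is more likely total memory consumption during training than a malformed file, which would point toward reducing batch size or GPU memory use rather than fixing the data.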