java.lang.ClassNotFoundException: edu.stanford.nlp.pipeline.StanfordCoreNLPServer #2
Comments
Place the contents of the directory "stanford-corenlp-full-2018-10-05", extracted from the parser archive downloaded in Step 7, into ./rule_based/parser/.
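As a sketch of that step (the stand-in jar name below is hypothetical; the real files come from the Step 7 download), the key point is that the jars must end up directly inside ./rule_based/parser/, not one directory deeper:

```shell
# Simulate the extracted download with a stand-in jar (hypothetical name).
mkdir -p stanford-corenlp-full-2018-10-05 rule_based/parser
touch stanford-corenlp-full-2018-10-05/stanford-corenlp.jar

# Copy the directory's *contents* (not the directory itself) into parser/.
cp stanford-corenlp-full-2018-10-05/* rule_based/parser/

ls rule_based/parser/   # the jar now sits directly under parser/
```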
Hi there, thanks for the quick response. The server is running. But now I run this command in a separate terminal:

```
python3 ./learning_based/paralleloie.py -i data/pubmedabstracts.json
```

And I get this response:

```
Initializing Parallel Triple Extraction.
Loading dependencies and dataset...
Traceback (most recent call last):
  File "./learning_based/paralleloie.py", line 35, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/predictors/__init__.py", line 9, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/predictors/predictor.py", line 12, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/__init__.py", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/__init__.py", line 10, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/ccgbank.py", line 9, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/dataset_reader.py", line 8, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/instance.py", line 3, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/__init__.py", line 7, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/array_field.py", line 10, in <module>
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/array_field.py", line 50, in ArrayField
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 88, in overrides
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 114, in _overrides
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 135, in _validate_method
  File "/usr/local/lib/python3.7/site-packages/overrides/signature.py", line 93, in ensure_signature_is_compatible
  File "/usr/local/lib/python3.7/site-packages/overrides/signature.py", line 288, in ensure_return_type_compatibility
TypeError: ArrayField.empty_field: return type
```

It doesn't seem like anything from my side.
Try:
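For context (an assumption on my part, not necessarily the fix suggested here): this TypeError is commonly reported when a newer release of the `overrides` package, which strictly checks overriding methods' signatures and return types, is installed alongside an older allennlp. Pinning an older `overrides` often resolves it:

```shell
# Hypothetical workaround: install an overrides release that predates the
# strict return-type checks tripped by allennlp's ArrayField.empty_field.
pip install "overrides<4"
```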
For this command:

```
python3 ./learning_based/paralleloie.py -i data/pubmedabstracts.json
```

I eventually get an error:

```
Initializing Parallel Triple Extraction.
Loading dependencies and dataset...
Resource punkt not found.
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/PY3/english.pickle
Searched in:
```
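A likely fix (an assumption, based on the NLTK message itself): the 'punkt' tokenizer model has to be downloaded once before the script's sentence tokenization can work:

```shell
# One-time setup: fetch NLTK's punkt tokenizer data into ~/nltk_data.
python3 -c "import nltk; nltk.download('punkt')"
```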
Thanks! The final command:

```
python3 ./rule_based/extract_refine.py -i extracted_triples_learning.csv
```

returns:

```
[nltk_data] Downloading package stopwords to /Users/nony/nltk_data...
```
Change line 361 of ./rule_based/extract_refine.py to:
Still getting an error:

```
[nltk_data] Downloading package stopwords to /Users/nony/nltk_data...
```
Ok, the triples have been extracted, thanks.
When trying to run the command:

```
java -mx6g -cp "./rule_based/parser/*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 10000 -timeout 30000
```

I get:

```
Error: Could not find or load main class edu.stanford.nlp.pipeline.StanfordCoreNLPServer
Caused by: java.lang.ClassNotFoundException: edu.stanford.nlp.pipeline.StanfordCoreNLPServer
...
```
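That error usually means the `-cp "./rule_based/parser/*"` glob matches no jar files, typically because the downloaded directory was placed inside parser/ rather than its contents. A minimal illustration (hypothetical paths and jar name):

```shell
# Java's "-cp dir/*" picks up jars directly inside dir, not in subdirectories.
mkdir -p /tmp/cp-demo/parser/stanford-corenlp-full-2018-10-05
touch /tmp/cp-demo/parser/stanford-corenlp-full-2018-10-05/corenlp.jar

# Wrong layout: no *.jar directly under parser/, so the classpath glob matches
# nothing and Java cannot load edu.stanford.nlp.pipeline.StanfordCoreNLPServer.

# Fixed layout: move the contents up one level so the glob can see the jars.
mv /tmp/cp-demo/parser/stanford-corenlp-full-2018-10-05/* /tmp/cp-demo/parser/
ls /tmp/cp-demo/parser/*.jar   # now lists corenlp.jar
```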