
java.lang.ClassNotFoundException: edu.stanford.nlp.pipeline.StanfordCoreNLPServer #2

Open · nondefo opened this issue May 3, 2022 · 9 comments

@nondefo commented May 3, 2022

```
Error: Could not find or load main class edu.stanford.nlp.pipeline.StanfordCoreNLPServer
Caused by: java.lang.ClassNotFoundException: edu.stanford.nlp.pipeline.StanfordCoreNLPServer
...
```

when trying to run the command:

```
java -mx6g -cp "./rule_based/parser/*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 10000 -timeout 30000
```

@OIELILLIE (Owner) commented:

Place the contents of the directory `stanford-corenlp-full-2018-10-05`, extracted from the parser downloaded in Step 7, into `./rule_based/parser/`.
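When those jars are absent from the wildcard classpath, `java` reports exactly the `ClassNotFoundException` above. A minimal sketch of a pre-flight check (the directory layout is taken from this thread; the `check_corenlp_dir` helper name is my own):

```shell
# check_corenlp_dir: report whether a directory contains the CoreNLP jars
# that the `-cp "<dir>/*"` wildcard expects (hypothetical helper).
check_corenlp_dir() {
  dir="$1"
  if ls "$dir"/stanford-corenlp-*.jar >/dev/null 2>&1; then
    echo "jars found"
  else
    echo "no jars in $dir"
  fi
}
```

If `check_corenlp_dir ./rule_based/parser` prints "jars found", the server command above should be able to resolve `edu.stanford.nlp.pipeline.StanfordCoreNLPServer`.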

@nondefo (Author) commented May 3, 2022

Hi there,

Thanks for the quick response. The server is running. But now I run this command in a separate terminal:

```
python3 ./learning_based/paralleloie.py -i data/pubmedabstracts.json
```

And I get this response:

```
Initializing Parallel Triple Extraction. Loading dependencies and dataset...

Traceback (most recent call last):
  File "./learning_based/paralleloie.py", line 35, in <module>
    from allennlp.predictors.predictor import Predictor
  File "/usr/local/lib/python3.7/site-packages/allennlp/predictors/__init__.py", line 9, in <module>
    from allennlp.predictors.predictor import Predictor
  File "/usr/local/lib/python3.7/site-packages/allennlp/predictors/predictor.py", line 12, in <module>
    from allennlp.data import DatasetReader, Instance
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/__init__.py", line 1, in <module>
    from allennlp.data.dataset_readers.dataset_reader import DatasetReader
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/__init__.py", line 10, in <module>
    from allennlp.data.dataset_readers.ccgbank import CcgBankDatasetReader
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/ccgbank.py", line 9, in <module>
    from allennlp.data.dataset_readers.dataset_reader import DatasetReader
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/dataset_reader.py", line 8, in <module>
    from allennlp.data.instance import Instance
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/instance.py", line 3, in <module>
    from allennlp.data.fields.field import DataArray, Field
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/__init__.py", line 7, in <module>
    from allennlp.data.fields.array_field import ArrayField
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/array_field.py", line 10, in <module>
    class ArrayField(Field[numpy.ndarray]):
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/array_field.py", line 50, in ArrayField
    @overrides
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 88, in overrides
    return _overrides(method, check_signature, check_at_runtime)
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 114, in _overrides
    _validate_method(method, super_class, check_signature)
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 135, in _validate_method
    ensure_signature_is_compatible(super_method, method, is_static)
  File "/usr/local/lib/python3.7/site-packages/overrides/signature.py", line 93, in ensure_signature_is_compatible
    ensure_return_type_compatibility(super_type_hints, sub_type_hints, method_name)
  File "/usr/local/lib/python3.7/site-packages/overrides/signature.py", line 288, in ensure_return_type_compatibility
    f"{method_name}: return type `{sub_return}` is not a `{super_return}`."
TypeError: ArrayField.empty_field: return type None is not a <class 'allennlp.data.fields.field.Field'>.
```

It doesn't seem like anything on my side. Could you let me know what the issue could be?

@OIELILLIE (Owner) commented:

Try:

```
pip install overrides==3.1.0
```

@nondefo (Author) commented May 4, 2022

For this command:

```
python3 ./learning_based/paralleloie.py -i data/pubmedabstracts.json
```

I eventually get an error:

```
Initializing Parallel Triple Extraction. Loading dependencies and dataset...
Done
Coreference resolution in progress...
100%|██████████| 38869/38869 [3:11:42<00:00, 3.38it/s]
Done
Triple extraction in progress...
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "./learning_based/paralleloie.py", line 130, in <module>
    sent_text = nltk.sent_tokenize(text)
  File "/usr/local/lib/python3.7/site-packages/nltk/tokenize/__init__.py", line 105, in sent_tokenize
    tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
  File "/usr/local/lib/python3.7/site-packages/nltk/data.py", line 868, in load
    opened_resource = _open(resource_url)
  File "/usr/local/lib/python3.7/site-packages/nltk/data.py", line 993, in _open
    return find(path_, path + ['']).open()
  File "/usr/local/lib/python3.7/site-packages/nltk/data.py", line 701, in find
    raise LookupError(resource_not_found)
LookupError:

  Resource punkt not found.
  Please use the NLTK Downloader to obtain the resource:

    import nltk
    nltk.download('punkt')

  For more information see: https://www.nltk.org/data.html

  Attempted to load tokenizers/punkt/PY3/english.pickle

  Searched in:
    - '/Users/nony/nltk_data'
    - '/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/nltk_data'
    - '/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/share/nltk_data'
    - '/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - ''
```


@OIELILLIE (Owner) commented May 4, 2022

```
$ python3
>>> import nltk
>>> nltk.download("punkt")
```
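The one-off download fixes it, but a script can also guard against a missing model and fetch it only on demand. A minimal sketch of such a guard (the `ensure_resource` name and the injected `find`/`download` callables are my own; with NLTK you would pass `nltk.data.find` and `nltk.download`):

```python
def ensure_resource(path, find, download):
    """Download an NLTK-style resource only if it is missing.

    `find` must raise LookupError for an absent resource (the contract
    of nltk.data.find); `download` fetches it (e.g. nltk.download).
    Returns True if a download was triggered, False otherwise.
    """
    try:
        find(path)
        return False
    except LookupError:
        # nltk.download takes the package id, i.e. the last path segment
        download(path.rsplit("/", 1)[-1])
        return True
```

Calling `ensure_resource("tokenizers/punkt", nltk.data.find, nltk.download)` before `nltk.sent_tokenize` would avoid this crash on a fresh machine.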

@nondefo (Author) commented May 4, 2022

Thanks!

The final command:

```
python3 ./rule_based/extract_refine.py -i extracted_triples_learning.csv
```

returns:

```
[nltk_data] Downloading package stopwords to /Users/nony/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Traceback (most recent call last):
  File "./rule_based/extract_refine.py", line 361, in <module>
    inp = parser.parse_args().infile
NameError: name 'parser' is not defined
```

@OIELILLIE (Owner) commented:

Change line 361 of ./rule_based/extract_refine.py to:

```
inp = ap.parse_args().infile
```
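For context, the NameError pattern: the script evidently binds its ArgumentParser to `ap` but then calls `parse_args()` on the undefined name `parser`. A hypothetical reconstruction of the relevant lines (the `-i`/`infile` option matches the command used above; the rest of the script is not shown in this thread):

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", dest="infile")

# Buggy original: `inp = parser.parse_args().infile` raises NameError,
# because the parser object is bound to `ap`, not `parser`.
# Fixed call (argv passed explicitly here for illustration; the real
# script reads sys.argv by calling ap.parse_args() with no arguments):
inp = ap.parse_args(["-i", "extracted_triples_learning.csv"]).infile
```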

@nondefo (Author) commented May 4, 2022

Still getting an error:

```
[nltk_data] Downloading package stopwords to /Users/nony/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Traceback (most recent call last):
  File "./rule_based/extract_refine.py", line 366, in <module>
    [(ln["subject"],ln["predicate"],ln["object"])])
TypeError: 'NoneType' object is not subscriptable
```
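This TypeError suggests some rows coming back from the extractor are `None` rather than dicts, so indexing `ln["subject"]` fails. One hedged way to make such a loop robust (the row data here is invented for illustration; only the field names come from the traceback):

```python
rows = [
    {"subject": "aspirin", "predicate": "inhibits", "object": "COX-2"},
    None,  # e.g. a sentence the extractor could not annotate
]

triples = []
for ln in rows:
    if ln is None:  # skip un-annotated rows instead of crashing
        continue
    triples.append((ln["subject"], ln["predicate"], ln["object"]))
```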

@nondefo (Author) commented May 4, 2022

OK, the triples have been extracted, thanks.
