connectionError while parse_corpus #454
Did you use the run.sh script / download Stanford CoreNLP? |
Yes, the Stanford CoreNLP has been downloaded. Here are the files in the parser folder: |
Was the parser running at all before this error? I.e., I see |
How to make sure the parser is running? |
You should see it printing output in the terminal where you ran the notebook.
|
Hi ajratner, the problem was solved after I ran `chmod` on the files in the parser folder. Thanks. |
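For anyone else hitting this, a minimal sketch of the fix described above: add execute permissions to every file in the parser folder. The `snorkel/parser` path is an assumption here; point it at wherever your CoreNLP run scripts actually live.

```python
# Hypothetical helper: add owner/group/other execute bits to every regular
# file in `folder`, equivalent to running `chmod +x` on the parser scripts.
import os
import stat

def make_executable(folder):
    """Set the execute bits on every regular file in `folder`."""
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            mode = os.stat(path).st_mode
            os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

if os.path.isdir("snorkel/parser"):  # guard: only run if the folder exists
    make_executable("snorkel/parser")
```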
Hi ajratner, there is a memory error when running the parser. Here is the error information: Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x000000073655b000, 369131520, 0) failed; error='Cannot allocate memory' (errno=12). The question is: how much memory does the parser need to run the tutorials? Thanks. |
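The `errno=12` above means the JVM asked the OS for more memory than was available, so the options are to free up RAM/swap or to cap the Java heap. A hedged sketch of building the server launch command with an explicit heap limit; the `corenlp_dir` path, port, and heap sizes here are illustrative assumptions, not Snorkel's actual launch script.

```python
# Hypothetical helper: build the java invocation for StanfordCoreNLPServer
# with an explicit -Xmx heap cap. A smaller heap avoids os::commit_memory
# failures on small machines, at the risk of a Java OutOfMemoryError
# instead if the NER/depparse models do not fit.
def corenlp_command(corenlp_dir, port=12345, heap="4g"):
    """Return the argv list to launch the CoreNLP server."""
    return [
        "java",
        "-Xmx" + heap,                 # cap the JVM heap at `heap`
        "-cp", corenlp_dir + "/*",     # put all CoreNLP jars on the classpath
        "edu.stanford.nlp.pipeline.StanfordCoreNLPServer",
        "-port", str(port),
    ]

cmd = corenlp_command("parser/corenlp", heap="2g")
```

You would pass `cmd` to `subprocess.Popen`; if even a modest heap fails to commit, the machine simply lacks free memory for CoreNLP.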
Hi @yejunbin, I get the same error: "ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=12345)..." |
Closed in v0.5.0 |
Hi @ajratner, I am facing the same error, "ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=12345)...", in v0.5.0 |
Hi @pinkal08cece any further details? Did you check the things noted above? Sorry for the delayed response! |
Thank you. Error is resolved. |
@pinkal08cece I am facing the same problem. How did you solve it? Thank you! |
Hi @pinkal08cece, @yejunbin, if you get a chance, could you post what helped you resolve your issues, in sufficient detail to reproduce? We'd be greatly appreciative! Then, if relevant, I will also add it to the README. Thanks! |
@ajratner Thank you! It is OK now! At first I did as yejunbin said and ran `chmod` on all files in the parser folder, but the error still existed; when I changed to another computer, it was solved. |
Hi folks! Just wanted to say that I am also stuck with the parser (it seems I cannot load it due to some connection error). I have already tried chmod on the parser folder via: import subprocess
However, I receive the error below after about 20 minutes of running time. P.S. I am trying to run the tutorials on Windows, in a Jupyter Notebook environment.
ConnectionErrorTraceback (most recent call last)
C:\Users\tigran\Anaconda3\envs\py2env\lib\site-packages\IPython\core\interactiveshell.pyc in magic(self, arg_s)
C:\Users\tigran\Anaconda3\envs\py2env\lib\site-packages\IPython\core\interactiveshell.pyc in run_line_magic(self, magic_name, line)
in time(self, line, cell, local_ns)
C:\Users\tigran\Anaconda3\envs\py2env\lib\site-packages\IPython\core\magic.pyc in (f, *a, **k)
C:\Users\tigran\Anaconda3\envs\py2env\lib\site-packages\IPython\core\magics\execution.pyc in time(self, line, cell, local_ns)
in ()
C:\Users\tigran\Desktop\snorkel-master\snorkel\udf.pyc in apply(self, xs, clear, parallelism, progress_bar, count, **kwargs)
C:\Users\tigran\Desktop\snorkel-master\snorkel\udf.pyc in apply_st(self, xs, progress_bar, count, **kwargs)
C:\Users\tigran\Desktop\snorkel-master\snorkel\parser.py in apply(self, x, **kwargs)
C:\Users\tigran\Desktop\snorkel-master\snorkel\parser.py in parse(self, document, text)
C:\Users\tigran\Anaconda3\envs\py2env\lib\site-packages\requests\sessions.pyc in post(self, url, data, json, **kwargs)
C:\Users\tigran\Anaconda3\envs\py2env\lib\site-packages\requests\sessions.pyc in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
C:\Users\tigran\Anaconda3\envs\py2env\lib\site-packages\requests\sessions.pyc in send(self, request, **kwargs)
C:\Users\tigran\Anaconda3\envs\py2env\lib\site-packages\requests\adapters.pyc in send(self, request, stream, timeout, verify, cert, proxies)
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=12345): Max retries exceeded with url: /?properties=%7B%22annotators%22:%20%22tokenize,ssplit,pos,lemma,depparse,ner%22,%20%22outputFormat%22:%20%22json%22%7D (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x00000000094F32B0>: Failed to establish a new connection: [Errno 10061] No connection could be made because the target machine actively refused it',))
Thanks in advance! Kind Regards, |
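The `[Errno 10061] ... actively refused` at the bottom of that traceback means nothing was listening on 127.0.0.1:12345 when the request went out, i.e. the CoreNLP server was not (yet) up. One defensive sketch, assuming the host and port from the error message: poll the port until something is listening before starting the parse. The 60-second timeout is an arbitrary assumption.

```python
# Poll host:port with short TCP connection attempts until one succeeds
# (server is up) or the deadline passes (server never came up).
import socket
import time

def wait_for_port(host="127.0.0.1", port=12345, timeout=60.0):
    """Return True once a TCP connect to host:port succeeds, else False."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # not listening yet; back off briefly and retry
    return False
```

Calling `wait_for_port()` in the notebook right before the corpus-parsing cell would distinguish "server slow to start" from "server never started at all".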
Is this resolved with the latest parser? @jason-fries ? |
@ajratner
In addition, I get an error printed in the command line at the same time:
I have tried chmod for all the files in the parser folder. However, it keeps showing me the same error. Would you please help me with it? |
Hi @neda-abolhassani , This looks like a memory issue? Do you have enough memory to run CoreNLP on the machine you're using? (@jason-fries any thoughts / ever seen this before?) |
Hi @neda-abolhassani,
|
Hi @jason-fries
However, I still get the same error when I am on the py2Env kernel. I have also tried changing the kernel to python2. I got a bunch of dependency errors although I had installed the libraries in python-package-requirement.txt. After updating and installing all the required dependencies, I got the following error in the command window while running the corpus-parsing section:
It was complaining about _htmlparser in the Jupyter Notebook. |
Your second error suggests that a CoreNLP instance is already running. I would make certain you've terminated all of your java processes and try the second approach again. |
@jason-fries The problem is that the error shown in the command window is different from the error shown in the Notebook. I have terminated all the processes, but I get the same error in the command window, and the Notebook says:
I have double checked and the folder exists in the directory: |
Your Jupyter notebook kernel might not match the environment where you installed your dependencies -- I would double check that first thing. I've never seen the missing urllib3 error before -- that suggests to me that something is off in your environment settings. You might also want to check and see that you can manually launch CoreNLP from the command line (see CoreNLP docs on how to do this). |
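A quick way to act on the kernel-mismatch suggestion above: from inside the notebook, print which interpreter the kernel is running and whether the packages from the traceback resolve in it. The package names checked here are just the ones that appeared in the errors above.

```python
# If sys.executable points at a different environment than the one where
# you ran pip/conda install, the kernel and the install location don't
# match, which would explain the missing-dependency errors.
import importlib.util
import sys

print("kernel interpreter:", sys.executable)
for pkg in ("requests", "urllib3"):
    spec = importlib.util.find_spec(pkg)
    print(pkg, "->", spec.origin if spec else "NOT FOUND in this environment")
```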
Hi @jason-fries |
@jason-fries I have even changed the heap size to 512m |
Hi! I am experiencing the same error, here is my output:
|
I was able to resolve this issue by running chmod on snorkel/parser and updating my JDK, thanks! |
@varun-tandon thanks for the tip! @neda-abolhassani let us know if that helps for you? |
Hi @ajratner |
Hm I've run snorkel on AWS instances before, not sure what's happening here |
I hit the same error as the original poster myself just now. I was running the Intro_Tutorial_1 notebook. The StanfordCoreNLPServer had started and was listed in the notebook terminal output, at least it was listed there after the error occurred. In my case, re-running the cell in the notebook was sufficient to get it working. Perhaps this is a problem of the resource being requested before the server has finished starting up?
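Since simply re-running the cell fixed it, a plain retry loop around the failing call would paper over that startup race. This is a generic sketch, not Snorkel's actual code; `parse_fn` is a stand-in for whatever call raised the ConnectionError. The default catches `OSError` because requests' ConnectionError subclasses IOError/OSError, so it is covered too.

```python
# Retry a callable a few times on connection-type errors, sleeping a fixed
# delay between attempts and re-raising after the final failure.
import time

def with_retries(parse_fn, attempts=5, delay=2.0, exceptions=(OSError,)):
    """Call parse_fn(), retrying on the given exception types."""
    for attempt in range(attempts):
        try:
            return parse_fn()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(delay)
```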
Hm, we'll look into this... we're also probably switching to using the spaCy parser as default for the intro tutorial at least, in the release coming out this week!
|
This should be closed in v0.6, re-open if not |