Server hangs after 10 minutes #101
To be sure, I ran everything again. Maybe it is helpful. No need to wait :) Also, the last logs from the server:
Looks like a problem serializing complex numbers. This changed in python-driver 0.8.1.
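For illustration, complex numbers are a classic serialization pain point in Python (a generic sketch, not the driver's actual serialization code): the stdlib `json` encoder rejects them outright.

```python
import json

# The stdlib json encoder raises TypeError on complex values.
try:
    json.dumps({"value": 1 + 2j})
    serialized = True
except TypeError:
    serialized = False

# One common workaround: encode the complex number as a [real, imag] pair.
encoded = json.dumps({"value": [(1 + 2j).real, (1 + 2j).imag]})
```

Any driver that exports a native AST containing complex literals has to apply some such conversion before emitting JSON.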
I think there may be more than one problem. I added logs from bblfsh to my previous message. There is no error from the container; they just started. So I will try to reproduce it after the python-driver fix.
If you look at the last error in the server log and scroll to the right, you can see:
(The server also probably should not hang on this or any other parsing error, but let's fix this one first.)
Yes, @juanjux, I saw the error. But check the log output for the second try in this comment (#101 (comment)): there is no error.
Could you try again with the server in verbose mode?
Sure. Here are the last logs, with code.
And then it hangs.
Wow, no, it does not actually hang. It continues to produce:
The error is probably unrelated, but I released v0.8.2 of the Python driver with the complex numbers serialization fix; could you try it? The empty-code error is produced when the server receives a request with empty code content, though it shouldn't loop if there are no new requests.
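The shape of that empty-code check can be sketched as follows. This is an illustrative sketch with hypothetical names, not the real bblfshd code (the actual server is written in Go):

```python
# Hypothetical request handler: reject empty content early instead of
# dispatching it to a language driver.
def handle_parse_request(filename: str, content: str) -> dict:
    if not content:
        # Nothing for a driver to parse; report the error immediately.
        return {"status": "error", "errors": ["empty code"]}
    return {"status": "ok", "uast": {"file": filename}}
```

The key property is that an empty request should produce exactly one error reply, not a loop.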
Thank you @juanjux!
It was not empty; something happened and then bblfsh gave back only an empty UAST. And as I can see in the last logs:
Yes, I checked it one more time. I attach the tail of the debug log just in case.
Good news! I found a good, reproducible example for you.
Update:

```python
from bblfsh import BblfshClient

BblfshClient("0.0.0.0:9437").parse("bad_file.py", language="Python", timeout=120)
```

No response; it exits by timeout, and there is nothing in the bblfsh logs.
P.S.: The end of the file is strange but correct. Also, I think it is not the only example of a "hanging" file in that collection. Does the bblfsh server provide a timeout for drivers?
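The timeout question can be illustrated with a generic client-side guard around any blocking call (a sketch unrelated to the actual bblfsh/SDK internals; `hanging_parse` is a hypothetical stand-in for a driver request that never answers):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def hanging_parse():
    # Hypothetical stand-in for a parse request that never returns in time.
    time.sleep(0.5)
    return "uast"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(hanging_parse)
    try:
        # Give up after 100 ms instead of blocking forever.
        future.result(timeout=0.1)
        timed_out = False
    except TimeoutError:
        timed_out = True
```

Whether the server itself should apply such a deadline to drivers is exactly the open question here; a timeout bounds the hang but can also mask the underlying bug.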
Thanks for the effort to simplify debugging! I've checked that the Python driver crashes with that file. The server also shouldn't hang because a driver crashed, so we probably have two issues here, but let's keep everything on this issue to avoid splitting the information too much. I'll continue investigating.
Just want to mention that the problem is still alive with server v1.0.0 and the latest driver.
Yes, I could not work on this yesterday because of all the pending releases; I'm working on it now.
I just did a PR that fixes the problem with that file. The next step for me is to make a micro test case with the same problem.
Python driver 1.0.1 (already tagged as latest, so it should be downloaded automatically by the server) has a fix and also links against a version of the SDK with another fix. With these together I can't reproduce this anymore, so I don't think we need to release a new version of the server linking against the new SDK version (but I'm not 100% sure, because the fixed part is also used by the server). @zurk, could you please test again and report here if you still see the problem?
@juanjux, great news! However, I wonder about the server part: could something like this happen with some other "bad" drivers?
There were actually several problems:

- bblfsh/python-pydetector/pull/24: a numeric long literal like `0L` in the exported native AST dictionary caused a crash in the Python driver.
- bblfsh/python-driver/pull/92: the Python driver was logging to stderr whenever a crash happened. Drivers should not write to stderr; they should only communicate through the messages, since the SDK merges each driver's stderr and stdout into a single stream.
- bblfsh/sdk/pull/175: incomplete reads would happen if something was written to stderr that shouldn't have been.

Other drivers could be "bad" in the same way by writing random stuff to stdout/stderr outside of the expected JSON communication, but now it would be easier to spot. I thought of adding a timeout on the SDK/server read, but that could mask real problems, so I'll probably leave that for a future, more solid version of everything. Thanks for your excellent report!
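The stderr/stdout mixing described above can be demonstrated with a toy subprocess (a generic sketch; the real driver protocol and SDK code are not shown):

```python
import json
import subprocess
import sys

# A toy "driver" that writes its JSON reply to stdout but also leaks a
# message to stderr, as the buggy Python driver did on crashes.
child_src = (
    "import sys, json;"
    "sys.stderr.write('Traceback (most recent call last): ...');"
    "print(json.dumps({'status': 'ok'}))"
)

child = subprocess.run(
    [sys.executable, "-c", child_src],
    capture_output=True,
    text=True,
)

# Reading stdout alone, the protocol message parses cleanly.
reply = json.loads(child.stdout)

# But if the two streams are merged into one, the stray stderr text
# corrupts the stream and the JSON no longer parses.
merged = child.stderr + child.stdout
try:
    json.loads(merged)
    merged_parses = True
except json.JSONDecodeError:
    merged_parses = False
```

This is why a driver leaking diagnostics to stderr could leave the reader with an incomplete or unparsable message instead of a clean error.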
@juanjux, please fix the links to the issues in the previous message; they point to server issues, but they are actually issues in other repos.
Fixed. |
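To recap the first of the three fixes above: a `0L` long literal is valid Python 2 but a syntax error for the Python 3 parser, so tooling built on the Python 3 `ast` module has to handle it specially. A generic illustration (not the pydetector fix itself):

```python
import ast

def parses_as_py3(src: str) -> bool:
    """Return True if the source is valid Python 3 syntax."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False
```

A driver that feeds legacy Python 2 sources straight into the Python 3 parser will hit this on every file that uses the `L` suffix.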
I have a strange problem, and it is hard to give you a short example of how to reproduce the bug. I would really like to have one, but I cannot find it.

The problem is that the bblfsh server hangs after several minutes of work on science-3. The easy way to reproduce it on science-3 is to convert repos using ast2vec (last develop version, src-d/ml@973707e) to asdf model files. You need to wait 5-20 minutes, and then you will see something like the following in the bblfsh logs (sometimes it just hangs without an error):

and in the `bblfsh_hang_client` container:

It is because bblfsh hangs. But if you restart the last command, `python3 ./entry_pnt.py`, it will produce the same warnings and nothing in the bblfsh logs. Maybe it is related to grpc problems, but I am not 100% sure.

P.S.: @fineguy and I keep trying to find a simple example without ast2vec usage, something like this: https://gist.github.com/zurk/ad464aa73ad244980457dd2f09ff3abd#file-bblfsh_hang-py, but it seems to work OK, at least for a short time. You can also find `./entry_pnt.py` in the same gist, just in case: https://gist.github.com/zurk/ad464aa73ad244980457dd2f09ff3abd#file-entry_pnt-py