Data received gets less and less. #45
This is a very regular error message after a while:
Hi Flo! I think the first error is OK. 1006 is very common and handled by the lib! I think I know where the problem is happening; I was observing similar behaviour, and it's a new issue which was not there up to 1.9.x. In 1.10.0 I started releasing methods for subscriptions which use websocket.send(), and I think the lib is catching a specific exception (no error trace in stdout) but the code doesn't handle it the right way. I will add more logging to the code on level CRITICAL, so we might get an insight then. Best regards!
I guess it's about this section of code: https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/blob/ff5fff1801baf518bbda746d365107f1773dab40/unicorn_binance_websocket_api/unicorn_binance_websocket_api_connection.py#L251
I set the log levels to CRITICAL; with level ERROR it's too noisy in my configuration. Maybe then we can see the reason and let the lib handle the restart.
I remember that, and from the view of the OS it seems to be right. But as I mentioned, I am observing a similar issue with this script on a test server: https://github.com/oliver-zehentleitner/unicorn-binance-websocket-api/blob/master/dev_test_cex_full_non_stop.py

All streams are fine all the time, but the mega stream is dying while remaining as "running" in the stream_info output. The logfiles do not point to a reason. So maybe it's possible that a stream is becoming too big for some reason. Yes, we can handle it with an exception, but why is it not happening with the small streams in my setup, only with the one big one? Till now we didn't create streams with a subscription list longer than 8004 chars. In my test I am using 22k chars, I guess.

I recognized a bug when sending a subscription via websocket.send() to Binance. The Binance websocket server split too-big requests into smaller requests and was then no longer able to interpret the JSON structure; it always sent back a msg about invalid JSON syntax. So I decided to split the subscriptions into small JSON packages (max 350 items per subscription send), and then it worked fine. It could be good to investigate whether Binance is sending a too-big request to us, maybe including a list of all subscriptions.

Finally: I am not sure anymore if my thesis is really always valid! Maybe it's not good to use a single stream for every kline, but maybe it's also good to keep a stream from getting too big.
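The splitting described above can be sketched as follows. This is an illustrative stand-in, not the library's actual implementation; the payload layout follows the Binance SUBSCRIBE message format, and the function name and the 350-item limit come from the comment above.

```python
import json

def chunk_subscriptions(params, max_items=350):
    """Split a long subscription list into smaller SUBSCRIBE payloads.

    Binance rejected oversized frames with an invalid-JSON error, so
    at most `max_items` channels are sent per websocket.send() call.
    Illustrative sketch only, not the lib's real code.
    """
    for i in range(0, len(params), max_items):
        yield json.dumps({
            "method": "SUBSCRIBE",
            "params": params[i:i + max_items],
            "id": i // max_items + 1,
        })

# Example: 800 kline channels become 3 payloads of <= 350 items each.
channels = [f"symbol{n}@kline_1m" for n in range(800)]
payloads = list(chunk_subscriptions(channels))
print(len(payloads))  # 3
```

Each payload stays well under the size that triggered the server-side splitting, at the cost of a few extra send calls.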
Thanks, I'll check out this version with the big and the single streams. I'll post the results here.
Maybe this solves this issue: #47 (comment) |
Regarding the Binance API Changelog of 2020-04-23 (thanks to @jemeraldo for reporting)
... we have two tasks:
I released version 1.12.0 on PyPI. I think we can close this issue.
Unfortunately the 1.12.0 update didn't fix this problem. I set up a test server. The first version was streaming < 1024 subscriptions per stream, but I made multi-channel streams (all kinds of channels * subset of markets <= 1024). Now I changed it to 1 kind of channel * all available markets (~740) and ran the test again.
I made a few tests and it seems to depend on how you create the streams! If I use one channel type mixed with all markets, it's fully stable.
3 days of streaming everything without a problem. Now 3 streams are still alive, but 0 receives after a restart.
All 3 streams got restarted within the same second, but it looks like they never started streaming again. Error msg of all 3 streams on restart: no error msg received from Binance. The log looks like there are 3 frequent-check threads running, but it should only be one! The log file: example_stream_everything.py.log
This very much aligns with what I am seeing. Streams are restarted from time to time and my received data becomes less and less. Thanks Oliver. I now know that I am still sane ;)
You're welcome :D I am implementing new methods for better management and more options to debug these things: #62 I am on it ...
Oh OK. I was about to answer. Sorry for the long delay! I will completely uninstall and then reinstall the dev0, start again, and post log messages.
Or should I install 1.16.2? |
the best is this: |
OK. Installed and checked 1.16.2.dev. Running now.
Error in my code. Restarting with unicorn-binance-websocket-api 1.16.3.dev0.
So it worked for 19 days, and the error was in your app, not in the unicorn lib?
Nope. It first crashed because it ran out of memory, due to unicorn not relaying the packets (I guess). Then I restarted (with the new version) and went on holiday. On the first day it crashed because stream_info['last_heartbeat'] was None while my code expected a float.
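The crash above comes from doing arithmetic on a heartbeat that is still None right after a (re)start. A minimal defensive sketch, assuming only the stream_info['last_heartbeat'] key mentioned in this thread (the helper name and the "treat unknown as infinitely old" default are my own choices):

```python
import time

def seconds_since_heartbeat(stream_info, default=float("inf")):
    """Return the age of the last heartbeat, tolerating a None value.

    Right after a (re)start, stream_info['last_heartbeat'] can be None,
    which crashes code expecting a float. Returning `default` (infinity)
    makes a watchdog treat an unknown heartbeat as stale instead of
    raising a TypeError.
    """
    last = stream_info.get("last_heartbeat")
    if last is None:
        return default
    return time.time() - last

print(seconds_since_heartbeat({"last_heartbeat": None}))  # inf
```

A watchdog can then compare the returned age against a tolerance without special-casing the startup window.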
OK, it ran for 4 days now. Now again RAM usage is increasing while the amount of data saved is decreasing. I get the following errors from time to time:

CRITICAL:root:BinanceWebSocketApiManager->stream_is_crashing(ba7d67...8890a)
CRITICAL:root:BinanceWebSocketApiSocket->start_socket(ba7d67...8890a, ['depth@100ms'], ['mithbtc', ... 'ftmusdt']) Exception ConnectionClosed Info: code = 1006 (connection closed abnormally [internal]), no reason
CRITICAL:root:BinanceWebSocketApiConnection->await._conn.aenter(ba7d67...8890a, ['depth@100ms'], ['mithbtc', ...'ftmusdt']) - OSError - [Errno -3] Temporary failure in name resolution
ERROR:root:binance_websocket_api_connection->aexit(*args, **kwargs): AttributeError - 'Connect' object has no attribute 'ws_client'
I think we solved a lot of bugs till now :D And I can still see 2 issues.
It would be really cool if someone could help me out with creating a pyflame or Blackfire log to see the size of vars during runtime. With this info it's easy to solve problem 1.
Error 1006 happens, and the lib restarts the stream in this case... "OSError - Temporary failure in name resolution" means the operating system was not able to resolve the hostname of the Binance websocket endpoints (no internet connection, DNS down, ...).
OK, merci :) I will start the test tomorrow.
After 13 days I had an issue: the depth@100ms stream received only about 10 receives per second on average instead of ~1000. I did set_restart_request(stream_id), and after the restart it worked fine again. I am investigating this!
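A collapse from ~1000 to ~10 receives/second can be detected automatically so the restart does not have to be triggered by hand. This sketch is library-agnostic: the class, the 10-second window, and the 100/s floor are all assumptions; on detection one would call the lib's set_restart_request(stream_id) mentioned above.

```python
import time
from collections import deque

class ReceiveRateMonitor:
    """Flag a collapsed receive rate over a sliding time window.

    record() is called once per received frame; collapsed() reports
    whether the rate over the last `window_seconds` fell below
    `min_rate`. Thresholds here are illustrative assumptions.
    """
    def __init__(self, window_seconds=10, min_rate=100):
        self.window = window_seconds
        self.min_rate = min_rate
        self.timestamps = deque()

    def record(self, now=None):
        now = time.time() if now is None else now
        self.timestamps.append(now)
        while self.timestamps and self.timestamps[0] < now - self.window:
            self.timestamps.popleft()

    def collapsed(self, now=None):
        now = time.time() if now is None else now
        while self.timestamps and self.timestamps[0] < now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window < self.min_rate

# Simulate 10 receives/s over 10 s: far below the 100/s floor.
mon = ReceiveRateMonitor()
t0 = 1000.0
for i in range(100):
    mon.record(t0 + i * 0.1)
print(mon.collapsed(now=t0 + 10))  # True
```

In a real deployment the monitor would run in the consumer loop, and a True result would trigger a restart request for that stream_id.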
Yep. And I got a new error:
Errors are happening, I can't change that!!! What I can try is to catch and handle them! The error you posted should have been caught and the stream should have been restarted again! Wasn't it?
Sorry, they were not meant as a complaint, but merely as a (somewhat tiny) chance to guide you to the source of the problem. Yes, it did restart afterwards. However, as you said, the RAM usage still increased. And not, as in your case, after 13 days, but after one day.
Hi Florian! No worries! I didn't understand it as a complaint! I just thought it's not a problem of the unicorn lib, that we handle it and can't do anything more. But now I looked into the code and I see I was wrong :) I think the error is caught here: the error asyncio.base...blahblah... InvalidStateError is happening inside. I will continue the investigation in a couple of days, I am in holiday mode :) Thanks for reporting it, even if I didn't recognize its value on the first reading!

The RAM ... I need to know which objects got bloated or are bloating the RAM usage. There are a couple of ways to do that, maybe within an iPython shell or with Blackfire or pyflame. Are you able to help with research when it happens again on your system? That the RAM is growing over time is logical, because the lib is logging a lot of stuff, and some vars are stacks which start cleaning after 500 entries, for example. For sure that's something we can optimize.

Best regards,
I'll try to do these pyflame things in a couple of days!
This is the root error msg:
I'll try to investigate what these open files are ...
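One quick way to watch the open-file count over time is to read /proc/self/fd, which lists one entry per open descriptor. This is a Linux-only sketch (the helper name is mine); on other platforms `lsof -p <pid>` gives the same information.

```python
import os

def open_fd_count():
    """Count this process's open file descriptors (Linux-only sketch).

    /proc/self/fd has one entry per open descriptor; comparing the
    count after a fresh start with the count after days of reconnects
    shows whether descriptors are leaking.
    """
    return len(os.listdir("/proc/self/fd"))

print(open_fd_count())
```

Logging this value periodically alongside the stream restarts would show whether fast reconnects correlate with descriptor growth.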
These are the open files of ubwa after a fresh start:
I will compare it after running for a couple of days.
After 12 days the amount of open files didn't change. I guess this only happens during a lot of fast reconnects.
I am not sure about this! |
It was a long time ago, but as I remember, increasing CPU and/or using the buffer helped me out. I can recommend the buffer if you are not already using it.
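The buffer recommended above decouples receiving from processing: the websocket reader only appends frames to a queue, and a separate worker drains it, so slow processing can't stall the reader. The unicorn lib provides this as its stream_buffer; the sketch below shows the same pattern with only the stdlib, and all names in it are illustrative stand-ins.

```python
import queue
import threading

# Receiver only enqueues; a worker drains at its own pace.
stream_buffer = queue.Queue()
processed = []

def receiver():
    # Stand-in for the websocket callback: just enqueue raw frames.
    for n in range(5):
        stream_buffer.put({"frame": n})

def worker():
    # Drain the buffer; a None item is the poison pill to stop.
    while True:
        item = stream_buffer.get()
        if item is None:
            break
        processed.append(item)

t = threading.Thread(target=worker)
t.start()
receiver()
stream_buffer.put(None)
t.join()
print(len(processed))  # 5
```

The design trade-off is RAM for latency tolerance: bursts pile up in the queue instead of backpressuring the socket, which is exactly why buffer growth should be monitored.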
Hello again,
I want to report a bug. Python version 3.7.5, pip 18.1, Ubuntu 19.10 in a local Docker container with unicorn-binance-websocket-api 1.11.0.
I'm sorry, this is rather unspecific: I posted another issue before and attached a log file. There you stated you would rather put all the streams of the symbols into one single stream, as opposed to my separating each and every stream. I did not remember why I separated them in the first place, so I un-separated them and now have two streams, one for depth and one for trade, each containing all the symbols.
Something strange happens:
No exceptions. Everything seems OK. I collect all the message frames and pack them into SQL files (one per day). I started off on the first day with a file of about 6-8 GB. Second day 400 MB, third 300 MB, fourth 250 MB. Less and less data is being packed into the SQL files.
This (SQL) system has worked before without this error; that's why I suspect the unicorn package (which otherwise is great).
Encountering this behavior in the past drove me to separate the streams in order to check, for each stream,
stream_info['status'] != 'running'
and also
time.time() - stream_info['last_heartbeat'] > self.stream_heartbeat_tolerance
One (maybe) reason:
Is the 'running' flag set to 'running' if at least one of the symbols in the stream is still being updated? Then it would yield a false sense of running. The same goes for the last heartbeat; it should better be the last heartbeat of the least-updated symbol.
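The per-symbol heartbeat suggested above can be sketched with a plain dict of timestamps. This is an illustrative sketch of the reporter's idea, not part of the unicorn lib; the class and method names are mine.

```python
import time
from collections import defaultdict

class PerSymbolHeartbeat:
    """Track the last update time per symbol.

    A stream-level last_heartbeat stays fresh as long as ANY symbol
    updates, hiding symbols that went silent. Recording a timestamp
    per symbol exposes the least-updated one.
    """
    def __init__(self):
        self.last_update = defaultdict(float)

    def on_message(self, symbol):
        self.last_update[symbol] = time.time()

    def stalest(self):
        # Return (symbol, seconds since its last update).
        symbol = min(self.last_update, key=self.last_update.get)
        return symbol, time.time() - self.last_update[symbol]

hb = PerSymbolHeartbeat()
hb.on_message("btcusdt")
hb.on_message("ethusdt")
symbol, age = hb.stalest()
print(symbol)  # btcusdt (updated first, so it is the stalest)
```

A watchdog would then compare the stalest symbol's age against the heartbeat tolerance instead of the stream-level value.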
My suspicions may be totally off, and I apologize for this rather unspecific 'bug'.
Thanks for any help you can provide, Oliver!