ValueError: too many file descriptors in select() #26
Hi, we are using Ztreamy (v0.2) on Windows Server 2012 R2. The server has 296 streams, and there are 3 publishers which send RDF data every 3 seconds. There is one subscriber that reads data from all streams and processes it. The server and the subscriber are deployed as a Windows service.
Sometimes (I haven't found any pattern) this error is raised in the Ztreamy server:
It seems that 64 is the default limit for select() on Windows, and that Python's select module raises it to 512. I took this from here: https://groups.google.com/forum/#!topic/python-tornado/oSbxI9X28MM
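For reference, every descriptor passed to select() counts against that compiled-in limit, which is why a subscriber holding hundreds of connections trips it. A minimal sketch of what select() does (the socket pair and payload are illustrative):

```python
import select
import socket

# select() is bounded by the C library's FD_SET size: 64 by default on
# Windows, raised to 512 in CPython's select module build. epoll/kqueue
# based event loops (Linux/BSD) do not share this small limit.

# Watch one end of a connected socket pair for readability.
a, b = socket.socketpair()
b.send(b"ping")                        # makes `a` readable
readable, _, _ = select.select([a], [], [], 1.0)
assert readable == [a]
msg = a.recv(4)
print(msg)                             # -> b'ping'
a.close()
b.close()
```

With 296 streams plus internal descriptors, the list passed to select() can exceed 512 entries, which is exactly the ValueError in the title.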
Any clue on this?
I'm sorry, but I haven't run into this problem before, since I've always run Ztreamy on Linux servers.
I've run through the Tornado source code to check that it uses a select()-based I/O loop on Windows, so the limit applies there too.
Looking at the link you've sent, it seems that the way to increase the 512 limit on Windows would be recompiling the Python interpreter. You would need to take the CPython 2.7 sources and set a larger value for FD_SETSIZE in the select module before building.
If moving your server to a Linux machine is out of the question, a workaround in your case could be adding a new relay stream that repeats the data from all the other streams, so that the client connects just to that stream. You would then have just one file descriptor associated with your client instead of the 296 you have now.
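The fan-in idea can be sketched in plain Python. This is only an illustration of the pattern, not the actual Ztreamy API: many source streams feed a single relay, and the subscriber reads from that one place.

```python
import queue
import threading

# Illustrative fan-in relay (NOT the Ztreamy API): events from many
# source streams are funneled into one output queue, so a subscriber
# holds a single connection instead of one per stream.

class RelaySketch:
    def __init__(self):
        self._out = queue.Queue()

    def attach(self, source_queue):
        # One pump thread per source; all events land in self._out.
        def pump():
            while True:
                event = source_queue.get()
                if event is None:       # sentinel: source closed
                    break
                self._out.put(event)
        threading.Thread(target=pump, daemon=True).start()

    def read(self, timeout=1.0):
        return self._out.get(timeout=timeout)

# Usage: three sources, one relay, one subscriber-side read loop.
sources = [queue.Queue() for _ in range(3)]
relay = RelaySketch()
for s in sources:
    relay.attach(s)
for i, s in enumerate(sources):
    s.put("event-from-stream-%d" % i)
received = sorted(relay.read() for _ in range(3))
print(received)
```

In Ztreamy itself the equivalent would be a server-side relay stream configured with the other streams as sources; the subscriber then opens one connection to it.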
In order to do that, create a new stream instance in the server configured to relay the events of all the other streams, and point your subscriber to it.
Please, let me know if I can be of any further help regarding this issue.
Thank you for the answer. It is very strange behaviour, because this error only happens when one particular publisher sends data; the other two work fine.
I took a look at the configuration of the publisher that had the error and saw that it was sending data over streams that did not exist on the server: it was publishing to 10 streams, but only 6 of them were configured on the server.
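A cheap way to catch this kind of mismatch is to compare the two configurations before publishing. The stream names below are illustrative, not the real ones from the deployment:

```python
# Hypothetical sanity check: verify that every stream the publisher
# targets is actually configured on the server (names are made up).
server_streams = {"s1", "s2", "s3", "s4", "s5", "s6"}
publisher_streams = {"s1", "s2", "s3", "s4", "s5", "s6",
                     "s7", "s8", "s9", "s10"}

# Set difference yields the streams the server does not know about.
missing = sorted(publisher_streams - server_streams)
if missing:
    print("publisher targets unknown streams:", missing)
```

Logging or rejecting unknown stream names at startup would surface the misconfiguration immediately instead of letting it show up later as a descriptor-related error.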
I changed the configuration of the server and now it seems to work fine. I will let you know so the issue can be closed.
I do not know whether this information can help make the server more robust.