"Too Many Files Open" Error - Files Not Being Closed? #108
Yes, that seems correct, glancing at the code. Do you want to put together a pull request for this?
It would be awesome if you could include that code sample as a unit test.
I haven't used GitHub before aside from browsing through code and downloading projects, so please bear with me. What is it that you're asking me to do?
It should take me a few minutes, so I can do it. Give me 5 min.
Something is weird with my Python installation; I'll take a look later.
Sorry for the late reply, but just wanted to say thank you very much for the fix! Just tested it out on my end and it's running without qualms here. So thank you again!
I've been using the ANNOY Python API to build and save several index tables in order to load and reuse subsets of these in another script (at most about 400 in a subset at a time). The library is wonderfully fast, but I've been encountering "too many files open" errors after some time. I'm loading each index once, doing stuff with it, then unloading it before the loop continues onwards with the next index table.
Here is a partial code snippet that reproduces the error by reloading the same index table (510 x 64) over and over (run the loop multiple times if the error doesn't show up immediately):
After some sleuthing into the GitHub source code, I notice that unload() unmaps the file from memory, but doesn't close the file opened using load(). Would it be possible to get file closing functionality incorporated into the Python wrapper somehow please? Or am I missing something vital in using ANNOY?
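The mechanism described above can be demonstrated with a small stdlib-only sketch. To be clear, this is not Annoy's actual code; the `load`/`unload` helpers below are hypothetical stand-ins that mimic the reported behavior: unmapping a file does not release the descriptor that was opened for it, so an `unload()` that only unmaps leaks one descriptor per `load()`.

```python
import mmap
import os
import tempfile

# Hypothetical stand-ins for Annoy's load()/unload() (not the library's
# real code): load() opens a descriptor and mmaps the file; the buggy
# unload() only unmaps, leaving the descriptor open.

def make_index_file(num_bytes=4096):
    # Create a small placeholder for a saved index file.
    fd, path = tempfile.mkstemp()
    os.write(fd, b"\0" * num_bytes)
    os.close(fd)
    return path

def load(path):
    # Open a descriptor and map the file into memory.
    fd = os.open(path, os.O_RDONLY)
    mm = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)
    return fd, mm

def unload_unmap_only(fd, mm):
    mm.close()  # munmap only -- the descriptor stays open (the leak)

def unload_fixed(fd, mm):
    mm.close()
    os.close(fd)  # the missing step: release the descriptor too

def fd_is_open(fd):
    try:
        os.fstat(fd)
        return True
    except OSError:
        return False

path = make_index_file()
fd, mm = load(path)
unload_unmap_only(fd, mm)
assert fd_is_open(fd)       # memory is unmapped, but the fd is still open
os.close(fd)
os.remove(path)
```

Repeating the `load` / `unload_unmap_only` pair in a loop, as the script above does with its index tables, accumulates open descriptors until the process hits the OS limit and raises "too many open files"; closing the descriptor in `unload()` (as in `unload_fixed`) avoids that.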
Thanks for all the hard work on the API!