FIX '[Error 5] Access is denied' under Windows #355
Conversation
```python
# Workaround occasional "[Error 5] Access is denied" issue
# when trying to terminate a process under windows.
sleep(0.1)
if i + 1 == n_retries:
```
Could you not use an `else` clause on the `for` loop for this?
Should we try to get @krishnateja614 to test this? Maybe it is too much of a pain since it is happening on Azure ...
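The reviewer's suggestion can be sketched with Python's `for`/`else` construct, where the `else` branch runs only if the loop finished without hitting `break`. The helper name, signature, and defaults below are illustrative stand-ins, not joblib's actual code:

```python
import time

def terminate_with_retries(process, n_retries=5, delay=0.1):
    """Retry process.terminate() to work around the transient
    '[Error 5] Access is denied' error seen on Windows.
    (Hypothetical helper, not part of the joblib API.)"""
    for _ in range(n_retries):
        try:
            process.terminate()
            break  # success: skip the else clause
        except OSError:
            # Workaround occasional "[Error 5] Access is denied" issue
            # when trying to terminate a process under Windows.
            time.sleep(delay)
    else:
        # Reached only if the loop never hit `break`:
        # every retry failed, so surface the error.
        raise OSError(
            "could not terminate process after %d retries" % n_retries)
```

The `else` clause replaces the explicit `if i + 1 == n_retries:` check from the diff above, which is what the review comment is getting at.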
Guys, I haven't had any problems after changing the timeout. If you want to test something out, I'll have access to the VM in the morning and can do so.
Patching the Python stdlib is not really the nicest of solutions though ;-). If you could test this change and check that you never get any warning, that would be great.
@krishnateja614 I would really appreciate it if you could undo the change you made to the standard library and instead test whether this change in joblib fixes the crash you observe on your Azure configuration.
I'll test this once I get access to the VM. So basically,
@ogrisel, I got the same error after trying the updated joblib code. Also, the joblib import error occurs only after this multiprocessing\forking.py terminateprocess function fails. This update to joblib doesn't change that, right, or is there anything else I should be doing? Again, the access denied error is not occurring for all values of CV.
The simplest is to go here: https://raw.githubusercontent.com/ogrisel/joblib/71cc0d96967f5090894d7ae3347e04d0d60281c0/joblib/pool.py, save it as a file (Ctrl-S in most browsers) and then copy that file into C:\Anaconda2\lib\site-packages\sklearn\externals\joblib\pool.py.
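The manual download-and-copy step above can also be scripted. This is a minimal sketch in Python 3; the helper name is ours, and note that Anaconda2 ships Python 2, where the import would be `urllib2` instead of `urllib.request`:

```python
import shutil
import urllib.request

def install_patched_pool(url, dest):
    """Fetch the patched pool.py and overwrite the copy bundled
    with scikit-learn (illustrative helper, not a joblib function)."""
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)

# Usage with the URL and path from the comment above:
# install_patched_pool(
#     "https://raw.githubusercontent.com/ogrisel/joblib/"
#     "71cc0d96967f5090894d7ae3347e04d0d60281c0/joblib/pool.py",
#     r"C:\Anaconda2\lib\site-packages\sklearn\externals\joblib\pool.py")
```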
Interesting that you got the warning: it means that you still have some processes that are slow to die in the background. But at least it does not seem to hurt your program anymore. Will merge. Thanks for testing the fix in your runtime environment!
I don't know whether I should reopen this or not, but I have again run into the same error as #354; the only difference is the dimensionality of the dataset. The error previously looked fixed when run on a dataset of size 800,000 × 35 (I don't remember exactly how many columns I had, but it was definitely fewer than 35), but two days ago I ran into the same error when running the model on an 800,000 × 210 dataset. The VM is the same as before, i.e. Windows Server 2012 with a 16-core CPU and 116 GB RAM (Azure D14_v2).
This should fix #354.