IPython Cluster (SGE) Registration Timeouts #8569
Comments
At this point, I can start up the cluster reliably the first time. However, it fails every time afterwards. So I am revising my theory: something gets generated and is not properly removed, which stops the cluster from registering the engines properly.
I think the solution was provided by @minrk on an old mailing list, but I need to test this further ( http://mail.scipy.org/pipermail/ipython-user/2011-November/008741.html ). I think lingering files in the profile's security folder are causing the engines not to register properly.
After some testing, I have determined that these lingering files in the security folder are, in fact, the cause of the problem.
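For reference, a minimal cleanup sketch; the default IPython directory (`~/.ipython`) and the profile name (`sge`) are assumptions and may differ on your system:

```shell
# Hypothetical cleanup of stale connection files left behind by a
# forceful shutdown. Assumes the default IPython directory and a
# profile named "sge"; adjust both as needed.
PROFILE_DIR="${IPYTHONDIR:-$HOME/.ipython}/profile_sge"
rm -f "$PROFILE_DIR/security/ipcontroller-engine.json" \
      "$PROFILE_DIR/security/ipcontroller-client.json"
```

Deleting these files before restarting forces the controller to write fresh connection information, rather than leaving engines to register against stale files.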
In particular, I propose the following. Once the cluster is shut down …
Does this belong here ( https://github.com/ipython/ipyparallel )?
@takluyver, I wanted to bring this other SGE issue to your attention. I already know the solution, but am not sure where it belongs. Any direction you could give would be appreciated.
Yes, it does make sense to do this on the new ipyparallel repo. On a clean shutdown, the connection files are already cleaned up. The controller doesn't have the opportunity to do this if it is brought down forcefully, though.
Ok, I will move this once I'm back at my laptop. What do you mean by forcefully? Currently, I am starting and stopping the IPython cluster like this. I can provide the ipcluster config file if you wish.
Is the controller started with SGE as well, or is it started as a normal local process? When the controller is started with SGE, I believe …
Yes, the controller and engines are submitted to SGE. I see. So, it doesn't send a message to the engines to terminate. Does the controller wait until all of the engines are terminated?
Moved to ipython/ipyparallel#21.
I am trying to debug a situation where I am running into sporadic registration timeouts on the engines. In this Gist, I have included relevant config files and sample output ( https://gist.github.com/jakirkham/b0452178331db511dd0d ). All other config files were simply the result of running `ipython profile create --parallel --profile=sge`.

To provide more information, this is on a CentOS 6.6 VM on a single machine; as such, there is no need to worry about accessibility between the jobs. The queue has 7 jobs in it in this case and has been configured to limit the number of running jobs based on the number of accessible cores. However, I have run into the same problem with fewer than 7 running jobs as well. All of the jobs are able to start and run successfully. I don't believe this to be a resource issue, as I have repeatedly run heavy-duty machine learning algorithms in the VM without error.
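For context, the commands below sketch the usual cluster lifecycle with this kind of profile; the profile name and engine count are illustrative, not necessarily the exact invocation used here:

```shell
# Sketch of a typical SGE-backed cluster lifecycle (profile name and -n are illustrative).
ipython profile create --parallel --profile=sge   # one-time profile setup
ipcluster start --profile=sge -n 7                # submit controller and engines via SGE
ipcluster stop --profile=sge                      # clean shutdown; connection files are removed
```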
As it is sporadic, I am wondering if timing differences between engines communicating with the controller could be causing the problem. For example, all the engines slam the controller at the same time, leaving the controller unable to respond before the timeout is reached. Unfortunately, I have had trouble finding more information about parameters that could introduce delays between engine queries, or anything similar, to test this hypothesis.
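On the delay/timeout question, a hedged sketch of two settings that may be relevant, placed in the profile's config files. The values are illustrative, and the option names should be checked against your IPython version's docs:

```python
# In ipengine_config.py: give engines longer to complete registration
# before giving up with a timeout (value is illustrative).
c.EngineFactory.timeout = 10.0

# In ipcluster_config.py: wait longer between starting the controller
# and submitting the engines, so the controller is ready when the
# registration requests arrive at once.
c.IPClusterStart.delay = 5.0
```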
Any pointers would be appreciated.