Add support for creation of parallel task when no engine is running #826

kaazoo opened this Issue Sep 30, 2011 · 9 comments



kaazoo commented Sep 30, 2011

IPython-0.11 complains when I try to create a parallel task while no engines are running. This makes sense for users who need results directly after they create a task. But there are also users who would like to create tasks in advance, before any engines are started. These tasks could then be stored in the ipcontroller backend database. When the first engine connects to ipcontroller, it could start processing those tasks.
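The requested behaviour can be sketched with a toy scheduler (all names hypothetical, not the IPython API): tasks are always accepted, and the backlog drains when the first engine registers.

```python
# Toy sketch of "queue tasks before any engine exists" (hypothetical
# names, not IPython.parallel internals).
from collections import deque

class ToyScheduler:
    def __init__(self):
        self.pending = deque()   # tasks waiting for an engine
        self.engines = []        # registered engines
        self.results = []        # completed task results

    def submit(self, func, *args):
        # Always accept the task, even with no engines running.
        self.pending.append((func, args))
        self._dispatch()

    def register_engine(self, engine_id):
        self.engines.append(engine_id)
        self._dispatch()         # the first engine drains the backlog

    def _dispatch(self):
        # Run queued tasks only once at least one engine is available.
        while self.engines and self.pending:
            func, args = self.pending.popleft()
            self.results.append(func(*args))

sched = ToyScheduler()
sched.submit(pow, 2, 10)         # queued: no engines yet
assert sched.pending and not sched.results
sched.register_engine("engine-0")
assert sched.results == [1024]
```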

minrk commented Sep 30, 2011

This is already true in trunk for load-balanced tasks. Obviously, you can't send jobs to particular engines that don't exist.
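The distinction can be illustrated with a toy model (hypothetical names, not the IPython API): a load-balanced submit needs no target and can buffer with zero engines, while a direct submit to a named engine must fail when that engine is not registered.

```python
# Toy contrast between load-balanced and direct submission
# (hypothetical names, not IPython.parallel internals).
class ToyCluster:
    def __init__(self):
        self.engines = set()
        self.balanced_queue = []

    def submit_balanced(self, task):
        # No target engine needed: the task can wait in the queue.
        self.balanced_queue.append(task)

    def submit_direct(self, engine_id, task):
        # Sending to a particular engine requires that it exists.
        if engine_id not in self.engines:
            raise KeyError("no such engine: %s" % engine_id)
        return task

cluster = ToyCluster()
cluster.submit_balanced("job-1")          # fine with no engines
try:
    cluster.submit_direct("engine-7", "job-2")
except KeyError:
    pass                                  # engine-7 was never registered
```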

kaazoo commented Oct 4, 2011

Thanks. This works with the 0.12 branch.

@kaazoo kaazoo closed this Oct 4, 2011
@kaazoo kaazoo reopened this Feb 6, 2012
kaazoo commented Feb 6, 2012

With IPython 0.12, I just noticed that rc.query_db() returns no information about tasks which were created while no engine was running, until I start an engine. The start of the first engine seems to trigger ipcontroller to make that information available.

Is this intended?
I would assume that all tasks which haven't been queued to an engine would have a 'pending' status.

minrk commented Feb 6, 2012


The TaskScheduler disables the on_recv callback from clients when there are no engines, because it is certain that tasks cannot be submitted. This means that they sit in the ØMQ queue and never trigger the callback that notifies the Hub, which is what puts the request in the database. I imagine I can adjust it so that the tasks are pulled into Python instead of left in the buffer.
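A minimal model of this mechanism (hypothetical names, not pyzmq/IPython internals): while the receive callback is detached, messages accumulate in the socket buffer and the Hub's database never sees them; re-attaching the callback drains the buffer.

```python
# Minimal model of stop_receiving()/resume_receiving(): with the
# callback detached, messages stay in the "ZMQ" buffer unseen by the
# Hub (hypothetical names, not pyzmq/IPython internals).
from collections import deque

class ToyStream:
    def __init__(self):
        self.buffer = deque()    # stands in for the upstream ZMQ queue
        self.callback = None

    def on_recv(self, callback):
        self.callback = callback
        self._flush()

    def send(self, msg):
        self.buffer.append(msg)
        self._flush()

    def _flush(self):
        # Deliver only while a callback is attached.
        while self.callback and self.buffer:
            self.callback(self.buffer.popleft())

hub_db = []                      # what a db query would report

stream = ToyStream()
stream.on_recv(None)             # stop_receiving(): no engines yet
stream.send({"task": 1})
assert hub_db == []              # the task is invisible to the Hub

stream.on_recv(hub_db.append)    # resume_receiving(): engine arrived
assert hub_db == [{"task": 1}]   # the buffered task reaches the DB
```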

kaazoo commented Feb 6, 2012

If you need further details, let me know. Can you reproduce this behaviour?

minrk commented Feb 6, 2012

Sorry if I wasn't clear, I was trying to explain why you see what you are seeing. The Hub (the process that maintains the DB) will not get tasks submitted while there are no engines, because the TaskScheduler never calls recv, leaving messages in the upstream ØMQ buffer. Essentially the queue processing is paused slightly further upstream than you would like. I will look into addressing this.

kaazoo commented Feb 8, 2012

Thank you for your explanation. Is there any workaround for this?

kaazoo commented Feb 8, 2012

Modifying IPython/parallel/controller/ made it work, but this is obviously not the solution:

--- ../ipython-git/IPython/parallel/controller/ 2011-12-04 21:36:57.000000000 +0100
+++ IPython/parallel/controller/    2012-02-08 20:35:49.000000000 +0100
@@ -181,6 +181,7 @@

     def start(self):
         self.engine_stream.on_recv(self.dispatch_result, copy=False)
+        self.client_stream.on_recv(self.dispatch_submission, copy=False)
         self._notification_handlers = dict(
             registration_notification = self._register_engine,
             unregistration_notification = self._unregister_engine
@@ -192,12 +193,12 @@

     def resume_receiving(self):
         """Resume accepting jobs."""
-        self.client_stream.on_recv(self.dispatch_submission, copy=False)
+        #self.client_stream.on_recv(self.dispatch_submission, copy=False)

     def stop_receiving(self):
         """Stop accepting jobs while there are no engines.
         Leave them in the ZMQ queue."""
-        self.client_stream.on_recv(None)
+        #self.client_stream.on_recv(None)

     # [Un]Registration Handling
minrk commented Feb 8, 2012

I think that is approximately the solution you are looking for (I'll do a PR shortly). There certainly isn't a workaround with the code as it is, because it is functioning exactly as designed: the Scheduler is entirely halted while no engines are registered. But with the changes to the scheduler supporting dependencies, etc., I think this should work fine with only minor tweaks.

@minrk minrk closed this in 821fac2 Feb 9, 2012