IPython 0.11 complains when I try to create a parallel task while no engines are running. This makes sense for users who need results right after they create a task, but there are also users who would like to create tasks in advance, before any engines are started. These tasks could be stored in the ipcontroller backend database; when the first engine connects to ipcontroller, it could start processing them.
This is already true in trunk for load-balanced tasks. Obviously, you can't send jobs to particular engines that don't exist.
Thanks. This works with the 0.12 branch.
With IPython 0.12, I just noticed that tasks created while no engine was running do not show up in rc.db_query() until I start an engine. Starting the first engine seems to trigger ipcontroller to make that information available.
Is this intended?
I would assume that all tasks which haven't been queued to an engine would have a 'pending' status.
The TaskScheduler disables the on_recv callback from clients when there are no engines, because it is certain that tasks cannot be run. This means that submissions sit in the ØMQ queue and never trigger the callback that notifies the Hub and puts the request in the database. I can probably adjust this so that tasks are pulled into Python instead of being left in the buffer.
If you need further details, let me know. Can you reproduce this behaviour?
Sorry if I wasn't clear, I was trying to explain why you see what you are seeing. The Hub (the process that maintains the DB) will not get tasks submitted while there are no engines, because the TaskScheduler never calls recv, leaving messages in the upstream ØMQ buffer. Essentially the queue processing is paused slightly further upstream than you would like. I will look into addressing this.
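To illustrate the behaviour described above, here is a toy model (plain Python, not IPython's actual code; all names are invented for illustration). A deque stands in for the upstream ØMQ buffer: while no engine is registered, the "callback" is disabled, so submitted tasks never reach the Hub's database, and the first engine registration drains the buffer.

```python
from collections import deque

class MiniScheduler:
    """Toy model of the pause point: with the receive callback off,
    submissions sit in the transport buffer, invisible to the Hub."""

    def __init__(self):
        self.zmq_buffer = deque()  # stands in for the upstream ØMQ queue
        self.db = []               # stands in for the Hub's task database
        self.receiving = False     # no engines yet, so callback disabled

    def submit(self, task):
        # a client submission lands in the transport buffer first
        self.zmq_buffer.append(task)
        if self.receiving:
            self._drain()

    def _drain(self):
        # the on_recv callback: pull tasks into Python, notify the Hub
        while self.zmq_buffer:
            self.db.append(self.zmq_buffer.popleft())

    def register_engine(self):
        # engine registration resumes receiving and drains the backlog
        self.receiving = True
        self._drain()

sched = MiniScheduler()
sched.submit("task-1")
print(len(sched.db))       # 0: the Hub never saw the task
sched.register_engine()
print(len(sched.db))       # 1: registration triggered the drain
```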
Thank you for your explanation. Is there any workaround for this?
Modifying IPython/parallel/controller/scheduler.py as follows makes it work, but this is obviously not the proper solution:
--- ../ipython-git/IPython/parallel/controller/scheduler.py	2011-12-04 21:36:57.000000000 +0100
+++ IPython/parallel/controller/scheduler.py	2012-02-08 20:35:49.000000000 +0100
@@ -181,6 +181,7 @@
+        self.client_stream.on_recv(self.dispatch_submission, copy=False)
         self._notification_handlers = dict(
             registration_notification = self._register_engine,
             unregistration_notification = self._unregister_engine
@@ -192,12 +193,12 @@
         """Resume accepting jobs."""
-        self.client_stream.on_recv(self.dispatch_submission, copy=False)
+        #self.client_stream.on_recv(self.dispatch_submission, copy=False)
         """Stop accepting jobs while there are no engines.
         Leave them in the ZMQ queue."""
     # [Un]Registration Handling
I think that is approximately the solution you are looking for (I'll do a PR shortly). There certainly isn't a workaround with the code as it stands, because it is functioning exactly as designed: the Scheduler is entirely halted while no engines are registered. But with the changes to the scheduler supporting dependencies, etc., I think this should work fine with only minor tweaks.