Different plan()s for different futures #181
Comments
This is an interesting idea. If I understand you correctly, you're looking for a "super" backend where we can throw in all your available future backends and treat them as one big pool of compute resources. I'm happy that you've allowed yourself to even consider the possibility of such a setup - I take it as a sign that the Future API has lots of potential, much of it yet to be discovered :)

This is somewhat related to (non-official) ideas I have where the type of future to be used is not fixed when the Future object is created, but when it is launched. If that were in place, one could imagine initiating a set of (lazy) futures that are ready to be launched; only at the time of launch would the type of future be decided. The closest we get to this today is that of a …
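For context, here is a minimal sketch of how this plays out today, assuming that a lazy future keeps the backend that was active when it was created (which is my reading of the current behavior):

```r
library(future)

# Today the backend is fixed when the future is *created*, even a lazy one:
plan(multisession)
f <- future(Sys.getpid(), lazy = TRUE)  # created, but not launched yet

plan(sequential)  # switching the plan now does NOT retarget f
value(f)          # f is still launched on the multisession backend
```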
Yes, that is the gist. I would like to have several simultaneous plans and send any future to any plan in any order without duplicated overhead. I am a bit concerned about lumping all the plans together in a single overarching plan, though.

What about evaluators? What is the relationship between evaluators and plans? I tried going around …
I'm quite swamped right now, so unfortunately I don't have much time to dive into your code, but is your goal to be able to distribute work/tasks to different types of compute resources? For instance, some tasks (= futures) you'd like to run on a local machine, some on high-memory machines, and others on a small set of machines that have a certain NFS folder mounted?

If so, I'm considering an Extended Future API (#172) that will support optional and/or mandatory resource requests in some standardized fashion, e.g.

```r
f <- future({ ... }, requires = c("mount:/data/folder/", "R (>= 3.3.0)"))
```

and then there will be a generic underlying framework that makes sure that such a future is launched on a backend worker that meets those requirements. Obviously, there's lots of work to get there. Before that, I am prioritizing formalizing the Core Future API and providing a generic conformance test framework such that any/all future backends can be validated against this Core Future API. I anticipate that this work will help define and explore what the Extended Future API could look like.

About …
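To make the dispatching idea concrete, here is a rough sketch of how resource matching could work under the hood. Nothing here is part of the future API today; the `backends` registry, its `provides` field, and the `future_with()` helper are all hypothetical, and the matching is just naive string comparison:

```r
library(future)

# Hypothetical registry of backends and the resources they advertise
backends <- list(
  local = list(strategy = multisession,
               provides = c("R (>= 3.3.0)")),
  data  = list(strategy = sequential,  # stand-in for workers with /data mounted
               provides = c("mount:/data/folder/", "R (>= 3.3.0)"))
)

# Hypothetical helper: launch `expr` on the first backend satisfying `requires`
future_with <- function(expr, requires = character(0)) {
  expr <- substitute(expr)
  ok <- vapply(backends,
               function(b) all(requires %in% b$provides), logical(1))
  if (!any(ok)) stop("no backend satisfies the resource requirements")
  oplan <- plan(backends[[which(ok)[1]]]$strategy)
  on.exit(plan(oplan), add = TRUE)  # restore the previous plan
  future(expr, substitute = FALSE)
}

f <- future_with(Sys.getpid(), requires = "mount:/data/folder/")
value(f)
```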
Distributing different workers/tasks to different types of compute resources is exactly what I am looking for. I had hoped to accomplish this by assigning different …
Update: going forward, I think I will be more focused on individual futures than …
Related: ropensci/drake#169. It would be amazing if a single call to `future_lapply()` could distribute simultaneous futures over a list of alternative pre-built `plan()`s. I am not quite sure about the interface, but I can picture how 5 futures might run on a local machine while another 5 simultaneously go to SLURM. It may seem silly to juggle plans in a single call to `future_lapply()`, but it would be a huge help for drake.