Running tasks with varying "core" requirements in same batch job #1324
Cross ref #1326

@annawoodard does the WorkQueueExecutor have the same basic functionality as the htex? That is, is wqex a superset of htex or, if not, what functionality will be given up?

Regarding Work Queue: each task can specify the number of cores required. However, the Work Queue executor does not currently specify the number of cores per task, though it shouldn't be a hard feature to implement. That said, how will the Parsl app specify the number of cores it needs to run the task when it is submitted to the WQExecutor?
One possibility is to add it as a keyword argument to the decorator, for example:

```python
@python_app(cores=2)
def foo():
    return 'Hello, world!'
```

This would be defined here and here for the python and bash app decorators, which would pass it along to the app here. Then at call time, before this, we could just add it into the kwargs that will be serialized along with the function and passed to the executor.
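As a rough illustration of the plumbing being described, a decorator could capture the core count and fold it into the call-time kwargs. This is only a sketch: the `python_app` here is a stand-in, not Parsl's actual implementation.

```python
import functools

def python_app(func=None, *, cores=1):
    """Hypothetical stand-in for Parsl's decorator: records a per-function
    core count and merges it into the kwargs sent along with the task."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # Fold the resource hint into the kwargs that would be
            # serialized with the function and shipped to the executor.
            kwargs.setdefault('cores', cores)
            return f(*args, **kwargs)  # a real wrapper would submit instead
        return wrapper
    return decorator if func is None else decorator(func)

@python_app(cores=2)
def foo(cores=None):
    return f'running with {cores} cores'
```

A caller-supplied `cores=` kwarg would win over the decorator default here, since `setdefault` only fills in a missing key.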
As a reminder, we generally have tried to avoid mixing resource info and program/app info. I don't know if there's any way to not do this in the context of this issue, however.
If so, I would recommend:

```python
@python_app(resources={'cores': 2})
def etc...
```

as the list of resources may get long, and you may not want a super long list of attributes.
@danielskatz I don't think my proposal above is in conflict with our 'write once, run anywhere' aspiration. If you know your task has fixed resource requirements, then I don't see the problem with saying so in the code; that's not going to change. The thing that does change is where you are running it, and that is still nicely factorized out in the config.
I think one can make the case that 'resources' are really closer to describing the app (kind of like an argument to malloc), rather than the 'resource' where the app will run.

ok, ok ...
@btovar I slightly favor keyword args because in my view it's a bit more natural to document the options, their types and defaults in the docstring:
which can be accessed in the interpreter via

Ah yes, that makes sense.
@TomGlanzman The main differences that come to mind are 1) at the moment WQ is not pip-installable, so you would need to do that as a separate step (but my understanding is that it will be very soon), and 2) wqex was added recently, so while WQ itself is mature and robust software, there may be a few kinks to iron out with the executor; it is so fresh it hasn't been extensively tested 'in the wild' yet.
Thanks @annawoodard. Is the suggestion that I attempt to migrate to the wqex at some point or that some of its functionality will be incorporated into the htex? (It is not clear to me how wqex might be used at NERSC.) |
This is great functionality! With our use case we would need to set this on a per-task basis, not a per-function basis. A simple example: if I had a function that would GEMM different sized matrices, I would want to allocate more cores to larger matrices.
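A per-task (rather than per-function) policy could look something like the sketch below; `gemm_task` and `pick_cores` are hypothetical names, and a real app would pass the resource dict through to the executor rather than just echoing it.

```python
def pick_cores(n):
    """Toy per-task policy: bigger matrices get more cores, capped at 32."""
    return min(32, max(1, n // 1000))

def gemm_task(n, resources=None):
    # A real app would multiply n x n matrices; here we just echo the request.
    return (n, resources['cores'])

# Resources are decided at call time, per task, from the actual input size:
small = gemm_task(500, resources={'cores': pick_cores(500)})
large = gemm_task(16000, resources={'cores': pick_cores(16000)})
```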
I've been working on implementing |
The bare-bones implementation can be found here. After testing, it seems to be working with the Work Queue executor. If this is of any use, we can also implement other resource options in the decorator, as Work Queue tasks can specify the amount of memory and disk allocation needed for execution as well.
@tjdasso that is fantastic! @btovar correctly pointed out at ParslFest that my recommendation was probably not quite careful enough because it could result in collisions with user kwargs passed to the app. I think the super careful way to deal with that would be to add another dictionary
you could have for example (thinking ahead to adding the other resources),
(i.e., going with something like @btovar's original recommendation, just hiding it from the user in order to simplify the documentation.) I think besides that tweak, what you have implemented is great, and we should add mem and disk and open a PR. I don't think we need to worry much about side effects on the other executors, as they can always just ignore the additional kwargs.
@TomGlanzman Yes, the idea here would be to try wqex. @tjdasso and others on the CCL team are currently working on getting WQ pip-installable.
Sorry I missed this earlier @dgasmith. That's a really good point. I don't love our existing 'magic' keyword args ('stdout', 'stderr', etc.) because I think people find it confusing, but your use case makes a lot of sense. @tjdasso in light of @dgasmith's use case (and just to be more careful to avoid clobbering), perhaps it's sufficient to just check if
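The clobbering check suggested here could amount to inspecting the wrapped function's signature and the call's kwargs for the reserved name before injecting it. A minimal sketch, with the reserved name and helper names chosen purely for illustration:

```python
import inspect

RESERVED = 'parsl_resource_specification'  # illustrative reserved name

def check_for_clobbering(func, kwargs):
    """Raise if the reserved resource kwarg would collide with a name the
    user's function (or this particular call) already uses."""
    params = set(inspect.signature(func).parameters)
    if RESERVED in params or RESERVED in kwargs:
        raise ValueError(
            f'{RESERVED!r} is reserved for resource hints; '
            f'please rename it in {func.__name__}')

def safe(x):
    return x

def clashes(parsl_resource_specification):
    return parsl_resource_specification
```

Running the check on `safe` passes silently, while `clashes` (whose own parameter uses the reserved name) raises, so the user gets an explicit error instead of having a kwarg silently overwritten.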
Ok, I implemented all three resource requirements (mem, cores, and disk) as options to the decorator, or as keyword argument |
tagging @josephmoon, who has a related use case |
I have been following this discussion (and #1358) with interest and have a couple of questions:
Thanks for your interest in this topic, |
Speaking from the WQ side of things:

1. It's close, but we aren't there yet. We need to propagate the properties down to the WQ layer and then exploit them.
2. Work Queue does not currently do anything about the time property. However, it would not be hard to have the WQ worker accept an "end time", propagate that back to the master, and then skip scheduling tasks that would run over the end time. I think the tricky part would be getting accurate info propagated from the scheduler.
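The "end time" idea in point 2 can be sketched in a few lines: given tasks annotated with estimated runtimes, skip any that could not finish before the worker's end time. This is a toy model under assumed task annotations, not Work Queue's actual scheduler:

```python
def schedulable(tasks, end_time, now):
    """Return the tasks whose estimated runtime fits before end_time;
    the rest would be skipped (or left for a later allocation)."""
    remaining = end_time - now
    return [t for t in tasks if t['runtime'] <= remaining]

tasks = [{'name': 'short', 'runtime': 60},     # one minute
         {'name': 'long', 'runtime': 7200}]    # two hours

# With one hour left on the allocation, only the short task fits:
picked = schedulable(tasks, end_time=3600, now=0)
```

As the comment in the thread notes, the hard part in practice is not this filter but getting an accurate end time and runtime estimates propagated from the batch scheduler.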
@tjuedema is going to take over from @tjdasso on this issue (from the WQ side of things, anyhow.) @tjuedema please look into getting the cores, memory, and disk from decorated Parsl tasks, and then pass them down to the WQ executor, so that each task can then be labelled appropriately. With that info in place, WQ should automatically "pack" multiple tasks into a single worker. |
closing this as the |
It would be beneficial to run tasks with different cores_per_worker requirements within a single batch job. The motivation is to run a heterogeneous set of tasks (tasks with differing core requirements) on the same compute node at NERSC. I have a workflow that generates many tasks (identical code, different data) to run under the same (htex) executor so that the tasks run in the same batch job. Each task, in general, needs a different number of cores. The Cori machine has batch nodes with either 32 cores and 64 hw threads ("Haswell") or 68 cores and 272 hw threads ("KNL"). To efficiently utilize a node, one must be able to keep as many cores busy as possible.
This request is to support the ability for the user to specify the number of needed cores at task creation time and to have the appropriate bookkeeping performed to avoid oversubscribing a node.
For "SimpleLauncher", this would mean Parsl would have to handle bookkeeping (i.e., #cores available vs. in-use). For "SrunLauncher", srun would presumably do the bookkeeping (potentially across multiple nodes).
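The SimpleLauncher bookkeeping could be as simple as tracking cores in use per node and deferring tasks that would oversubscribe it. A toy sketch (the class and method names are made up for illustration):

```python
class CoreTracker:
    """Toy bookkeeping for the SimpleLauncher case: one node's cores."""

    def __init__(self, total_cores):
        self.total = total_cores
        self.in_use = 0

    def try_start(self, cores):
        """Claim cores for a task, or refuse if it would oversubscribe."""
        if self.in_use + cores > self.total:
            return False  # task must wait for cores to free up
        self.in_use += cores
        return True

    def finish(self, cores):
        self.in_use -= cores

node = CoreTracker(total_cores=32)
started = node.try_start(24)   # 24 of 32 cores claimed
blocked = node.try_start(16)   # 24 + 16 > 32, so this request is refused
node.finish(24)                # first task done, cores released
retried = node.try_start(16)   # now it fits
```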