HPC executor #467

Open · tomwhite opened this issue May 21, 2024 · 2 comments

@tomwhite (Member)

Some possibilities:

cc @TomNicholas

@TomNicholas (Collaborator)

The main NCAR machine I have access to uses PBS - perhaps there is an equivalent of slurm job arrays for PBS?

@TomNicholas (Collaborator)

I've been looking into this more, and all of these look potentially interesting to us!

perhaps there is an equivalent of slurm job arrays for PBS?

There is, and it looks very similar. Perhaps a PBSExecutor and SlurmExecutor could both inherit from an abstract JobArrayExecutor...

I actually started sketching this out, borrowing heavily from dask-jobqueue, which defines common base classes for different HPC cluster classes.
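For concreteness, here is a minimal sketch of what that shared base class might look like. All of the class and method names are hypothetical (loosely modelled on how dask-jobqueue defines a per-scheduler job class); the sbatch/qsub flags are the standard Slurm and PBS Pro job-array options.

```python
import subprocess
from abc import ABC, abstractmethod


class JobArrayExecutor(ABC):
    """Hypothetical base class: run each Cubed stage as one job array."""

    @abstractmethod
    def submit_array(self, script: str, num_tasks: int) -> str:
        """Submit `script` as an array of `num_tasks` jobs and return the job id."""


class SlurmExecutor(JobArrayExecutor):
    def submit_array(self, script: str, num_tasks: int) -> str:
        # Slurm exposes the task index to each job as $SLURM_ARRAY_TASK_ID
        out = subprocess.run(
            ["sbatch", "--parsable", f"--array=0-{num_tasks - 1}", script],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()


class PBSExecutor(JobArrayExecutor):
    def submit_array(self, script: str, num_tasks: int) -> str:
        # PBS Pro exposes the task index to each job as $PBS_ARRAY_INDEX
        out = subprocess.run(
            ["qsub", "-J", f"0-{num_tasks - 1}", script],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()
```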

But I got stuck when I realised that:

  1. I need a way to propagate both the environment and the context of an arbitrary function (e.g. one passed to apply_gufunc) to the bash script that each job runs (see the sketch after this list),
  2. It might be really slow to have to wait for the queuing system to start running the tasks between every stage. Ideally, once we have been allocated resources we don't really want to release them. With cloud serverless this doesn't matter so much, because the time between requesting a worker and getting one is extremely short, but in an HPC queue it could literally be hours. A lot of real-world workloads are going to require the most resources at the start - could we imagine that e.g. a reduction with 3 rounds gets 100 workers, then keeps hold of 10 for the next round, then keeps hold of 1 for the final round?
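On point 1, one possible approach (just a sketch, not anything Cubed does today; the function and file names are made up) is to serialise the function and its per-task arguments with cloudpickle, write them somewhere the job array can see, and have each array job deserialise and run its own task:

```python
import sys

import cloudpickle


def write_stage(path, func, tasks):
    """Serialise a function plus its per-task arguments for a job array to pick up."""
    with open(path, "wb") as f:
        cloudpickle.dump((func, tasks), f)


def run_task(path, task_index):
    """Entry point each array job runs, e.g.
    `python worker.py stage.pkl $SLURM_ARRAY_TASK_ID`."""
    with open(path, "rb") as f:
        func, tasks = cloudpickle.load(f)
    return func(tasks[task_index])


if __name__ == "__main__":
    run_task(sys.argv[1], int(sys.argv[2]))
```

Activating the right conda/virtualenv environment would still have to happen in the generated bash script itself, before Python starts.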

funcX: https://funcx.org/

funcX has recently become Globus Compute, a new offering from the same people who run the widely-used Globus file transfer service. The Globus Compute Executor class is a subclass of Python's concurrent.futures.Executor, so presumably it could be used similarly to the existing ProcessesExecutor?

Globus Compute requires an endpoint to be set up (i.e. on the HPC system of interest), and it's pretty new, so I don't know if it will be set up anywhere yet. Then it requires a Globus account (is that free?). There is a very small public tutorial endpoint we could try using for testing though.
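If anyone wants to try it, basic usage looks roughly like this (a sketch based on the Globus Compute SDK's Executor; the endpoint UUID below is a placeholder for the tutorial endpoint or one we set up ourselves):

```python
from globus_compute_sdk import Executor

# Placeholder - would be the tutorial endpoint UUID, or an endpoint
# set up on the HPC system of interest
ENDPOINT_ID = "00000000-0000-0000-0000-000000000000"


def double(x):
    return 2 * x


with Executor(endpoint_id=ENDPOINT_ID) as ex:
    future = ex.submit(double, 21)  # same submit/future API as concurrent.futures
    print(future.result())          # -> 42
```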

Parsl

This seems quite similar to Globus Compute, except that it uses a decorator-centric Python API, a bit like Modal stubs. It also requires setup, but it's already available on two large machines we use at [C]Worthy (Expanse and Perlmutter), which I might be able to get access to (full list of machines here). Perlmutter has a nice docs page about how to use Parsl on their system.

It solves the function context issue by restricting each app function to knowing only about the local variables defined within it - see docs.
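A minimal sketch of that pattern, adapted from the style shown in the Parsl docs (the local-threads config here is just for trying it out on a laptop; on Slurm or PBS it would be swapped for a config using the appropriate provider):

```python
import parsl
from parsl import python_app
from parsl.configs.local_threads import config

parsl.load(config)


@python_app
def double(x):
    # Any imports the app needs must happen inside the function body -
    # it can only rely on its own locals, which is how Parsl sidesteps
    # the function-context problem described above.
    return 2 * x


future = double(21)     # returns an AppFuture immediately
print(future.result())  # -> 42
```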

This feature also sounds really useful:

[Parsl] avoids long job scheduler queue delays by acquiring one set of resources for the entire program and it allows for scheduling of many tasks on individual nodes.


I would like to try one or two of these out. If we can get Cubed to run reasonably well on HPC that would be a big deal, and worth advertising on its own.
