
HPC executor #467

Open
tomwhite opened this issue May 21, 2024 · 9 comments
@tomwhite
Member

Some possibilities:

cc @TomNicholas

@TomNicholas
Member

The main NCAR machine I have access to uses PBS - perhaps there is an equivalent of slurm job arrays for PBS?

@TomNicholas
Member

I've been looking into this more, and all of these look potentially interesting to us!

perhaps there is an equivalent of slurm job arrays for PBS?

There is, and it looks very similar. Perhaps a PBSExecutor and SlurmExecutor could both inherit from an abstract JobArrayExecutor...
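A shared base could be quite thin; here is a minimal sketch (the class and method names are hypothetical, not from Cubed) in which each scheduler subclass only supplies its submission command and its array-index environment variable:

```python
from abc import ABC, abstractmethod


class JobArrayExecutor(ABC):
    """Hypothetical base class for executors that map one stage of
    tasks onto a single scheduler job array."""

    @abstractmethod
    def submit_command(self, script: str, n_tasks: int) -> list[str]:
        """Command line that submits `script` as an array of n_tasks jobs."""

    @abstractmethod
    def task_index_var(self) -> str:
        """Environment variable holding the array index inside a task."""


class SlurmExecutor(JobArrayExecutor):
    def submit_command(self, script, n_tasks):
        return ["sbatch", f"--array=0-{n_tasks - 1}", script]

    def task_index_var(self):
        return "SLURM_ARRAY_TASK_ID"


class PBSExecutor(JobArrayExecutor):
    def submit_command(self, script, n_tasks):
        # PBS also calls these "job arrays"; they are submitted with -J
        return ["qsub", "-J", f"0-{n_tasks - 1}", script]

    def task_index_var(self):
        return "PBS_ARRAY_INDEX"
```

The scheduler-specific surface really is that small: the flags (`sbatch --array`, `qsub -J`) and index variables differ, but the array semantics are near-identical.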

I actually started sketching this out, borrowing heavily from dask-jobqueue, which defines common base classes for different HPC cluster classes.

But got stuck when I realised that:

  1. I need a way to propagate both the environment and the context of an arbitrary function (e.g. in apply_gufunc) to the bash script that each job runs,
  2. It might be really slow to have to wait for the queuing system to start running the tasks between every stage. Ideally, once we have been allocated resources we don't want to release them. With cloud serverless this doesn't matter so much, as the time between requesting a worker and getting one is extremely short, but on an HPC queue it could literally be hours. A lot of real-world workloads are going to require the most resources at the start - could we imagine that e.g. a reduction with 3 rounds gets 100 workers, then keeps hold of 10 for the next round, then keeps hold of 1 for the final round?
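The shrinking allocation in point 2 is easy to state concretely; a toy sketch (the function name and the fixed fan-in model are made up for illustration) of how many workers each round of a tree reduction can actually keep busy:

```python
def workers_per_round(n_inputs: int, fan_in: int) -> list[int]:
    """Number of parallel tasks in each round of a tree reduction,
    assuming each task combines `fan_in` inputs into one output."""
    rounds = []
    while n_inputs > 1:
        n_tasks = -(-n_inputs // fan_in)  # ceiling division
        rounds.append(n_tasks)
        n_inputs = n_tasks
    return rounds


workers_per_round(1000, 10)  # → [100, 10, 1]
```

So an executor that held on to its allocation could release 90% of it after each round rather than re-queueing, which is exactly the 100 → 10 → 1 pattern described above.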

funcX: https://funcx.org/

funcX has recently become Globus Compute, a new offering from the same people who run the widely-used Globus file transfer service. The Globus Compute Executor class is a subclass of Python’s concurrent.futures.Executor, so presumably it could be used similarly to the existing ProcessesExecutor?

Globus Compute requires an endpoint to be set up (i.e. on the HPC system of interest), and it's pretty new, so I don't know if it will be set up anywhere yet. Then it requires a Globus account (is that free?). There is a very small public tutorial endpoint we could try using for testing, though.
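Because Globus Compute's Executor subclasses concurrent.futures.Executor, any driver code written against the base interface should accept it unchanged. A sketch of that shape, using the stdlib ThreadPoolExecutor as a stand-in (the run_stage helper is hypothetical, not Cubed's API):

```python
from concurrent.futures import Executor, ThreadPoolExecutor


def run_stage(executor: Executor, func, inputs):
    """Run one stage of tasks on any concurrent.futures.Executor.

    A Globus Compute Executor (or any other subclass) could be
    passed here in place of the stdlib pools.
    """
    futures = [executor.submit(func, x) for x in inputs]
    return [f.result() for f in futures]


with ThreadPoolExecutor(max_workers=4) as pool:
    results = run_stage(pool, lambda x: x * x, range(5))
# results == [0, 1, 4, 9, 16]
```

That is the appeal of the shared interface: the stage-running logic would not need to know which backend it is talking to.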

Parsl

This seems quite similar to Globus Compute, except that it uses a decorator-centric Python API, a bit like Modal stubs. It also requires setup, but it's already available on two large machines we use at [C]Worthy (Expanse and Perlmutter), which I might be able to get access to (full list of machines here). Perlmutter has a nice docs page about how to use Parsl on their system.

It solves the function context issue by restricting functions to knowing only about their local variables - see docs.

This feature also sounds really useful:

[Parsl] avoids long job scheduler queue delays by acquiring one set of resources for the entire program and it allows for scheduling of many tasks on individual nodes.


I would like to try one or two of these out. If we can get Cubed to run reasonably well on HPC that would be a big deal, and worth advertising on its own.

@mgrover1

After some discussion at SciPy, globus-compute seems like a solid option for executing functions on HPC.

Here is a link to a cookbook using it to remotely submit jobs to an HPC cluster at a DOE lab

It might be helpful to use this in addition to Parsl, to allow more function-based resource specification, as they show in the docs here

Happy to discuss more with you all - I hope this provides a good starting point!

@TomNicholas
Member

@jakirkham what was the library you were talking about earlier with the concurrent futures-like API but for HPC?

@jakirkham

Was thinking of @applio's team's work on Dragon, which provides a multiprocessing style API

AIUI this could be used with concurrent.futures or Parsl (mentioned above), though Davin would be able to say more about how to use it

https://github.com/DragonHPC/dragon

@negin513

Essentially there are two pieces needed:

  • an endpoint or SSH config setup to access your HPC
  • orchestration and interactions with schedulers. I think Dragon can do this part too. Does Dragon require Docker? If so, that needs to be changed to Singularity/Podman for HPC.

@TomNicholas
Member

I chatted with @applio earlier and he also mentioned some kind of distributed dict, which we could potentially use as the storage layer via Zarr. Was that a part of Dragon too @applio?

@jakirkham

Ah, forgot to mention that MPI4Py also provides concurrent.futures-style executors (e.g. mpi4py.futures.MPIPoolExecutor)

@tomwhite
Member Author

I added some notes on how to write a new executor in #498.
