
Split up this package #58

Open
andreasnoack opened this issue Feb 17, 2017 · 9 comments

Comments

@andreasnoack
Member

There is not much shared code between the managers, and most of us only use a single workload/cluster manager, so it is difficult to review PRs.

@azraq27
Contributor

azraq27 commented Feb 20, 2017

That's a good point. Any code that actually is shared should probably be submitted to Base instead of keeping it here.

@amitmurthy
Contributor

Should the split packages live with individual contributors or under JuliaParallel? The maintainers of the separate cluster managers ought to be users of the specific manager.

@kleinhenz

I just created SlurmClusterManager.jl if anyone is interested in giving it a try.

@kescobo
Collaborator

kescobo commented May 22, 2020

@vchuravy
Member

> I just created SlurmClusterManager.jl if anyone is interested in giving it a try.
>
> Requires that SlurmManager be created inside a Slurm allocation created by sbatch/salloc. Specifically, SLURM_JOBID and SLURM_NTASKS must be defined in order to construct SlurmManager. This matches typical HPC workflows, where resources are requested using sbatch and then used by the application code. In contrast, ClusterManagers.jl will dynamically request resources when run outside of an existing Slurm allocation. I found that this was basically never what I wanted, since it leaves the manager process running on a login node and makes the script wait until resources are granted, which is better handled by the actual Slurm queueing system.

Oh so much yes! ;)
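For context, the sbatch-first workflow described above might look something like the following sketch (the file name, task count, and `--wrap` invocation are illustrative, not taken from the package docs):

```julia
# job.jl -- meant to run inside an existing Slurm allocation, e.g. via:
#   sbatch --ntasks=4 --wrap="julia job.jl"
# SlurmClusterManager reads SLURM_JOBID and SLURM_NTASKS from the
# environment, so constructing SlurmManager outside an allocation errors
# instead of silently queueing a new resource request.
using Distributed
using SlurmClusterManager  # assumes the package is installed

addprocs(SlurmManager())   # one worker per Slurm task in the allocation

@everywhere println("worker $(myid()) on $(gethostname())")
```

The point of the design is that Slurm, not the Julia script, handles queueing: by the time `julia job.jl` starts, the resources are already granted.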

@juliohm
Collaborator

juliohm commented Oct 6, 2020

We are barely able to maintain a single repository with working versions of the managers. My opinion is that we should unite efforts and collect people with similar skills here to watch out for improvements made to particular managers. Also, from the user's point of view, it is annoying to have a different environment depending on where the script is to be run. Right now we can simply do `]add ClusterManagers` and move on.

@juliohm juliohm closed this as completed Oct 6, 2020
@bjarthur
Collaborator

@juliohm I disagree, and so do many others, I think. My view is that ClusterManagers.jl works as is, and so we should leave it be. If we want to make changes, then I would prefer to split it up rather than unify the code base as you propose in #145. Re-opening this issue.

@bjarthur bjarthur reopened this Oct 15, 2020
@juliohm
Collaborator

juliohm commented Oct 15, 2020

You mean you agree that we should split this package into multiple packages for specific managers @bjarthur?

@mashu

mashu commented Apr 16, 2024

Perhaps a common abstract interface should be put in place so that managers can implement it? I was looking for a SLURM manager, but it's very confusing which one I should use.
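For reference, Julia's Distributed stdlib already defines such an abstract type: a custom manager subtypes `Distributed.ClusterManager` and implements `launch` and `manage`, which is the interface both ClusterManagers.jl and SlurmClusterManager.jl build on. A minimal skeleton (the `MyManager` name and body are illustrative):

```julia
using Distributed
import Distributed: launch, manage

# Skeleton of a custom cluster manager; a real one (Slurm, SGE, ...)
# would submit jobs to the scheduler inside `launch`.
struct MyManager <: ClusterManager
    np::Int
end

function launch(manager::MyManager, params::Dict, launched::Array, c::Condition)
    for _ in 1:manager.np
        wconfig = WorkerConfig()
        # ... start a worker process and fill in wconfig (io, host, ...) ...
        push!(launched, wconfig)
        notify(c)  # lets addprocs consume workers as they come up
    end
end

function manage(manager::MyManager, id::Integer, config::WorkerConfig, op::Symbol)
    # called with op in (:register, :interrupt, :deregister, :finalize)
end
```

With that in place, `addprocs(MyManager(4))` drives the whole lifecycle, so the shared surface between managers is deliberately small.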


9 participants