
Improve slurm service configuration #11870

Merged: 2 commits into NixOS:master on Dec 25, 2015

Conversation

lsix
Member

@lsix lsix commented Dec 21, 2015

With the way slurm is currently configured, I cannot run user-space commands (such as sinfo) without specifying where the slurm conf file is; pointing the services at it is not enough.

I see 3 options to solve this:
— Always have slurm read its configuration from /etc/slurm.conf (and maintain the config file with environment.etc."slurm.conf", I guess)
— Have the default environment define SLURM_CONF as a system-wide variable (the only place I can think of is /etc/profile, but I do not see a convenient way to do that since it is produced in a bash-related derivation)
— Generate in the store a folder containing the configuration and use it as sysconfdir when building slurm (but a new configuration implies a new folder to store it, and therefore a new derivation instantiation of slurm itself)

In this PR, I opted for the 3rd option, but it will necessarily force a recompilation of slurm for all end users whenever they change its configuration. If another option should be preferred, please let me know.
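The 3rd option could look roughly like the following sketch. This is hypothetical illustration, not the code from this PR; it assumes slurm honours the standard autoconf --sysconfdir flag and that the module defines cfg.extraConfig:

```nix
# Hypothetical sketch of option 3. A store path holding slurm.conf is
# baked into the build as sysconfdir, so every configuration change
# produces a new slurm derivation, i.e. a rebuild.
let
  configFile = pkgs.writeText "slurm.conf" cfg.extraConfig;
  etcSlurm = pkgs.runCommand "etc-slurm" { } ''
    mkdir -p $out
    cp ${configFile} $out/slurm.conf
  '';
  slurmConfigured = cfg.package.overrideAttrs (old: {
    # assumption: slurm's build accepts the usual autoconf flag
    configureFlags = (old.configureFlags or [ ]) ++ [ "--sysconfdir=${etcSlurm}" ];
  });
in ...
```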

@mention-bot

By analyzing the blame information on this pull request, we identified @jagajaga to be a potential reviewer

@jagajaga jagajaga self-assigned this Dec 21, 2015
@jagajaga
Member

I think it's better to do a trick like the one here: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/x11/display-managers/slim.nix#L130. It would be like your option 2.
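A minimal sketch of that system-wide-variable approach, assuming the module defines a configFile (environment.variables is the standard NixOS option for session-wide environment variables):

```nix
# Sketch only: export SLURM_CONF for every login session so that
# user-space tools such as sinfo find the generated configuration
# without rebuilding the slurm package.
environment.variables.SLURM_CONF = "${configFile}";
```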

@copumpkin
Member

@lancelotsix if you need the slurm utilities to know where the conf file is, you can use the existing wrapper machinery to wrap the individual utilities and set an environment variable for them
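That wrapper machinery could be sketched like this (hypothetical names; makeWrapper is the standard nixpkgs helper, and configFile is assumed to be defined by the module):

```nix
# Sketch: wrap every slurm binary so it sees SLURM_CONF, leaving the
# original package untouched (no rebuild on configuration changes).
wrappedSlurm = pkgs.runCommand "wrapped-slurm"
  { nativeBuildInputs = [ pkgs.makeWrapper ]; } ''
  mkdir -p $out/bin
  for exe in ${cfg.package}/bin/*; do
    makeWrapper "$exe" "$out/bin/$(basename "$exe")" \
      --set SLURM_CONF "${configFile}"
  done
'';
```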

@lsix lsix changed the title Improve slurm service configuration [WIP] Improve slurm service configuration Dec 22, 2015
@lsix
Member Author

lsix commented Dec 22, 2015

Thanks for the feedback. I'll change that ASAP.

@lsix lsix force-pushed the improve_slurm_service branch 4 times, most recently from bd7d6b7 to fe0487e on December 24, 2015 at 07:28
@lsix
Member Author

lsix commented Dec 24, 2015

I pushed a wrapper-based update.

The review comment below is attached to this hunk:

```nix
config = mkIf (cfg.client.enable || cfg.server.enable) {
  config =
    let
      wrappedSlurm = pkgs.stdenv.mkDerivation {
```
Member

Users will be able to run srun?

Member Author

Yes. If you look here, you see that cfg.package is not added to environment.systemPackages; only wrappedSlurm is. So unless the administrator explicitly adds pkgs.slurm-llnl to the system (which would create conflicts), srun is available via the wrapper script.

The test should show that it works (in this case root runs srun; I can improve it to have a regular user run srun if you prefer).
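The point being made can be sketched as follows (a hypothetical excerpt, not the PR's exact code):

```nix
# Only the wrapped package is installed system-wide; the unwrapped
# cfg.package stays out of environment.systemPackages, so srun, sinfo,
# etc. always resolve to the wrappers that set SLURM_CONF.
environment.systemPackages = [ wrappedSlurm ];
```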

@lsix lsix changed the title [WIP] Improve slurm service configuration Improve slurm service configuration Dec 24, 2015
@jagajaga
Member

👍

Please rename the commit to slurm service: improve config, add tests

@lsix
Member Author

lsix commented Dec 25, 2015

I split it into 2 commits: one for the configuration update and one for the test additions.

jagajaga added a commit that referenced this pull request Dec 25, 2015
@jagajaga jagajaga merged commit 7e14e28 into NixOS:master Dec 25, 2015