
treewide: require big-parallel on large, slow builds #120679

Merged · 5 commits merged on Apr 26, 2021

Conversation

lukegb (Contributor) commented Apr 26, 2021

Motivation for this change

We have a few "large" packages (primarily compilers, web browsers, and statistical software) that take a long time to compile. To avoid them timing out, we would prefer that those with a decent benefit get scheduled onto a "big-parallel" Hydra builder, which lets them use all of the cores available on the machine rather than the 2 available on a standard builder.

This PR targets specific packages whose compile times are >2h on a two-core builder and roughly 20m or less on a big-parallel machine.

The goal (although I haven't thought about the scheduling theory too much) is to keep the overall eval times down: if these packages are changed "alone", building them on a more capable machine means the eval completes (and thus the channel advances) faster. It also keeps these packages from timing out (although only clickhouse is really close to that on x86-64).
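For readers unfamiliar with the mechanism, here is a minimal sketch of how a package opts into big-parallel. The package itself ("somepkg") is hypothetical; the relevant attributes, enableParallelBuilding and requiredSystemFeatures, are the standard nixpkgs/Hydra conventions:

```nix
# Minimal sketch of a derivation that requests a big-parallel builder.
# "somepkg" is a hypothetical example.
{ stdenv }:

stdenv.mkDerivation {
  pname = "somepkg";
  version = "1.0";
  src = ./.;  # placeholder source

  # Use all cores the builder exposes, instead of the fixed 2 of a standard builder.
  enableParallelBuilding = true;

  # Only schedule this derivation on machines that advertise the
  # "big-parallel" system feature.
  requiredSystemFeatures = [ "big-parallel" ];
}
```

On the builder side, machines advertise the feature via the system-features setting in nix.conf (e.g. a line containing big-parallel among its values).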

Things done
  • Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS Linux)
  • Built on platform(s)
    • NixOS
    • macOS
    • other Linux distributions
  • Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • Tested compilation of all pkgs that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
  • Tested execution of all binary files (usually in ./result/bin/)
  • Determined the impact on package closure size (by running nix path-info -S before and after)
  • Ensured that relevant documentation is up to date
  • Fits CONTRIBUTING.md.

Commit messages:
  • It takes 3h+ for a 2-core build and 24m for a big-parallel build on x86_64. For aarch64-linux, it times out with 2 cores.
  • It takes 7h+ on a "normal" 2-core-allocated Packet builder, and 20m on a big-parallel machine.
  • This compiles in ~2h on a 2-core builder, and 10m on a big-parallel machine.
  • This usually compiles in about 2h15m with a 2-core build, but about 10m on a big-parallel machine.
  • Compiles in about 2h50m on a 2-core builder, and 20m on a big-parallel machine.
lukegb (Contributor, Author) commented Apr 26, 2021

Latest results are in:

ceph built in 13m (prev: 4h6m)
clickhouse built in 21m (prev: 7h55m)
aws-sdk-cpp built in 10m (prev: 1h43m)
pytorch built in 12m (prev: 2h16m)
qemu/qemu_full/qemu_kvm built in 16m/15m/4m (prev: 2h58m/2h16m/26m)

lukegb deleted the big-parallel branch April 26, 2021 17:05
SuperSandro2000 (Member) commented Mar 8, 2022

Can we revert this change for aws-sdk-cpp? I just built aws-sdk-cpp on a 4-core machine with 32GB of RAM, and it was already under 50% load within 2 minutes. I don't think that qualifies as big-parallel when chrome takes 30+ minutes on a 64-core machine with 64GB.

mweinelt (Member) commented Mar 8, 2022

6C12T (i7-8700K) with 32GB RAM.
Six minutes into the build, earlyoom killed it at 46%. Peak memory usage was apparently >20GB.

I disagree.

SuperSandro2000 (Member) commented Mar 8, 2022

Good catch! 🤔 I was building it as part of https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/package-management/nix/default.nix#L22-L36

Maybe we should only remove it as part of the override.
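For illustration only (not something decided in this PR), dropping the requirement just for the trimmed-down build used by nix could look roughly like the overlay below. The attribute name aws-sdk-cpp-for-nix is hypothetical, and the override arguments are meant to mirror the linked default.nix; treat this as a sketch, not a concrete proposal:

```nix
# Sketch of an overlay: the full aws-sdk-cpp keeps big-parallel, while the
# reduced build (only the APIs nix needs) drops it, since it is small enough
# for an ordinary 2-core builder.
final: prev: {
  aws-sdk-cpp-for-nix = (prev.aws-sdk-cpp.override {
    apis = [ "s3" "transfer" ];
    customMemoryManagement = false;
  }).overrideAttrs (old: {
    # Remove the big-parallel requirement for this trimmed-down variant only.
    requiredSystemFeatures = [ ];
  });
}
```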

Edit: let's move the discussion to the PR you already found. #163313
