treewide: require big-parallel on large, slow builds #120679
Conversation
On x86_64-linux it takes 3h+ with a 2-core build, and 24m on a big-parallel builder. On aarch64-linux, it times out with 2 cores.
It takes 7h+ on a "normal" 2-core-allocated Packet builder, and 20m on a big-parallel machine.
This compiles in ~2h on a 2-core builder, and 10m on a big-parallel machine.
This compiles in usually about 2h15m with a 2-core build, but about 10m on a big-parallel machine.
Compiles in about 2h50m on a 2-core builder, and 20m on a big-parallel machine.
Latest results are in: ceph built in 13m (prev: 4h6m)
Can we revert this change for aws-sdk-cpp? I just built aws-sdk-cpp on a 4-core machine with 32GB RAM, and it was already under 50% load within 2 minutes. I don't think that qualifies as big-parallel when chrome takes 30+ minutes on a 64-core machine with 64GB.
6C12T (i7-8700K) with 32GB RAM. I disagree.
Good catch! 🤔 I was building it as part of https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/package-management/nix/default.nix#L22-L36 Maybe we should only remove it as part of the override. Edit: let's move the discussion to the PR you already found. #163313
Motivation for this change
We have a few "large" packages (primarily compilers, web browsers, and statistical software) which take a long time to compile. To avoid them timing out, we would prefer that those with a clear benefit get scheduled onto a "big-parallel" Hydra builder, which lets them use all of the cores available on the machine rather than the 2 available on a standard builder.
This PR targets some specific packages where the compile times are >2h when run on a two-core builder, and <=~20m when on a big-parallel machine.
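Concretely, the change is a sketch along these lines: `requiredSystemFeatures` is the derivation attribute Hydra matches against a builder's advertised features, so marking a derivation with it keeps the job off standard 2-core builders (the package shown is hypothetical):

```nix
# Hypothetical derivation illustrating the attribute this PR adds to
# the affected packages. A job carrying this attribute is only
# dispatched to builders that advertise the "big-parallel" feature.
stdenv.mkDerivation {
  pname = "example-large-package";  # hypothetical name
  version = "1.0";
  src = ./.;
  requiredSystemFeatures = [ "big-parallel" ];
}
```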
The goal (although I haven't thought about the scheduling theory too much...) is to keep overall eval times down: if one of these packages is changed "alone", then building it on a more capable machine means the eval will complete, and thus the channel will advance, faster. It also keeps these packages from timing out (although only clickhouse is really close to that on x86_64).
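On the builder side, the feature is advertised via the `system-features` setting; a minimal sketch, shown here as a `nix.conf` line (one of the places it can be set):

```
# nix.conf on a big-parallel builder: jobs requiring a system feature
# not listed here will not be scheduled onto this machine.
system-features = kvm big-parallel
```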
Things done
- [ ] Tested using sandboxing (`nix.useSandbox` on NixOS, or option `sandbox` in `nix.conf` on non-NixOS linux)
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Determined the impact on package closure size (by running `nix path-info -S` before and after)