Add CI on LC Gitlab #582
LGTM
I really like the reorganized CI scripts! Thanks!
```diff
@@ -120,7 +120,7 @@ config:
   # - If set to 16 on a 4 cores machine `spack install` will run `make -j4`
   # - If set to 16 on a 18 cores machine `spack install` will run `make -j16`
   # If not set, Spack will use all available cores up to 16.
-  build_jobs: 8
+  build_jobs: 64
```
This sets the max for all builds, but you can also change it per platform by adding a config.yaml.
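For illustration, a per-platform override might look like the sketch below. The path and the value of 36 are hypothetical, not part of this PR; Spack merges a `config.yaml` found in a higher-priority scope over the shared default shown in the diff above:

```yaml
# Hypothetical platform-specific config.yaml, placed in a Spack
# configuration scope for one machine; overrides the shared default.
config:
  build_jobs: 36   # cap at the core count of this platform's nodes
```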
I changed it to take advantage of the full node allocated to the build, on both machines.
Yes, it’s the "max": if fewer cores are available (quartz has 36), the build will use all of them (-j36), not 64.
Isn’t that OK for every machine?
All cores would be bad if someone builds on a login node.
Oops
Limiting the number of cores is a shame in a CI context. But since someone could potentially build on a login node on any machine, should I go back to 8 cores? I’m not even sure a process using all cores would be allowed on a login node.
Could we "encourage" people to build on allocated resources? I can document it.
In other CI contexts (Azure, Travis) we have very limited resources; in other projects we have run out of RAM even when using multi-threaded builds.
You can always use a custom config for these CI tests?
I see your point, and here is where I am in my thinking:
- Definitely, 64 should not be applied to all contexts, as is the case right now
  -> Use a specific config for quartz and lassen
- But even on quartz and lassen, 64 is not desirable when building on a login node
  -> Either don’t build on login nodes, or patch config.yaml on the fly for CI.
If we choose to patch on the fly, we don’t even need a specific config for lassen and quartz.
Lastly, I don’t see another way to customize the Spack config for the CI context, because the Spack config is determined by uberenv using the machine info "only".
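The patch-on-the-fly option could be as simple as the sketch below. This is not the actual CI script: the `./spack-ci-config` scope name and the use of `nproc` are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical CI step: derive build_jobs from the cores actually
# available to the job instead of hard-coding 64 in config.yaml.
CORES=$(nproc)
mkdir -p ./spack-ci-config
cat > ./spack-ci-config/config.yaml <<EOF
config:
  build_jobs: ${CORES}
EOF
# A Spack invocation could then prepend this scope, e.g.:
#   spack -C ./spack-ci-config install ...
```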
But the CI script is always running in CI, so it could select a config-ci.yaml, or something?
Not picky about the CI solution, but I do want to avoid 64 (max) for all configs.
Done.
Thanks! I’m glad you like it!
LGTM
Failures are due to fetching instabilities in Spack. The failing tests passed with the previous commit, and the changes introduced have no impact outside the GitLab CI, which passed.
Thanks!
This PR is the "Add CI" part of #517.
Uberenv has only been slightly modified, to fix a bug with mirroring.
The Spack configuration has been updated based on the Ascent one.
The failing jobs in CI are marked as "Allowed failure" when the cause is identified as beyond the scope of this PR.