Add CI on LC Gitlab #582

Merged: 10 commits from feature/add-ci into master on Jul 7, 2020
Conversation

adrienbernede (Member)

This PR is the "Add CI" part of #517.

Uberenv has been only slightly modified, to fix a bug with mirroring.

The Spack configuration has been updated based on Ascent's.

Failing CI jobs are marked as "Allowed failure" when the cause is identified as being beyond the scope of this PR.
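For reference, marking a job as an allowed failure in GitLab CI uses the `allow_failure` keyword; a minimal fragment could look like the following (the job name and script are illustrative, not taken from this PR):

```yaml
# Hypothetical .gitlab-ci.yml fragment: this job may fail without
# failing the pipeline (job name and script are illustrative).
build_quartz:
  script:
    - ./scripts/uberenv/uberenv.py
  allow_failure: true
```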

adrienbernede (Member Author)

LGTM

coveralls commented Jul 3, 2020

Coverage Status

Coverage remained the same at 86.847% when pulling 18a9995 on feature/add-ci into 420c698 on master.

cyrush (Member) left a comment


I really like the reorganized CI scripts! Thanks!

@@ -120,7 +120,7 @@ config:
   # - If set to 16 on a 4 cores machine `spack install` will run `make -j4`
   # - If set to 16 on a 18 cores machine `spack install` will run `make -j16`
   # If not set, Spack will use all available cores up to 16.
-  build_jobs: 8
+  build_jobs: 64
cyrush (Member)


This sets the max for all builds; you can also change it per platform by adding a config.yaml.

adrienbernede (Member Author)


I changed it to take advantage of the full node allocated for builds, on both machines.
Yes, it’s the "max": if fewer cores are available (quartz has 36), the build will use all of them (-j36), not 64.
Isn’t that OK on every machine?

cyrush (Member)


All cores would be bad if someone builds on a login node.

adrienbernede (Member Author) commented Jul 6, 2020


Oops

adrienbernede (Member Author)


Limiting the number of cores is a shame in a CI context. Since someone could potentially build on a login node on any machine, should I go back to 8 cores? I’m not even sure a process using all cores would be allowed on a login node.

Could we "encourage" people to build on allocated resources? I can document it.

cyrush (Member)


In other CI contexts (Azure, Travis) we have very limited resources; in other projects we have run out of RAM even when using multi-threaded builds.

You can always use a custom config for these CI tests?

adrienbernede (Member Author)


I see your point, and here is where I am in my reflections:

  • Definitely, 64 should not be applied to all contexts, as is the case right now
    -> Use a specific config for quartz and lassen
  • But even on quartz and lassen, 64 is not desirable when building on a login node
    -> Either don't build on login nodes, or patch config.yaml on the fly for CI.

If we choose to patch on the fly, we don’t even need a specific config for lassen and quartz.

Lastly, I don’t see another way to customize the Spack config for the CI context, because the Spack config is determined by uberenv using the machine info "only".
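For illustration, the machine-specific option discussed above could be a small override fragment along these lines (hypothetical file, assuming quartz's 36-core nodes as stated earlier in the thread):

```yaml
# Hypothetical quartz-specific Spack config fragment (illustrative):
# cap parallel build jobs at the node's 36 cores instead of a global 64.
config:
  build_jobs: 36
```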

cyrush (Member)


But the CI script is always running in CI, so it could select a config-ci.yaml, or something?

Not picky about the CI solution, but I do want to avoid 64 (max) for all configs.
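A minimal sketch of the selection suggested here, assuming GitLab CI's standard CI=true environment variable; the helper name and file layout are hypothetical, not uberenv's actual API:

```python
import os

def select_spack_config(config_dir, in_ci=None):
    """Pick a CI-specific Spack config when running under GitLab CI
    (which exports CI=true), otherwise the default config.yaml.
    Hypothetical helper -- not part of uberenv's actual API."""
    if in_ci is None:
        in_ci = os.environ.get("CI", "").lower() == "true"
    name = "config-ci.yaml" if in_ci else "config.yaml"
    return os.path.join(config_dir, name)
```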

adrienbernede (Member Author)


Done.

adrienbernede (Member Author)

> I really like the reorganized CI scripts! Thanks!

Thanks! I’m glad you like it!

adrienbernede (Member Author)

Failures are due to fetching instabilities in Spack. The failing tests passed with the previous commit, and the changes introduced have no impact outside GitLab CI, which passed.
Merging.

@adrienbernede adrienbernede merged commit 9d654df into master Jul 7, 2020
@adrienbernede adrienbernede deleted the feature/add-ci branch July 7, 2020 16:42
cyrush (Member) commented Jul 7, 2020

thanks!
