
add apptainer explicitly to the ldmx-env.sh script #1248

Open · 8 of 9 tasks
tomeichlersmith opened this issue Jan 5, 2024 · 2 comments · May be fixed by #1269

Comments

tomeichlersmith (Member) commented Jan 5, 2024

Is your feature request related to a problem? Please describe.
@jpasc27 informed me that UVA recently updated from singularity to apptainer on their cluster. This led to a discussion about including apptainer explicitly in the environment script so that folks don't accidentally use the old version of singularity. apptainer installs a symlink named singularity, which has prevented this from being an issue in the past. For example, at UMN:

eichl008@spa-cms017 ~> ls -alh $(which -a singularity)
lrwxrwxrwx 1 root root 9 Dec  6 04:15 /usr/bin/singularity -> apptainer
eichl008@spa-cms017 ~> singularity --version
apptainer version 1.2.5-1.el7 

I've already done this within denv #1232 so it should be pretty easy for me to do this here. I'm just putting notes here for now because I'm not sure if all of our institutions have moved to apptainer yet.

  • UMN
  • UVA
  • SLAC (x3)
  • CERN lxplus
  • UCSB
  • Lund
  • CalTech
  • FermiLab
  • ...other clusters I've forgotten about?

Describe the solution you'd like
This update is only necessary when both a relic singularity and a new apptainer are installed side by side.
Here are the locations in the ldmx-env.sh script that (I think) would need to be updated (untested):

__ldmx_has_required_engine() {
  if hash docker &> /dev/null; then
    return 0
  elif hash singularity &> /dev/null; then
    return 0
  else
    return 1
  fi
}

include an apptainer elif check before singularity
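A minimal sketch of that change (untested), assuming apptainer ends up on the PATH wherever it is installed:

__ldmx_has_required_engine() {
  if hash docker &> /dev/null; then
    return 0
  elif hash apptainer &> /dev/null; then
    # check for apptainer before singularity so clusters with both
    # prefer the newer runner
    return 0
  elif hash singularity &> /dev/null; then
    return 0
  else
    return 1
  fi
}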

elif hash singularity &> /dev/null; then

have this be apptainer OR singularity since their CLIs are so similar.
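For example (untested sketch; LDMX_CONTAINER_RUNNER is a hypothetical variable name, introduced here so later steps can tell which flavour was found):

# accept either flavour since their CLIs are nearly identical,
# remembering which one is actually available
elif hash apptainer &> /dev/null || hash singularity &> /dev/null; then
  if hash apptainer &> /dev/null; then
    LDMX_CONTAINER_RUNNER="apptainer"
  else
    LDMX_CONTAINER_RUNNER="singularity"
  fi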

ldmx-sw/scripts/ldmx-env.sh

Lines 183 to 185 in a0a28a3

# change cache directory to be inside ldmx base directory
export SINGULARITY_CACHEDIR=${LDMX_BASE}/.singularity
mkdir -p ${SINGULARITY_CACHEDIR} #make sure cache directory exists

apptainer renamed this env var to APPTAINER_CACHEDIR, so to avoid a warning during some runs, we should change this name if the runner is apptainer.
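A sketch of that change (untested), reusing the hypothetical LDMX_CONTAINER_RUNNER variable from the sketch above:

# change cache directory to be inside ldmx base directory,
# using the env var name that matches the runner
if [ "${LDMX_CONTAINER_RUNNER}" = "apptainer" ]; then
  export APPTAINER_CACHEDIR=${LDMX_BASE}/.apptainer
  mkdir -p ${APPTAINER_CACHEDIR} #make sure cache directory exists
else
  export SINGULARITY_CACHEDIR=${LDMX_BASE}/.singularity
  mkdir -p ${SINGULARITY_CACHEDIR} #make sure cache directory exists
fi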

Describe alternatives you've considered
An alternative is to switch to denv #1232, but that is a bigger change than just adding another runner explicitly.

tomeichlersmith (Member, Author) commented Jan 15, 2024

Keeping track of versions floating around for compatibility. Many of these clusters have auto-updating enabled and so the version numbers may change. For our purposes, I'm mainly interested in which flavour[^1] of singularity is being used since that is the largest compatibility hurdle we currently face.

| Institution | Cluster | Access[^2]              | Flavour[^1] | Version                                     |
| ----------- | ------- | ----------------------- | ----------- | ------------------------------------------- |
| UMN         | SPA OSG | native                  | apptainer   | 1.2.5                                       |
| UMN         | MSI     | native                  | apptainer   | 1.2.5                                       |
| SLAC        | S3DF    | native                  | apptainer   | 1.2.5                                       |
| SLAC        | SDF     | native                  | singularity | 3.8.7                                       |
| SLAC        | Cent7   | native                  | apptainer   | 1.2.5                                       |
| CERN        | lxplus  | native                  | apptainer   | 1.2.5                                       |
| UCSB        | pod     | module load apptainer   | apptainer   | 1.3.0-rc1 (reported as 1.1.5+357-g411e5c0)  |
| UCSB        | pod     | module load singularity | singularity | 3.5.2                                       |
| UVA         | Rivanna | module load apptainer   | apptainer   | 1.2.2                                       |
| Lund        | LUNARC  | native                  | apptainer   | 1.2.4                                       |
| CalTech     |         | native                  | apptainer   | 1.2.5                                       |
| FermiLab    | LPC     | native                  | apptainer   | 1.2.5                                       |

Footnotes

[^1]: By "flavour" I mean the different forks of singularity. The original singularity (now moved to apptainer/singularity on GitHub) is called "singularity". The rebranding of this flavour to apptainer upon its adoption by the Linux Foundation is called "apptainer". The sylabs/singularity fork of singularity is called "sylabs".

[^2]: By "access", I mean how the flavour of singularity is accessed. Some clusters choose to install packages "natively" (not sure what the word is), such that the package is available immediately upon connection without any further environment modification. Others use a package-loading system, such that a specific command needs to be run in order to gain access to a flavour of singularity.

tomeichlersmith (Member, Author) commented:
It looks like many of these clusters use module as a means for gaining access to apptainer. A QoL feature for the ldmx-env.sh script could be to attempt module load apptainer when apptainer isn't accessible yet.
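A minimal sketch of that fallback (untested; assumes the cluster provides the module command and silently moves on otherwise):

# if apptainer isn't already available, try the cluster's module
# system before giving up
if ! hash apptainer &> /dev/null && command -v module &> /dev/null; then
  module load apptainer &> /dev/null || true
fi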
