
WM runtime environment


Overview

This wiki page documents the worker node runtime environment required and used by WM central production (and T0) jobs.

Current runtime environment

The WMAgent stack currently depends on the cmsdist repository, where the comp_gcc630 branch is used to build the CMS services with the legacy RPM system. With that, WMAgent is currently supported on:

  • SL7 (Scientific Linux 7)
  • x86_64 (amd64)
  • GCC compiler 6.3.0
  • Python 3.8.2
  • Python future library 0.18.2 (it has no effect in a Python 3 stack, but the code still depends on it)
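
As a quick sanity check, the sketch below prints the versions listed above when executed with the COMP python3 on a worker node; the "expected" values in the comments come from this list and are not guaranteed.

```python
# Minimal sketch to verify the runtime stack listed above; run it with the
# COMP python3. The "expected" values in the comments come from the list above.
import platform
import sys

print(sys.version.split()[0])        # expected: 3.8.2
print(platform.python_compiler())    # expected: GCC 6.3.0
print(platform.machine())            # expected: x86_64

try:
    import future
    print(future.__version__)        # expected: 0.18.2
except ImportError:
    print("python future library not available")
```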

This stack has been deployed in CVMFS (by the CMSSW team and Shahzad) under the following path:

/cvmfs/cms.cern.ch/COMP/slc7_amd64_gcc630/

Note that most of the other OS + ARCH areas are actually symlinks to a given CMSSW area, thus potentially changing the Python 3 version (3.6.4 for RHEL6, 3.8.2 for RHEL7 and RHEL8, 3.9.6 for RHEL9). This is clearly not ideal, because we do not have full control over the Python environment used on the worker nodes, which currently depends solely on the ScramArch requested by a workflow/job.

Future runtime environment

The WM team is aware that a more controlled runtime environment is required, especially because we do not have CI pipelines running against all of the existing CVMFS areas. Another goal of this refactoring is to be able to provide other libraries that might be required at worker node runtime.

We have discussed this within the team and with Shahzad, and the following 3 candidates have been proposed to achieve a more controlled and predictable runtime environment:

  1. CVMFS approach: given that CVMFS is available on every single worker node, we could select a specific path to hold the WM runtime environment, and the CMSSW team (Shahzad) would help us compile the required packages and deploy them in CVMFS. Note that we would need to build/deploy this environment for each OS + ARCH combination though, currently 12 different areas.
  2. Singularity approach: we initially thought that a single singularity image was available for each OS, regardless of the CPU architecture. However, Shahzad clarified that we actually have different images for each OS + ARCH combination. In addition, there is a mix of CMSSW and Grid images, with a desire to replace the Grid ones with those built for CMSSW. This change could potentially affect GlideinWMS and Scram, as both projects depend on Python3.
  3. Wheels package: keep using the current CVMFS setup, but start shipping any extra third-party libraries from the agent to the worker node. This could be done by fetching the wheel package (e.g. for psutil), sending it with the job and adding it to the PYTHONPATH on the worker node (see the sketch after this list). Of course, this incurs extra network traffic between the agent (schedd) and the worker nodes, with an average of 0.5 million transfers every day.
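
As an illustration of option 3, the snippet below is a minimal sketch of how a wheel shipped with the job could be made importable on the worker node; the wheel file name is hypothetical and the actual mechanism (including how binary wheels would be handled) is still to be defined.

```python
# Minimal sketch of the "wheels" approach (option 3). The wheel file name is
# hypothetical; in practice the file would be shipped together with the job.
import os
import sys
import zipfile

wheel_file = "psutil-5.9.5-cp38-cp38-manylinux_2_12_x86_64.whl"  # hypothetical
target_dir = os.path.join(os.getcwd(), "extra_libs")

# A wheel is a zip archive; unpacking it exposes the package directory.
with zipfile.ZipFile(wheel_file) as whl:
    whl.extractall(target_dir)

# Make the unpacked package visible to this process and to any child process.
sys.path.insert(0, target_dir)
os.environ["PYTHONPATH"] = target_dir + os.pathsep + os.environ.get("PYTHONPATH", "")

import psutil  # now resolvable from the unpacked wheel
print(psutil.cpu_count())
```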

After discussing pros and cons, we decided to proceed with the CVMFS approach. With that model, we will be able to use the same Python version across RHEL 7/8/9, with a slight downgrade for RHEL6. In addition to the OS, a specific area in CVMFS will have to be created for each of the 3 CPU architectures: amd64, aarch64 and ppc64le. The CMSSW team (Shahzad) has all the required tooling for building and deploying this stack for the required OS+ARCH combinations.
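
For illustration only, the arithmetic behind the number of areas mentioned above (4 operating systems times 3 CPU architectures); the naming used here is indicative and not the agreed CVMFS layout:

```python
# Illustrative enumeration of the OS + ARCH combinations to be built; the names
# are only indicative and do not represent the final CVMFS directory layout.
oses = ["rhel6", "rhel7", "rhel8", "rhel9"]
arches = ["amd64", "aarch64", "ppc64le"]

areas = [f"{os_name}_{arch}" for os_name in oses for arch in arches]
print(len(areas), "areas:", areas)  # 12 areas in total
```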

The next steps to materialize this are:

  1. we need to specify which CVMFS path we would like to use
  2. define which python3 version to use for RHEL6 (we likely cannot go beyond 3.6.x)
  3. define which python3 version to use for RHEL 7/8/9
  4. define which third-party python libraries are required
  5. define a cmsdist branch for these changes (as python3 needs to be compiled)
  6. update the WMCore submit_py3.sh script to start using this new area.

Code shipped from WMAgent to the worker node

For each grid job, a job sandbox is transferred from the WMAgent/schedd to the worker node. The sandbox file is created by the WorkQueueManager component whenever a workflow is acquired by the agent. It is compressed and named after the workflow, e.g. amaltaro_TC_EL8_JSON_Agent222_Val_230725_201934_7906-Sandbox.tar.bz2, and it is composed of the following:

  • the WMCore Utils sub-package
  • the WMCore PSetTweaks sub-package
  • the actual workflow WMSandbox, which contains the workflow-specific content, including: WMWorkload.pkl, each task's PSet.py configuration, and the pileup_config.json if needed.
  • lastly, it also contains further WMCore libraries, zipped and named WMCore.zip. This package is unzipped and added to the PYTHONPATH. Currently, this file contains the whole WMCore sub-package, making it challenging to remove the python future dependency.

As of now, the average size of this job sandbox is 3.1 MB, but it largely depends on the pileup dataset used in the workflow, which can add tens of MB to the sandbox.
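
For debugging purposes, a sandbox can be inspected directly; a minimal sketch, assuming the tarball from the example above sits in the current directory:

```python
# Minimal sketch to inspect a job sandbox; the file name is the example above.
import tarfile

sandbox = "amaltaro_TC_EL8_JSON_Agent222_Val_230725_201934_7906-Sandbox.tar.bz2"

with tarfile.open(sandbox, mode="r:bz2") as tar:
    # Expect entries for Utils/, PSetTweaks/, WMSandbox/ and WMCore.zip.
    for member in tar.getmembers():
        print(member.size, member.name)
```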

The main job executable bash script

The WM job wrapper script is called submit_py3.sh and it discovers the OS and CPU architecture required by the job. In case any OS can be used, this script defaults to rhel7.

This wrapper script also auto-discovers the latest python3 version available in the CVMFS area. The version usually used is:

/cvmfs/cms.cern.ch/COMP/rhel7_x86_64/external/python3/3.8.2-comp/bin/python3
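
The auto-discovery itself is implemented in bash inside submit_py3.sh; the sketch below only re-expresses the idea in Python (pick the highest version directory under the COMP python3 area) and is not the actual wrapper code.

```python
# Sketch of the "latest python3 in CVMFS" discovery, re-expressed in Python;
# the real logic lives in the submit_py3.sh bash wrapper.
import glob
import os

COMP_PYTHON3 = "/cvmfs/cms.cern.ch/COMP/rhel7_x86_64/external/python3"

def version_key(path):
    # Turn ".../python3/3.8.2-comp/bin/python3" into (3, 8, 2) for sorting.
    version = path.split(os.sep)[-3].split("-")[0]
    return tuple(int(part) for part in version.split("."))

candidates = glob.glob(os.path.join(COMP_PYTHON3, "*", "bin", "python3"))
if candidates:
    print(max(candidates, key=version_key))  # e.g. .../3.8.2-comp/bin/python3
```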

WMAgent supports all of the CMSSW ScramArchs, but many of them are supported on a best-effort basis because we do not test the runtime code against software stacks other than the COMP one (RHEL7, Python 3.8.2, GCC 6.3.0).

Note that our central production jobs are actually executed inside a Singularity container, maximizing resource utilization and making the job independent of the underlying host operating system. Example:

amaltaro@lxplus796:~ $ cmssw-cc8
bash: openstack: command not found
Singularity> source /cvmfs/cms.cern.ch/COMP/rhel7_x86_64/external/python3/3.8.2-comp/etc/profile.d/init.sh 
Singularity> /cvmfs/cms.cern.ch/COMP/rhel7_x86_64/external/python3/3.8.2-comp/bin/python3
Python 3.8.2 (default, Jan 22 2021, 17:57:37) 
[GCC 6.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform
>>> platform.platform()
'Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.2.5'

In this script, the job object is unpacked via the Unpacker.py runtime script (running with that python version). This creates the job directory, unpacks the sandbox and sets up the environment so that the job can be started with the Startup runtime script, which also uses the same python version. E.g.:

/cvmfs/cms.cern.ch/COMP/rhel7_x86_64/external/python3/3.8.2-comp/bin/python3 Startup.py

The CMSSW executor prescripts

Startup.py above bootstraps the job and executes it. The execute method looks up and uses the proper executor for the step (e.g. the CMSSW executor).

For a CMSSW executor, pre-scripts are added in pre and looked up in doExecution; SetupCMSSWPset is the only one added by this method at present.

The SetupCMSSWPset.py runtime script is invoked via another script, ScriptInvoke. The Scram call is executed with the COMP python version and the job is configured according to the initial bash wrapper script, as shown in the Scram.call block.

The python version coming from Scram.py is stored in sys.executable and invoked from there.

The command invoked looks like this:

/cvmfs/cms.cern.ch/slc7_amd64_gcc700/cms/cmssw-patch/CMSSW_10_6_11_patch1/external/slc7_amd64_gcc700/bin/python2.7 -m WMCore.WMRuntime.ScriptInvoke WMTaskSpace.cmsRun1 SetupCMSSWPset
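
For illustration, the same invocation expressed from Python; the python path and release name are taken from the example above and the actual call is built inside the CMSSW executor, so this is only a sketch.

```python
# Illustrative composition of the ScriptInvoke command shown above; the python
# path and release name are taken from the example and are not authoritative.
import subprocess

python_exe = ("/cvmfs/cms.cern.ch/slc7_amd64_gcc700/cms/cmssw-patch/"
              "CMSSW_10_6_11_patch1/external/slc7_amd64_gcc700/bin/python2.7")
command = [python_exe, "-m", "WMCore.WMRuntime.ScriptInvoke",
           "WMTaskSpace.cmsRun1", "SetupCMSSWPset"]

# PYTHONPATH (with WMCore.zip) and the scram environment are assumed to be
# already set up at this point of the job execution.
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
print(proc.returncode)
```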

cmsRun executable

To run the actual payload, a bash script with some arguments is generated and executed. E.g.:

INFO:root:Executing CMSSW. args: ['/bin/bash', '/tmpscratch/users/khurtado/work/job/WMTaskSpace/cmsRun1/cmsRun1-main.sh', '', u'slc7_amd64_gcc700', 'scramv1', 'CMSSW', 'CMSSW_10_6_11_patch1', 'FrameworkJobReport.xml', 'cmsRun', 'PSet.py', '', '', '']

Here, the arguments passed represent the following:

# cmsRun arguments
SCRAM_SETUP=$1
SCRAM_ARCHIT=$2
SCRAM_COMMAND=$3
SCRAM_PROJECT=$4
CMSSW_VERSION=$5
JOB_REPORT=$6
EXECUTABLE=$7
CONFIGURATION=$8
USER_TARBALL=$9

Here, EXECUTABLE is usually cmsRun, which runs with python2. To run with python3, cmsRunPython3 can be called instead (note that this depends on whether the CMSSW framework release provides it or not).
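
One possible way to check for availability is sketched below, assuming the usual CMSSW release layout of bin/<ScramArch>/ under the release area; this is only an illustration, not how WMCore currently decides.

```python
# Hedged sketch: probe the CMSSW release area for a cmsRunPython3 executable.
# The release path and directory layout are assumptions based on the examples above.
import os

release_base = ("/cvmfs/cms.cern.ch/slc7_amd64_gcc700/cms/cmssw-patch/"
                "CMSSW_10_6_11_patch1")
scram_arch = "slc7_amd64_gcc700"
executable = os.path.join(release_base, "bin", scram_arch, "cmsRunPython3")

if os.access(executable, os.X_OK):
    print("cmsRunPython3 available:", executable)
else:
    print("cmsRunPython3 not found, falling back to cmsRun")
```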

The following PR shows where to replace that: https://github.com/dmwm/WMCore/pull/9599/files

Questions: How do we verify cmsRunPython3 is available? Can any PSet run with cmsRun or cmsRunPython3 (when available)?
