Add Workflow for multi-run harvesting #24920
Conversation
... and allow using it as a customisation.
The code-checks are being triggered in jenkins.
+code-checks Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-24920/6893
A new Pull Request was created by @schneiml (Marcel Schneider) for master. It involves the following packages: Configuration/PyReleaseValidation @cmsbuild, @prebello, @zhenhu, @kpedro88, @pgunnell, @franzoni, @fabiocos, @davidlange6 can you please review it and eventually sign? Thanks. cms-bot commands are listed here
please test workflow 137.8
The tests are being triggered in jenkins.
FYI @mtosi
thanks!
-1 Tested at: cdfcd92 You can see the results of the tests here: I found the following errors while testing this PR Failed tests: RelVals
When I ran the RelVals I found an error in the following workflows: runTheMatrix-results/137.8_RunEGamma2018C+RunEGamma2018C+HLTDR2_2018+RECODR2_2018reHLT_skimEGamma_Prompt_L1TEgDQM+RunEGamma2018D+HLTDR2_2018+RECODR2_2018reHLT_skimEGamma_Prompt_L1TEgDQM+HARVEST2018_L1TEgDQM_MULTIRUN/step7_RunEGamma2018C+RunEGamma2018C+HLTDR2_2018+RECODR2_2018reHLT_skimEGamma_Prompt_L1TEgDQM+RunEGamma2018D+HLTDR2_2018+RECODR2_2018reHLT_skimEGamma_Prompt_L1TEgDQM+HARVEST2018_L1TEgDQM_MULTIRUN.log
Comparison not run due to runTheMatrix errors (RelVals and Igprof tests were also skipped)
Comparison job queued. |
Comparison is ready @slava77 comparisons for the following workflows were not done due to missing matrix map:
Comparison Summary:
+upgrade
@@ -15,6 +15,11 @@
dqmSaver.saveByRun = -1
dqmSaver.saveAtJobEnd = True
dqmSaver.forceRunNumber = 1
dqmSaver.forceRunNumber = 999999
@schneiml could you please explain the reason for this and its impacts as it is a global change, not just for the new workflow? Is this basically a dummy parameter affecting just the output file name in https://cmssdt.cern.ch/lxr/source/DQMServices/Components/src/DQMFileSaver.cc#0097
and the internal location of saving the plot in dbe:save?
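For context, here is a rough standalone sketch (not CMSSW code) of the DQMGUI file-naming convention that forceRunNumber feeds into: the run number is zero-padded to nine digits in the output file name, which is why the multi-run convention uses 999999. The helper name and the example dataset below are illustrative assumptions.

```python
# Illustrative sketch of the DQM output file-naming convention; the helper
# name and the example dataset are assumptions, not actual CMSSW code.
def dqm_filename(run_number, dataset="/A/B/C", version=1):
    # Slashes in the dataset name become double underscores.
    dataset_part = dataset.lstrip("/").replace("/", "__")
    # Version is zero-padded to 4 digits, the run number to 9.
    return "DQM_V%04d_R%09d__%s.root" % (version, run_number, dataset_part)

print(dqm_filename(999999))  # multi-run harvesting convention
print(dqm_filename(1))       # what forceRunNumber = 1 would produce
```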
AFAIK this file is not used anywhere in CMSSW, except for the new wf. This is the "reference" multi-run harvesting config, and as such the old value was wrong; the number has to be 999999 so the output follows the DQMGUI naming conventions.
@fabiocos It turns out it is used, here:
cmssw/Configuration/Applications/python/ConfigBuilder.py
Lines 1971 to 1972 in 5e0d560
self.DQMSaverCFF='Configuration/StandardSequences/DQMSaver'+self._options.harvesting+'_cff'
self.loadAndRemember(self.DQMSaverCFF)
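To illustrate the mechanism in the snippet above: the --harvesting option is spliced directly into the cff module name, so different option values select different DQMSaver configs. A minimal standalone sketch of that string construction (the helper name is an assumption; the option values are taken from this discussion):

```python
# Sketch of how ConfigBuilder derives the DQMSaver cff name from the
# --harvesting option (mirrors the two lines quoted above).
def dqm_saver_cff(harvesting_option):
    return 'Configuration/StandardSequences/DQMSaver' + harvesting_option + '_cff'

print(dqm_saver_cff('AtRunEnd'))
print(dqm_saver_cff('AtJobEnd'))
```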
I was not aware of that, and it breaks the pre2 relval now.
Just removing the --harvesting AtJobEnd option in wf 503 et al. does not seem to work either; then we get run number 0.
@schneiml thanks. Did you test these changes in DQM, i.e. the use of multi-run harvesting, locally? Do you think it is the reason the GEN relvals fail? Why only GEN then?
Only the GEN workflows use the --harvesting AtJobEnd option. On the conceptual level, I don't see why they do that, since there should only be one run there.
In practice, I checked what happens if I make them use --harvesting AtRunEnd as most of the other WFs do, but this does not work: apparently the internal CMSSW run number in these jobs is 0, and AtJobEnd was required to force it to 1.
I think the proper solution would be a customisation (several places in the configuration would work) that sets dqmSaver.forceRunNumber = 1 for the GEN jobs, and then to remove the --harvesting AtJobEnd option. Note that this might change the behaviour of the DQM in these jobs, but probably for the better.
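The customisation idea described here could be sketched roughly as follows; in real CMSSW this would be applied to a cms.Process object, and the function name is purely illustrative (only dqmSaver.forceRunNumber comes from the thread):

```python
# Hedged sketch of the proposed customisation: force run number 1 for GEN
# harvesting jobs instead of relying on --harvesting AtJobEnd. The function
# name is an assumption, not an existing CMSSW customisation.
def customiseGenHarvesting(process):
    # Only touch jobs that actually configure a dqmSaver module.
    if hasattr(process, 'dqmSaver'):
        process.dqmSaver.forceRunNumber = 1
    return process
```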
@prebello Sang Hyun Ko reported on JIRA that it works for him to just remove the --harvesting AtJobEnd option. Not sure what happened in my case; I can't reproduce the run 0 file now...
So we should just remove the --harvesting AtJobEnd option (https://github.com/cms-sw/cmssw/blob/master/Configuration/PyReleaseValidation/python/relval_steps.py#L2320) and everything should be good.
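The proposed change amounts to dropping one option from the GEN harvesting step definition. A purely illustrative sketch (the dict contents below are assumptions, not the real relval_steps.py entry):

```python
# Illustrative only: relval steps are dicts of cmsDriver options; the fix
# is to drop the '--harvesting' key. The step contents are assumptions.
step = {
    '-s': 'HARVESTING:genHarvesting',  # assumed step definition
    '--harvesting': 'AtJobEnd',
}
step.pop('--harvesting', None)  # remove the option as proposed
```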
+1
+operations the change to StandardSequences should be ineffective except for the new test
+1
This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will be automatically merged.
This PR adds a workflow that exercises multi-run HARVESTING on DQMIO files as number 137.8.
It is based on the 136 data relvals and mixes 2018C and D data; I think this is not guaranteed to work, but it saves us from adding a second run to the relval data. Currently I only added it for one PD and 2018C/D data, which should be enough to catch the obvious problems. We could also do the full combinatoric expansion as for the 136 WFs.
As expected, this WF fails at the moment, as far as I can see due to a different issue than the ones that @cerminar observed. PRs to fix the crashes will follow. We can hold off on integrating this until all plugins are fixed for multi-run harvesting.