update fragment for MadGraph Workflow at SL7 #25116
Conversation
The code-checks are being triggered in jenkins.
The new gridpack for workflow 562 has been copied to EOS but not yet synced to CVMFS; we need to wait a bit before starting the test: /cvmfs/cms.cern.ch/phys_generator/gridpacks/2017/13TeV/madgraph/V5_2.4.2/exo_diboson/Spin_2/BkGraviton_ZZ_inclu_narrow_M1200_slc6_amd64_gcc481_CMSSW_7_1_30_gcc700-10-3-0-Syscalc_tarball.tar.xz
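Rather than checking by eye, the wait for the EOS-to-CVMFS sync can be scripted. A minimal sketch (the `wait_for_file` helper is hypothetical, not a CMS tool):

```shell
# wait_for_file FILE TIMEOUT: poll once per second until FILE exists.
# Returns 0 when the file shows up, 1 if TIMEOUT seconds pass first.
wait_for_file() {
    file=$1
    timeout=$2
    waited=0
    while [ ! -e "$file" ]; do
        if [ "$waited" -ge "$timeout" ]; then
            return 1
        fi
        sleep 1
        waited=$((waited + 1))
    done
    return 0
}

# Example usage against the gridpack path quoted above (uncomment on lxplus):
# wait_for_file /cvmfs/cms.cern.ch/phys_generator/gridpacks/2017/13TeV/madgraph/V5_2.4.2/exo_diboson/Spin_2/BkGraviton_ZZ_inclu_narrow_M1200_slc6_amd64_gcc481_CMSSW_7_1_30_gcc700-10-3-0-Syscalc_tarball.tar.xz 3600 \
#     && echo "gridpack synced, starting test"
```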
+code-checks Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-25116/7110
A new Pull Request was created by @qliphy (Qiang Li) for master. It involves the following packages: Configuration/Generator. @alberto-sanchez, @cmsbuild, @qliphy, @perrozzi, @efeyazgan can you please review it and eventually sign? Thanks. cms-bot commands are listed here
please test workflow 512,513,562
The tests are being triggered in jenkins.
-1 Tested at: 10ab395. The following merge commits were also included on top of IB + this PR after doing git cms-merge-topic. You can see the results of the tests here. I found the following errors while testing this PR. Failed tests: RelVals
When I ran the RelVals I found an error in the following workflow: runTheMatrix-results/562.0_BulkG_ZZ_2L2Q_M1200_narrow_13TeV_pythia8+BulkG_M1200_narrow_2L2Q_LHE_13TeV+Hadronizer_TuneCUETP8M1_Mad_pythia8+HARVESTGEN2/step1_BulkG_ZZ_2L2Q_M1200_narrow_13TeV_pythia8+BulkG_M1200_narrow_2L2Q_LHE_13TeV+Hadronizer_TuneCUETP8M1_Mad_pythia8+HARVESTGEN2.log
Comparison not run due to runTheMatrix errors (RelVals and Igprof tests were also skipped)
Workflows 512 and 513 are OK; 562 has a problem with gcc. My local test at lxplus7.cern.ch indeed worked quite well; however, at lxplus.cern.ch I could reproduce the error, so maybe the checks were run on lxplus. Anyway, I may need to recompile the gridpack used in 562 and make sure the local test works.
Pull request #25116 was updated. @alberto-sanchez, @cmsbuild, @qliphy, @perrozzi, @efeyazgan can you please check and sign again.
please test workflow 512,513,562
The tests are being triggered in jenkins.
Comparison job queued.
Comparison is ready. @slava77 comparisons for the following workflows were not done due to missing matrix map:
Comparison Summary:
@qliphy thank you, I understand that this is a temporary workaround while waiting to move the whole library to the newest MadGraph version. Anyway I think it is good to have, but the failing workflows are definitely more than 3: are you planning to fix all of them? Or do you already have new gridpacks close to ready?
@fabiocos Indeed 515, 518, 522, 526 and 529 share the same fragment as 512. For several others, like 551, as mentioned before, the local test works well; the problem there seems to be a timeout.
For more details: workflows 512 and 515 share the same LHE fragment "DYToll01234Jets_5f_LO_MLM_Madgraph_LHE_13TeV":

```
workflows[512]=['DYTollJets_LO_Mad_13TeV_py8',['DYToll01234Jets_5f_LO_MLM_Madgraph_LHE_13TeV','Hadronizer_TuneCP5_13TeV_MLM_5f_max4j_LHE_pythia8','HARVESTGEN2']]
workflows[515]=['DYTollJets_LO_Mad_13TeV_py8_taupinu',['DYToll01234Jets_5f_LO_MLM_Madgraph_LHE_13TeV','Hadronizer_TuneCP5_13TeV_MLM_5f_max4j_LHE_pythia8_taupinu','HARVESTGEN2']]
```
@qliphy thank you, are you planning further developments for this PR, or are you ready to sign it for integration?
+1 |
This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will now be reviewed by the release team before it's merged. @davidlange6, @slava77, @smuzaffar, @fabiocos (and backports should be raised in the release meeting by the corresponding L2) |
+1 |
@qliphy the problem with wf 562 does not really seem solved, please see |
@fabiocos It works fine with slc7_amd64_gcc700; the problem appears under slc6_amd64_gcc700. It seems that updating only SysCalc is not enough (although I don't know why it worked and passed the check yesterday). I have now regenerated the gridpack from scratch and updated it in cvmfs. It should work now; at least the local test succeeds on both SL7 and SL6.
@qliphy we should ensure that it is the CMSSW environment that provides the needed libraries, rather than depending on incidental differences between the installations on one machine or another.
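Discrepancies like "works on lxplus7, fails on lxplus6" are much easier to trace if the toolchain in effect is recorded before each run. A minimal sketch (plain shell, no CMS-specific tools; `SCRAM_ARCH` and `lhapdf-config` are only set inside a cmsenv, hence the guards):

```shell
# Record the environment actually in use before launching a workflow.
echo "SCRAM_ARCH = ${SCRAM_ARCH:-unset}"
echo "kernel     = $(uname -r)"
if command -v gcc >/dev/null 2>&1; then
    echo "gcc        = $(gcc -dumpversion)"
fi
if command -v lhapdf-config >/dev/null 2>&1; then
    echo "lhapdf     = $(lhapdf-config --version)"
fi
```

Attaching this output to a test report makes it clear whether a failure comes from the gridpack itself or from the login node's installation.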
Set SCRAM_ARCH and Release to 'slc6_amd64_gcc530','CMSSW_8_4_0' for several MG workflows (512, 513)
For 562.0 (BulkG_ZZ_2L2Q_M1200_narrow_13TeV_pythia8) it is a bit more complicated, as the original gridpack was made with 7_1_30 and LHAPDF 6.2.1, while in 8_X LHAPDF is 6.1.6, which does not contain several 4f PDF sets. I therefore recompiled the SysCalc inside the gridpack and repacked it, with the details as follows; I think the same recipe can also be used to make other old gridpacks work under SL7:
1. Set up an SL6 environment (CMSSW_9_3_0, slc6_amd64_gcc630) and recompile SysCalc:

   ```
   LHAPDFCONFIG=$(echo "$LHAPDF_DATA_PATH/../../bin/lhapdf-config")
   PATH=$(${LHAPDFCONFIG} --prefix)/bin:${PATH}
   make
   ```

2. Untar the gridpack and replace `mgbasedir/SysCalc/sys_calc` with the freshly built binary.

3. Option: update `cmssw_version` and `scram_arch_version` in `runcmsgrid.sh` to `CMSSW_9_3_0` and `slc6_amd64_gcc630`.

4. Repack the gridpack:

   ```
   XZ_OPT="--lzma2=preset=9,dict=512MiB" tar -cJpsf YOURS.tar.xz mgbasedir process runcmsgrid.sh gridpack_generation.log InputCards
   ```
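After repacking it is easy to misplace the rebuilt binary, so a quick sanity check on the tarball is worthwhile. A sketch (the `check_gridpack` helper is hypothetical, not part of the CMS tooling):

```shell
# check_gridpack TARBALL: confirm the repacked .tar.xz still contains the
# rebuilt SysCalc binary at the expected path inside the archive.
check_gridpack() {
    tar -tJf "$1" | grep -q 'mgbasedir/SysCalc/sys_calc'
}

# Usage, assuming YOURS.tar.xz from the repack step above:
# check_gridpack YOURS.tar.xz && echo "sys_calc present" || echo "sys_calc missing"
#
# For the library question discussed above, one can also inspect what the
# unpacked binary links against:
# ldd mgbasedir/SysCalc/sys_calc | grep -i lhapdf
```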