
Removing hard-coded eta dependency from PFEnergyCalibration.cc #25883

Conversation

@spandeyehep (Contributor)

This PR removes the hard-coded eta dependency from the PFEnergyCalibration.cc code (which is used for PF hadron calibration).
All the hard-coded eta dependencies will instead be handled by the PFCalibration payload, making future PF calibration updates independent of CMSSW code changes.
It involves the following packages:
RecoParticleFlow/PFClusterTools/
CondFormats/PhysicsToolsObjects/

More details can be found in the following slides:
https://bkansal.web.cern.ch/bkansal/PFCalibration/hardcoded_dependencies_JME.pdf

We have validated the changes locally and we neither expect nor see any changes at the reconstruction level from this PR.
For now, the matrix tests are likely to fail because no compatible PFCalibration payload is available in the current GT.
We are in contact with the AlCa/DB conveners to integrate the new payload into the GT in parallel with this PR.
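
As an illustration of what moving the eta dependence into the payload means in practice, here is a minimal editor's sketch (not the actual PFEnergyCalibration.cc code; the function names, the placeholder constants and the specific result type are assumptions) contrasting a hard-coded eta correction with a payload-driven lookup:

// Editor's sketch, not code from this PR: a hard-coded eta correction versus the
// same correction read from the PFCalibration payload (PerformancePayloadFromTFormula).
#include <cmath>
#include "CondFormats/PhysicsToolsObjects/interface/BinningPointByMap.h"
#include "CondFormats/PhysicsToolsObjects/interface/PerformancePayloadFromTFormula.h"

// Before: thresholds and parameters are fixed in C++, so every calibration update
// needs a CMSSW code change (the values below are placeholders, not the real ones).
double etaCorrectionHardCoded(double eta) {
  if (std::abs(eta) < 1.48)
    return 1.00;  // placeholder barrel parametrisation
  return 1.05;    // placeholder endcap parametrisation
}

// After: the eta parametrisation is a TFormula stored in the PFCalibration payload,
// evaluated at |eta| via a ResultType entry (PFfcEta_BARRELH is one of the values
// added by this PR).
double etaCorrectionFromPayload(const PerformancePayloadFromTFormula& pfCalib, double eta) {
  BinningPointByMap point;
  point.insert(BinningVariables::JetAbsEta, std::abs(eta));
  return pfCalib.getResult(PerformanceResult::PFfcEta_BARRELH, point);
}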

Adding @ahinzmann, @zdemirag

@slava77 (Contributor) commented Feb 7, 2019

Quoting cmsbuild ("commented an hour ago"): "The code-checks are being triggered in jenkins."

@smuzaffar @mrodozov
please check what is happening with the bot/jenkins; the response looks much slower than expected.

@cms-sw deleted two comments from cmsbuild (Feb 7, 2019)
@cmsbuild (Contributor) commented Feb 7, 2019

The code-checks are being triggered in jenkins.

@cmsbuild (Contributor) commented Feb 7, 2019

+code-checks

Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-25883/8348

  • This PR adds an extra 32KB to the repository

@cmsbuild (Contributor) commented Feb 7, 2019

A new Pull Request was created by @spandeyehep (Shubham Pandey) for master.

It involves the following packages:

CondFormats/PhysicsToolsObjects
RecoParticleFlow/PFClusterTools

@perrotta, @tocheng, @cmsbuild, @franzoni, @slava77, @ggovi, @pohsun can you please review it and eventually sign? Thanks.
@mmarionncern, @cbernet, @tocheng, @lgray, @seemasharmafnal, @hatakeyamak, @mmusich, @bachtis this is something you requested to watch as well.
@davidlange6, @slava77, @fabiocos you are the release manager for this.

cms-bot commands are listed here

@fabiocos (Contributor) commented Feb 8, 2019

please test

@cmsbuild (Contributor) commented Feb 8, 2019

The tests are being triggered in jenkins.
https://cmssdt.cern.ch/jenkins/job/ib-any-integration/33047/console Started: 2019/02/08 10:47

@cmsbuild (Contributor) commented Feb 8, 2019

-1

Tested at: ff23e26

You can see the results of the tests here:
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-25883/33047/summary.html

I found the following errors while testing this PR:

Failed tests: RelVals AddOn

  • RelVals:

The relvals timed out after 4 hours.
When I ran the RelVals I found an error in the following workflows:
5.1 step1

runTheMatrix-results/5.1_TTbar+TTbarFS+HARVESTFS/step1_TTbar+TTbarFS+HARVESTFS.log

4.53 step3
runTheMatrix-results/4.53_RunPhoton2012B+RunPhoton2012B+HLTD+RECODR1reHLT+HARVESTDR1reHLT/step3_RunPhoton2012B+RunPhoton2012B+HLTD+RECODR1reHLT+HARVESTDR1reHLT.log

7.3 step2
runTheMatrix-results/7.3_CosmicsSPLoose_UP18+CosmicsSPLoose_UP18+DIGICOS_UP18+RECOCOS_UP18+ALCACOS_UP18+HARVESTCOS_UP18/step2_CosmicsSPLoose_UP18+CosmicsSPLoose_UP18+DIGICOS_UP18+RECOCOS_UP18+ALCACOS_UP18+HARVESTCOS_UP18.log

135.4 step1
runTheMatrix-results/135.4_ZEE_13+ZEEFS_13+HARVESTUP15FS+MINIAODMCUP15FS/step1_ZEE_13+ZEEFS_13+HARVESTUP15FS+MINIAODMCUP15FS.log

9.0 step3
runTheMatrix-results/9.0_Higgs200ChargedTaus+Higgs200ChargedTaus+DIGI+RECO+HARVEST/step3_Higgs200ChargedTaus+Higgs200ChargedTaus+DIGI+RECO+HARVEST.log

136.788 step3
runTheMatrix-results/136.788_RunSinglePh2017B+RunSinglePh2017B+HLTDR2_2017+RECODR2_2017reHLT_skimSinglePh_Prompt+HARVEST2017/step3_RunSinglePh2017B+RunSinglePh2017B+HLTDR2_2017+RECODR2_2017reHLT_skimSinglePh_Prompt+HARVEST2017.log

25.0 step3
runTheMatrix-results/25.0_TTbar+TTbar+DIGI+RECOAlCaCalo+HARVEST+ALCATT/step3_TTbar+TTbar+DIGI+RECOAlCaCalo+HARVEST+ALCATT.log

136.85 step2
runTheMatrix-results/136.85_RunEGamma2018A+RunEGamma2018A+HLTDR2_2018+RECODR2_2018reHLT_skimEGamma_Prompt_L1TEgDQM+HARVEST2018_L1TEgDQM/step2_RunEGamma2018A+RunEGamma2018A+HLTDR2_2018+RECODR2_2018reHLT_skimEGamma_Prompt_L1TEgDQM+HARVEST2018_L1TEgDQM.log

140.53 step2
runTheMatrix-results/140.53_RunHI2011+RunHI2011+RECOHID11+HARVESTDHI/step2_RunHI2011+RunHI2011+RECOHID11+HARVESTDHI.log

140.56 step2
runTheMatrix-results/140.56_RunHI2018+RunHI2018+RECOHID18+HARVESTDHI18/step2_RunHI2018+RunHI2018+RECOHID18+HARVESTDHI18.log

1306.0 step3
runTheMatrix-results/1306.0_SingleMuPt1_UP15+SingleMuPt1_UP15+DIGIUP15+RECOUP15+HARVESTUP15/step3_SingleMuPt1_UP15+SingleMuPt1_UP15+DIGIUP15+RECOUP15+HARVESTUP15.log

1330.0 step3
runTheMatrix-results/1330.0_ZMM_13+ZMM_13+DIGIUP15+RECOUP15_L1TMuDQM+HARVESTUP15_L1TMuDQM/step3_ZMM_13+ZMM_13+DIGIUP15+RECOUP15_L1TMuDQM+HARVESTUP15_L1TMuDQM.log

158.0 step3
runTheMatrix-results/158.0_HydjetQ_B12_5020GeV_2018_ppReco+HydjetQ_B12_5020GeV_2018_ppReco+DIGIHI2018PPRECO+RECOHI2018PPRECO+ALCARECOHI2018PPRECO+HARVESTHI2018PPRECO/step3_HydjetQ_B12_5020GeV_2018_ppReco+HydjetQ_B12_5020GeV_2018_ppReco+DIGIHI2018PPRECO+RECOHI2018PPRECO+ALCARECOHI2018PPRECO+HARVESTHI2018PPRECO.log

1000.0 step2
runTheMatrix-results/1000.0_RunMinBias2011A+RunMinBias2011A+TIER0+SKIMD+HARVESTDfst2+ALCASPLIT/step2_RunMinBias2011A+RunMinBias2011A+TIER0+SKIMD+HARVESTDfst2+ALCASPLIT.log

1001.0 step2
runTheMatrix-results/1001.0_RunMinBias2011A+RunMinBias2011A+TIER0EXP+ALCAEXP+ALCAHARVDSIPIXELCALRUN1+ALCAHARVD1+ALCAHARVD2+ALCAHARVD3+ALCAHARVD4+ALCAHARVD5/step2_RunMinBias2011A+RunMinBias2011A+TIER0EXP+ALCAEXP+ALCAHARVDSIPIXELCALRUN1+ALCAHARVD1+ALCAHARVD2+ALCAHARVD3+ALCAHARVD4+ALCAHARVD5.log

10042.0 step3
runTheMatrix-results/10042.0_ZMM_13+ZMM_13TeV_TuneCUETP8M1_2017_GenSimFull+DigiFull_2017+RecoFull_2017+HARVESTFull_2017+ALCAFull_2017/step3_ZMM_13+ZMM_13TeV_TuneCUETP8M1_2017_GenSimFull+DigiFull_2017+RecoFull_2017+HARVESTFull_2017+ALCAFull_2017.log

10824.0 step2
runTheMatrix-results/10824.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2018_GenSimFull+DigiFull_2018+RecoFull_2018+HARVESTFull_2018+ALCAFull_2018+NanoFull_2018/step2_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2018_GenSimFull+DigiFull_2018+RecoFull_2018+HARVESTFull_2018+ALCAFull_2018+NanoFull_2018.log

10024.0 step3
runTheMatrix-results/10024.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2017_GenSimFull+DigiFull_2017+RecoFull_2017+HARVESTFull_2017+ALCAFull_2017/step3_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2017_GenSimFull+DigiFull_2017+RecoFull_2017+HARVESTFull_2017+ALCAFull_2017.log

10224.0 step3
runTheMatrix-results/10224.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2017PU_GenSimFull+DigiFullPU_2017PU+RecoFullPU_2017PU+HARVESTFullPU_2017PU/step3_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2017PU_GenSimFull+DigiFullPU_2017PU+RecoFullPU_2017PU+HARVESTFullPU_2017PU.log

11624.0 step3
runTheMatrix-results/11624.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2019_GenSimFull+DigiFull_2019+RecoFull_2019+HARVESTFull_2019+ALCAFull_2019/step3_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2019_GenSimFull+DigiFull_2019+RecoFull_2019+HARVESTFull_2019+ALCAFull_2019.log

20034.0 step3
runTheMatrix-results/20034.0_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D17_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D17+RecoFullGlobal_2023D17+HARVESTFullGlobal_2023D17/step3_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D17_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D17+RecoFullGlobal_2023D17+HARVESTFullGlobal_2023D17.log

21234.0 step3
runTheMatrix-results/21234.0_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D21_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D21+RecoFullGlobal_2023D21+HARVESTFullGlobal_2023D21/step3_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D21_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D21+RecoFullGlobal_2023D21+HARVESTFullGlobal_2023D21.log

250202.181 step3
runTheMatrix-results/250202.181_TTbar_13UP18+TTbar_13UP18+PREMIXUP18_PU25+DIGIPRMXLO/bin/sh:/step3_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D35_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D35+RecoFullGlobal_2023D35+HARVESTFullGlobal_2023D35.log

  • AddOn:

I found errors in the following addon tests:

cmsDriver.py TTbar_8TeV_TuneCUETP8M1_cfi --conditions auto:run1_mc --fast -n 100 --eventcontent AODSIM,DQM --relval 100000,1000 -s GEN,SIM,RECOBEFMIX,DIGI:pdigi_valid,L1,DIGI2RAW,L1Reco,RECO,EI,VALIDATION --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --datatier GEN-SIM-DIGI-RECO,DQMIO --beamspot Realistic8TeVCollision : FAILED - time: date Fri Feb 8 15:05:06 2019-date Fri Feb 8 15:00:22 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:Fake2,RAW2DIGI,L1Reco,RECO --data --scenario=pp -n 10 --conditions auto:run2_data_Fake2 --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2016 --processName=HLTRECO --filein file:RelVal_Raw_Fake2_DATA.root --fileout file:RelVal_Raw_Fake2_DATA_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:06:15 2019-date Fri Feb 8 15:00:25 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:PRef,RAW2DIGI,L1Reco,RECO --data --scenario=pp -n 10 --conditions auto:run2_data_PRef --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2018 --processName=HLTRECO --filein file:RelVal_Raw_PRef_DATA.root --fileout file:RelVal_Raw_PRef_DATA_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:07:11 2019-date Fri Feb 8 15:00:27 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:HIon,RAW2DIGI,L1Reco,RECO --data --scenario=pp -n 10 --conditions auto:run2_data_HIon --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2018_pp_on_AA --processName=HLTRECO --filein file:RelVal_Raw_HIon_DATA.root --fileout file:RelVal_Raw_HIon_DATA_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:08:28 2019-date Fri Feb 8 15:00:28 2019 s - exit: 34304
cmsRun /cvmfs/cms-ib.cern.ch/week0/slc7_amd64_gcc700/cms/cmssw-patch/CMSSW_10_5_X_2019-02-07-2300/src/HLTrigger/Configuration/test/OnLine_HLT_GRun.py realData=False globalTag=@ inputFiles=@ : FAILED - time: date Fri Feb 8 15:13:42 2019-date Fri Feb 8 15:00:32 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:GRun,RAW2DIGI,L1Reco,RECO --mc --scenario=pp -n 10 --conditions auto:run2_mc_GRun --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2018 --processName=HLTRECO --filein file:RelVal_Raw_GRun_MC.root --fileout file:RelVal_Raw_GRun_MC_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:13:42 2019-date Fri Feb 8 15:00:32 2019 s - exit: 34304
cmsDriver.py TTbar_13TeV_TuneCUETP8M1_cfi --conditions auto:run2_mc_l1stage1 --fast -n 100 --eventcontent AODSIM,DQM --relval 100000,1000 -s GEN,SIM,RECOBEFMIX,DIGI:pdigi_valid,L1,DIGI2RAW,L1Reco,RECO,EI,VALIDATION --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --datatier GEN-SIM-DIGI-RECO,DQMIO --beamspot NominalCollision2015 --era Run2_25ns : FAILED - time: date Fri Feb 8 15:05:06 2019-date Fri Feb 8 15:00:35 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:PRef,RAW2DIGI,L1Reco,RECO --mc --scenario=pp -n 10 --conditions auto:run2_mc_PRef --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2018 --processName=HLTRECO --filein file:RelVal_Raw_PRef_MC.root --fileout file:RelVal_Raw_PRef_MC_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:09:22 2019-date Fri Feb 8 15:00:37 2019 s - exit: 34304
cmsDriver.py TTbar_13TeV_TuneCUETP8M1_cfi --conditions auto:run2_mc --fast -n 100 --eventcontent AODSIM,DQM --relval 100000,1000 -s GEN,SIM,RECOBEFMIX,DIGI:pdigi_valid,L1,DIGI2RAW,L1Reco,RECO,EI,VALIDATION --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --datatier GEN-SIM-DIGI-RECO,DQMIO --beamspot NominalCollision2015 --era Run2_2016 : FAILED - time: date Fri Feb 8 15:06:41 2019-date Fri Feb 8 15:01:31 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:PIon,RAW2DIGI,L1Reco,RECO --data --scenario=pp -n 10 --conditions auto:run2_data_PIon --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2018 --processName=HLTRECO --filein file:RelVal_Raw_PIon_DATA.root --fileout file:RelVal_Raw_PIon_DATA_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:10:59 2019-date Fri Feb 8 15:05:16 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:Fake,RAW2DIGI,L1Reco,RECO --mc --scenario=pp -n 10 --conditions auto:run1_mc_Fake --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --processName=HLTRECO --filein file:RelVal_Raw_Fake_MC.root --fileout file:RelVal_Raw_Fake_MC_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:18:27 2019-date Fri Feb 8 15:05:19 2019 s - exit: 34304
cmsRun /cvmfs/cms-ib.cern.ch/week0/slc7_amd64_gcc700/cms/cmssw-patch/CMSSW_10_5_X_2019-02-07-2300/src/HLTrigger/Configuration/test/OnLine_HLT_HIon.py realData=False globalTag=@ inputFiles=@ : FAILED - time: date Fri Feb 8 15:15:12 2019-date Fri Feb 8 15:06:23 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:HIon,RAW2DIGI,L1Reco,RECO --mc --scenario=pp -n 10 --conditions auto:run2_mc_HIon --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2018_pp_on_AA --processName=HLTRECO --filein file:RelVal_Raw_HIon_MC.root --fileout file:RelVal_Raw_HIon_MC_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:15:12 2019-date Fri Feb 8 15:06:23 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:Fake2,RAW2DIGI,L1Reco,RECO --mc --scenario=pp -n 10 --conditions auto:run2_mc_Fake2 --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2016 --processName=HLTRECO --filein file:RelVal_Raw_Fake2_MC.root --fileout file:RelVal_Raw_Fake2_MC_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:18:18 2019-date Fri Feb 8 15:06:44 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:Fake1,RAW2DIGI,L1Reco,RECO --mc --scenario=pp -n 10 --conditions auto:run2_mc_Fake1 --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_25ns --processName=HLTRECO --filein file:RelVal_Raw_Fake1_MC.root --fileout file:RelVal_Raw_Fake1_MC_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:16:18 2019-date Fri Feb 8 15:07:15 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:Fake,RAW2DIGI,L1Reco,RECO --data --scenario=pp -n 10 --conditions auto:run1_data_Fake --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --processName=HLTRECO --filein file:RelVal_Raw_Fake_DATA.root --fileout file:RelVal_Raw_Fake_DATA_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:13:47 2019-date Fri Feb 8 15:09:30 2019 s - exit: 34304
cmsRun /cvmfs/cms-ib.cern.ch/week0/slc7_amd64_gcc700/cms/cmssw-patch/CMSSW_10_5_X_2019-02-07-2300/src/HLTrigger/Configuration/test/OnLine_HLT_GRun.py realData=True globalTag=@ inputFiles=@ : FAILED - time: date Fri Feb 8 15:18:10 2019-date Fri Feb 8 15:09:41 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:GRun,RAW2DIGI,L1Reco,RECO --data --scenario=pp -n 10 --conditions auto:run2_data_GRun --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2018 --processName=HLTRECO --filein file:RelVal_Raw_GRun_DATA.root --fileout file:RelVal_Raw_GRun_DATA_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:18:10 2019-date Fri Feb 8 15:09:41 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:Fake1,RAW2DIGI,L1Reco,RECO --data --scenario=pp -n 10 --conditions auto:run2_data_Fake1 --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_25ns --processName=HLTRECO --filein file:RelVal_Raw_Fake1_DATA.root --fileout file:RelVal_Raw_Fake1_DATA_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:19:47 2019-date Fri Feb 8 15:11:04 2019 s - exit: 34304
cmsDriver.py RelVal -s HLT:PIon,RAW2DIGI,L1Reco,RECO --mc --scenario=pp -n 10 --conditions auto:run2_mc_PIon --relval 9000,50 --datatier "RAW-HLT-RECO" --eventcontent FEVTDEBUGHLT --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --era Run2_2018 --processName=HLTRECO --filein file:RelVal_Raw_PIon_MC.root --fileout file:RelVal_Raw_PIon_MC_HLT_RECO.root : FAILED - time: date Fri Feb 8 15:19:57 2019-date Fri Feb 8 15:13:50 2019 s - exit: 34304

@cmsbuild (Contributor) commented Feb 8, 2019

Comparison not run due to runTheMatrix errors (RelVals and Igprof tests were also skipped)

@perrotta (Contributor)

What is the status of the needed GT update?
Is it included in the submitted PR 25929, by chance?

@tocheng (Contributor) commented Feb 15, 2019

What is the status of the needed GT update?
Is it included in the submitted PR 25929, by chance?

Hello @perrotta, this PR needs an updated PF calibration payload to pass the tests, and that payload is not included in #25929.
My understanding is that the PR may need some further updates for the internal validation to succeed. AlCa is following up on this and will give suggestions soon.

@tocheng (Contributor) commented Feb 17, 2019

@spandeyehep I also mentioned this in the e-mail.
For PFCalibration payload production, one cannot set different limits for different formulas; one can only set different limits for different variable types.

So if you want two kinds of limits, one starting from 0 and the other starting from 1, you need to set two variable types in ProducePFCalibrationObject (into a vector) and then set two limits (into a vector): one is JetAbsEta, the other is JetEt. When deploying the PF calibration you also need to set the correct variable type. For example, instead of

point.insert(BinningVariables::JetEt, x);

you should use BinningVariables::JetAbsEta.
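
A minimal sketch of the evaluation side of this convention (editor's illustration, not the PR code; the function and payload names are hypothetical, while BinningPointByMap, BinningVariables and getResult are the existing CondFormats/PhysicsToolsObjects interfaces):

// Eta-parametrised formulas are queried with the JetAbsEta binning variable (limits
// starting from 0); the energy-parametrised ones keep JetEt (limits starting from 1).
#include <cmath>
#include "CondFormats/PhysicsToolsObjects/interface/BinningPointByMap.h"
#include "CondFormats/PhysicsToolsObjects/interface/PerformancePayloadFromTFormula.h"

double evalEtaFormula(const PerformancePayloadFromTFormula& pfCalib,  // payload fetched from the GT
                      PerformanceResult::ResultType type,             // e.g. PerformanceResult::PFfcEta_ENDCAPH
                      double eta) {
  BinningPointByMap point;
  point.insert(BinningVariables::JetAbsEta, std::abs(eta));  // JetAbsEta here, not BinningVariables::JetEt
  return pfCalib.getResult(type, point);
}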

@cmsbuild (Contributor)

Comparison job queued.

@cmsbuild (Contributor)

Comparison is ready
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-25883/34398/summary.html

Comparison Summary:

  • No significant changes to the logs found
  • Reco comparison results: 109 differences found in the comparisons
  • DQMHistoTests: Total files compared: 33
  • DQMHistoTests: Total histograms compared: 3211964
  • DQMHistoTests: Total failures: 63
  • DQMHistoTests: Total nulls: 0
  • DQMHistoTests: Total successes: 3211697
  • DQMHistoTests: Total skipped: 204
  • DQMHistoTests: Total Missing objects: 0
  • DQMHistoSizes: Histogram memory added: 0.0 KiB (32 files compared)
  • Checked 137 log files, 14 edm output root files, 33 DQM output files

@ggovi (Contributor) commented Apr 30, 2019

@spandeyehep
the addition of items to the above enum might not be backward compatible with the existing payloads in the DB.
Did you test that you can correctly read them?

@bkansal commented Apr 30, 2019

Hi @ggovi,

If we use the new PFCalibration payload (i.e. PFCalibration_v10_mc), which we have already provided and which has been integrated into the current GT, then we are able to read it; in other words, it is compatible with the new code.
We have tested this locally, please see [1].

However, with an old payload (e.g. PFCalibration_v9_mc in 105X_upgrade2018_realistic_v1 [2]), the new code will not be backward compatible because of the new enum items and new functions.
This PR was opened in order to make future PFCalibration updates independent of code changes.

Let me know if this answers your question.

Thanks.

[1] #25883 (comment)
[2] https://cms-conddb.cern.ch/cmsDbBrowser/list/Prod/gts/105X_upgrade2018_realistic_v1

@ggovi (Contributor) commented Apr 30, 2019

@spandeyehep
the required compatibility includes:

  • the new code (the one in this PR) being able to correctly read ALL of the existing concerned payloads in the DB
  • the old code (any earlier release) being able to correctly read the new payloads produced with this PR's code

Can you please clarify whether both compatibilities have been validated?

@bkansal commented May 1, 2019

Hello @ggovi

This PR aims to remove the hard-coded part of the code and to introduce new functions that are provided via the payload.

the new code (the one in this PR) being able to correctly read ALL of the existing concerned payloads in the DB

The new payloads are expected to contain new functions that are not present in the old payloads corresponding to previous releases (e.g. PFCalibration_v9_mc in 105X_upgrade2018_realistic_v1 [2]). So yes, there is an incompatibility.

the old code (any earlier release) being able to correctly read the new payloads produced with this PR's code

Technically yes. The old code can read the functions in the new payloads that were already present in previous payloads. The hard-coded part may be outdated, depending on which year is being referred to. We have tested this locally [1].

[1] #25883 (comment)
[2] https://cms-conddb.cern.ch/cmsDbBrowser/list/Prod/gts/105X_upgrade2018_realistic_v1

@ggovi (Contributor) commented May 2, 2019

@spandeyehep
Please note that functions are not persistent members of the class, so adding functions has no impact whatsoever on the stored payloads.

  • Concerning point 1 of the compatibility requirements: assuming that the de-serialisation works (please clarify whether you have tested this aspect), the changes should be made such that the new functions return some well-recognisable value when called on an old-version payload; see the sketch after this list. In that case, the classes can be considered compatible.
  • Concerning point 2: the behaviour of the old class while reading the new payloads should be unaltered. Please clarify.
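
A minimal sketch of the guard described in point 1 (editor's illustration, assuming the usual isInPayload(..) test available on the payload class; the fallback value is an arbitrary choice, not something prescribed by this PR):

// If the new result type is absent from an old-version payload, return a recognisable
// fallback instead of letting the lookup fail.
#include "CondFormats/PhysicsToolsObjects/interface/BinningPointByMap.h"
#include "CondFormats/PhysicsToolsObjects/interface/PerformancePayloadFromTFormula.h"

float resultOrFallback(const PerformancePayloadFromTFormula& pfCalib,
                       PerformanceResult::ResultType type,
                       const BinningPointByMap& point,
                       float fallback = 1.f) {  // neutral correction as the "well-recognisable value"
  if (!pfCalib.isInPayload(type, point))
    return fallback;  // old payload: the new eta formula is not stored
  return pfCalib.getResult(type, point);
}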

@bkansal commented May 2, 2019

Hello @ggovi,

What you describe here touches the limits of our expertise and local experience, so we would like to ask for a few clarifications and some help.

Please note that functions are not persistent members of the class...

Could you please tell us which class you are pointing to? The new functions/formulae have been added to the payload and are read in the energyEMHad(..) function of the PFEnergyCalibration.cc file in order to remove the hard-coded eta dependencies.

Assuming that the de-serialisation works...

We are not familiar with the technical term "de-serialisation" here; could you please explain it to us?
When the new code tries to look up a function/formula in an "old" payload, it throws an exception. We are not sure whether there is any fail-safe mechanism that could be employed to avoid such a crash (although we are also not sure whether the reco group would recommend that).

The behaviour of the old class while reading the new payloads should be unaltered. Please clarify.

Yes, to the best of our understanding.

Thanks.
Bhumika & Shubham

//added by bhumika Nov 2018
PFfcEta_BARRELH = 3019, PFfcEta_BARRELEH = 3020,
PFfcEta_ENDCAPH = 3021, PFfcEta_ENDCAPEH = 3022,
PFfdEta_ENDCAPH = 3023, PFfdEta_ENDCAPEH = 3024
Review comment by a Contributor on the ResultType enum values added above:

The relevant changes for data persistency are the ones affecting the CondFormats. Here you have added values to the ResultType enum. We probably don't have payloads of this class directly, but where (in what other class depending on it) is it used?

@bkansal commented May 3, 2019

Hi @ggovi

These enum values are used by the PFEnergyCalibration.cc [1] code.
That code passes the "ResultType" and the "point" (at which the formula is to be evaluated) to the getResult(..) function (defined in PerformancePayloadFromTFormula.cc [2]), and getResult(..) returns the evaluated value if it finds the corresponding formula in the payload; a sketch of this call chain is given after the references below.

These enum values are also used to produce compatible payloads, e.g. in ProducePFCalibration.py [3] and ProducePFCalibrationObject.cc [4].

Thanks
Bhumika & Shubham

[1] https://github.com/spandeyehep/cmssw/blob/PFEnergyCalibration_code_eta_dependency_removal/RecoParticleFlow/PFClusterTools/src/PFEnergyCalibration.cc#L546
[2] https://cmssdt.cern.ch/lxr/source/CondFormats/PhysicsToolsObjects/src/PerformancePayloadFromTFormula.cc#0028
[3] https://github.com/spandeyehep/cmssw/blob/PFEnergyCalibration_code_eta_dependency_removal/RecoParticleFlow/PFClusterTools/test/ProducePFCalibration.py
[4] https://github.com/spandeyehep/cmssw/blob/PFEnergyCalibration_code_eta_dependency_removal/RecoParticleFlow/PFClusterTools/test/ProducePFCalibrationObject.cc#L76-L81
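
The call chain above, in sketch form (editor's illustration with hypothetical function and variable names; only the ResultType/BinningPointByMap arguments and getResult(..) come from the referenced code):

// Calibration code builds a BinningPointByMap ("point"), picks a ResultType, and
// getResult(..) evaluates the TFormula stored in the PFCalibration payload for that type.
#include <cmath>
#include "CondFormats/PhysicsToolsObjects/interface/BinningPointByMap.h"
#include "CondFormats/PhysicsToolsObjects/interface/PerformancePayloadFromTFormula.h"

double hadronEtaCorrection(const PerformancePayloadFromTFormula& pfCalib, double eta) {
  BinningPointByMap point;                                   // the "point" to evaluate the formula at
  point.insert(BinningVariables::JetAbsEta, std::abs(eta));
  // Returns the evaluated value if the formula for this ResultType is found in the payload.
  return pfCalib.getResult(PerformanceResult::PFfdEta_ENDCAPH, point);
}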

@ggovi (Contributor) commented May 3, 2019

So, can you confirm that the above enum does not appear in any data member of a class used for persistent objects? In that case, the change should not create any problem.

@bkansal commented May 3, 2019

Hi @ggovi

So, can you confirm that the above enum does not appear in any data member of a class used for persistent objects? In that case, the change should not create any problem.

To the best of our knowledge, these enum values are not used anywhere apart from the code mentioned above.

@ggovi (Contributor) commented May 3, 2019

Ok, thanks!

@ggovi (Contributor) commented May 3, 2019

+1

@cmsbuild (Contributor) commented May 3, 2019

This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will now be reviewed by the release team before it's merged. @davidlange6, @slava77, @smuzaffar, @fabiocos (and backports should be raised in the release meeting by the corresponding L2)

@fabiocos (Contributor) commented May 3, 2019

+1

cmsbuild merged commit 45b5000 into cms-sw:master on May 3, 2019