
data processing setup for 2017 era #17635

Conversation

@slava77 (Contributor) commented Feb 26, 2017

Added scenarios:

  • cosmicsEra_Run2_2017
  • hcalnzsEra_Run2_2017
  • ppEra_Run2_2017
  • ppEra_Run2_2017_trackingLowPU: can be used for runs with strips and no pixel tracker (as long as the pileup is reasonably low)

Additional changes needed to make dataProcessing work:

  • switch to using pickle instead of python.dump in the local test config generators to get a runnable configuration; pickle is already used in T0 config generation
  • skip calls in customizeHLTForPFTrackingPhaseI2017 if hltPixelLayerTriplets is not available (to resolve Skims_PDWG_cff broken by load of HLT_Fake1_cff in 2017 Eras #17634). It looks like the load of HLT_Fake1_cff is clean otherwise, in the sense that it does not modify the main process object
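As a rough illustration of why the pickle-based dump yields a directly runnable configuration (the `FakeProcess` class and its attributes are invented for this sketch, not the CMSSW API): pickling serializes the fully-built object, so loading it back gives an equivalent object without re-evaluating any Python source.

```python
import pickle

# Invented stand-in for a configuration object; not the CMSSW cms.Process API.
class FakeProcess:
    def __init__(self, name):
        self.name = name
        self.paths = ["reconstruction", "aod_output"]

process = FakeProcess("RECO")

# Serialize the fully-built object, as T0 config generation already does:
blob = pickle.dumps(process)

# Loading the blob reproduces the configuration directly, ready to use:
restored = pickle.loads(blob)
print(restored.name, restored.paths)
```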

Tested with an "uber-config" for the 2016 era:

python $CMSSW_BASE/src/Configuration/DataProcessing/test/RunPromptReco.py --scenario ppEra_Run2_2016\
 --reco --aod --miniaod --dqmio --global-tag 90X_dataRun2_relval_v4 \
--lfn=file:JetHT_Run2016B_RAW_274199_2C69107A-BD26-E611-911C-02163E0146B3.root  \
--alcareco SiStripCalZeroBias+SiStripCalMinBias+TkAlMinBias+HcalCalDijets+HcalCalIsoTrkFilter+HcalCalNoise+TkAlUpsilonMuMu+TkAlJpsiMuMu+EcalCalZElectron+EcalUncalZElectron+HcalCalIterativePhiSym+TkAlZMuMu+MuAlCalIsolatedMu+MuAlOverlaps+MuAlZMuMu+DtCalib+HcalCalDijets+EcalCalWElectron+EcalUncalWElectron+EcalESAlign+TkAlMuonIsolated+HcalCalHO+HcalCalGammaJet \
 --dqmSeq @common+@muon+@jetmet+@hcal  \
--PhysicsSkim=TopMuEG+LogError+LogErrorMonitor+HighMET+ZMu+MuTau

A similar config for 2017 parses OK but doesn't run on data due to GT inconsistencies.
The same config for 2017 runs OK on MC (with HcalCalIterativePhiSym removed).

@cmsbuild (Contributor)

A new Pull Request was created by @slava77 (Slava Krutelyov) for CMSSW_9_0_X.

It involves the following packages:

Configuration/DataProcessing
Configuration/Eras
HLTrigger/Configuration

@perrotta, @cmsbuild, @silviodonato, @Martin-Grunewald, @franzoni, @fwyzard, @davidlange6 can you please review it and eventually sign? Thanks.
@ghellwig, @makortel, @geoff-smith, @jalimena, @Martin-Grunewald this is something you requested to watch as well.
@davidlange6, @smuzaffar you are the release manager for this.

cms-bot commands are listed here #13028

@slava77 (Contributor, Author) commented Feb 26, 2017

@cmsbuild please test

@cmsbuild (Contributor) commented Feb 26, 2017

The tests are being triggered in jenkins.
https://cmssdt.cern.ch/jenkins/job/ib-any-integration/17973/console Started: 2017/02/26 20:21

@cmsbuild (Contributor)

Comparison job queued.

@cmsbuild (Contributor)

@@ -27,6 +27,9 @@ def modifyHLTPhaseIPixelGeom(process):
 
 # modify the HLT configuration to run the Phase I tracking in the particle flow sequence
 def customizeHLTForPFTrackingPhaseI2017(process):
+    if not hasattr(process, 'hltPixelLayerTriplets'):
+        # there could also be a message here that the call is done for non-HLT stuff
+        return process
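The guard in the diff can be exercised with a small stand-alone sketch (the `DummyProcess` class and the `customized` flag are invented for illustration; only the hasattr-and-return-early pattern mirrors the actual change):

```python
# Invented stand-in for a process object; not the real cms.Process.
class DummyProcess:
    pass

def customizeHLTForPFTrackingPhaseI2017(process):
    # Return the process unchanged when the HLT module is absent,
    # e.g. when this file is loaded indirectly by a non-HLT configuration.
    if not hasattr(process, 'hltPixelLayerTriplets'):
        return process
    # Illustrative marker standing in for the real HLT customisation.
    process.customized = True
    return process

p = DummyProcess()
customizeHLTForPFTrackingPhaseI2017(p)
assert not hasattr(p, 'customized')  # no-op for non-HLT configs

p.hltPixelLayerTriplets = object()
customizeHLTForPFTrackingPhaseI2017(p)
assert p.customized  # applied once the HLT module exists
```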
Contributor:

Is it because some HLT configurations do not have hltPixelLayerTriplets & co., or because this ends up being called for non-HLT configurations?

Contributor Author:

The latter: this file is loaded (indirectly) by Configuration.Skimming.Skims_PDWG_cff in non-HLT configurations.

Contributor:

I see, but the Era-based customisation should still affect only the HLT part, right?

Contributor Author:

Yes, but it can't customise something that doesn't already exist in the HLT part. That's why this "if hasattr" is here.

@Martin-Grunewald (Contributor)

+1

import FWCore.ParameterSet.Config as cms

from Configuration.Eras.Era_Run2_2017_cff import Run2_2017
from Configuration.Eras.Modifier_trackingLowPU_cff import trackingLowPU

Run2_2017_trackingLowPU = cms.ModifierChain(Run2_2017, trackingLowPU)
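As a plain-Python sketch of the modifier-chain idea (an illustration of the composition concept, not the FWCore implementation; the member names here are taken from this discussion but the classes are invented): a chain groups modifiers so that activating the chain activates every member, including the members of nested chains.

```python
# Plain-Python sketch of a modifier chain; not the FWCore API.
class Modifier:
    def __init__(self, name):
        self.name = name

class ModifierChain:
    def __init__(self, *members):
        # Flatten nested chains so each leaf Modifier appears once.
        self.members = []
        for m in members:
            if isinstance(m, ModifierChain):
                self.members.extend(m.members)
            else:
                self.members.append(m)

trackingLowPU = Modifier("trackingLowPU")
Run2_2017 = ModifierChain(Modifier("run2_common"), Modifier("run2_GEM_2017"))
Run2_2017_trackingLowPU = ModifierChain(Run2_2017, trackingLowPU)

# The composed chain carries all leaf modifiers of its members:
print([m.name for m in Run2_2017_trackingLowPU.members])
```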
Contributor:

Here Run2_2017_core should be used instead of Run2_2017. Adding trackingLowPU will likely have strange effects, as trackingPhase1QuadProp is already active, and several automated constructs assume only one of the tracking sub-eras is active (hmm, maybe I should try to think how to add a check for that...). OTOH, run2_GEM_2017 is currently only in Run2_2017, while IMHO it should be in Run2_2017_core. Further OTOH, after #17605 and with #17612 we should anyway simplify the era hierarchy (I guess it's mostly up to deciding who does the job).

Contributor Author:

I can do the changes, but it sounds like it's more than just changing to Run2_2017_core.

Contributor:

Doing all the cleanup/restructuring, yes, that would be more than switching to run2_2017_core. I can also do the cleanup part if you prefer.

Contributor:

@makortel if you can do the cleanup using copyAndExclude, including the fix I mentioned in #15496 for the 2018 Era, that would be great

Contributor Author:

If the cleanup is done already, will I still need to change this to run2_2017_core, or will this part change as well?
If there is coupling, I'd rather leave this as is.
Matti, please clarify.
Thanks.

Contributor:

@slava77 It will need a change, which will conflict with changing it to run2_2017_core. Because of the coupling, I agree it's easiest to leave it as it is (since it will be fixed soon).

Other thought: if trackingLowPU configuration should be kept working also in 2017, it would probably be good to add a matrix workflow for it, right?

Contributor Author:

Yes, a workflow with it would be good to have (not in the short matrix used by jenkins, though).

Contributor:

Here is the cleanup PR #17644. I suggest we let these two go independently, and then I(?) make a further PR fixing Run2_2017_trackingLowPU and adding the workflow for IBs (or, if this PR gets merged first, I can make these subsequent developments in #17644).

@franzoni

@kpedro88 and @abdoulline may want to follow this as well

In order to test ppEra_Run2_2017 meaningfully, we've created a candidate global tag:

90X_dataRun2_Prompt_Candidate_2017_02_27_07_32_56_ONLY_FOR_TESTS

including, on top of what was used for the replay with 2016-like conditions (diff with respect to 90X_dataRun2_Prompt_v1):

  • the GEM geometry
  • the HCAL plan1 geometry
  • the HCAL plan1 conditions (from this hn; we are iterating on possible modifications to these tags, which however can provide conditions for tests)

Testing such GT on a recent run from MWGR1 using CMSSW_9_0_X_2017-02-25-1100 + this PR:

python $CMSSW_BASE/src/Configuration/DataProcessing/test/RunPromptReco.py --scenario ppEra_Run2_2017 \
 --reco --aod --miniaod --dqmio --global-tag 90X_dataRun2_Prompt_Candidate_2017_02_27_07_32_56_ONLY_FOR_TESTS \
--lfn=file:/afs/cern.ch/user/f/franzoni/public/Commissioning2017-MinimumBias-RAW-v1-000-287-961-00000/12ADDEC7-0DF9-E611-B2D9-02163E01A4BB.root  \
--alcareco SiStripCalZeroBias+SiStripCalMinBias+TkAlMinBias+HcalCalDijets+HcalCalIsoTrkFilter+HcalCalNoise+TkAlUpsilonMuMu+TkAlJpsiMuMu+EcalCalZElectron+EcalUncalZElectron+HcalCalIterativePhiSym+TkAlZMuMu+MuAlCalIsolatedMu+MuAlOverlaps+MuAlZMuMu+DtCalib+HcalCalDijets+EcalCalWElectron+EcalUncalWElectron+EcalESAlign+TkAlMuonIsolated+HcalCalHO+HcalCalGammaJet \
 --dqmSeq @common+@muon+@jetmet+@hcal  \
--PhysicsSkim=TopMuEG+LogError+LogErrorMonitor+HighMET+ZMu+MuTau

we get an error:

cmsRun: /build/cmsbld/jenkins-workarea/workspace/build-any-ib/w/tmp/BUILDROOT/ba27bd8363d79ce0030a667df5803d63/opt/cmssw/slc6_amd64_gcc530/cms/cmssw-patch/CMSSW_9_0_X_2017-02-25-1100/src/CalibCalorimetry/HcalAlgos/interface/HcalSiPMnonlinearity.h:10: HcalSiPMnonlinearity::HcalSiPMnonlinearity(const std::vector<float>&): Assertion `pars.size() == 3` failed.

We got the same error when adding only the GEM geometry and using the 2016 HCAL conditions.
We won't be able to debug this much today, due to a conflict with the AlCa/Db workshop.

@franzoni

We also have a candidate GT from the HLT queue:

90X_dataRun2_HLT_Candidate_2017_02_27_07_32_58_ONLY_FOR_TESTS

@degrutto, please take note of it.

We have not done tests equivalent to those done for prompt.

@slava77 (Contributor, Author) commented Feb 27, 2017

@franzoni @davidlange6 (operations signatories) please check and sign or suggest changes if needed.
Thank you.

@davidlange6 (Contributor)

Was just waiting for the discussion to conclude.

@davidlange6 merged commit 88940d2 into cms-sw:CMSSW_9_0_X on Feb 27, 2017
@deguio (Contributor) commented Feb 27, 2017

DISCARD THIS MESSAGE

@franzoni
the run you were using was most likely a bad one. Moving to a global run with HEP17, which we know was a long and good one, solved the issue for me. Here is the command I used:

python $CMSSW_BASE/src/Configuration/DataProcessing/test/RunPromptReco.py --scenario ppEra_Run2_2017 \
 --reco --aod --miniaod --dqmio --global-tag 90X_dataRun2_Prompt_Candidate_2017_02_27_07_32_56_ONLY_FOR_TESTS \
 --lfn=/store/data/Commissioning2017/Cosmics/RAW/v1/000/287/167/00000/0EE39B24-8AEF-E611-B712-02163E019CB0.root \
 --alcareco SiStripCalZeroBias+SiStripCalMinBias+TkAlMinBias+HcalCalDijets+HcalCalIsoTrkFilter+HcalCalNoise+TkAlUpsilonMuMu+TkAlJpsiMuMu+EcalCalZElectron+EcalUncalZElectron+HcalCalIterativePhiSym+TkAlZMuMu+MuAlCalIsolatedMu+MuAlOverlaps+MuAlZMuMu+DtCalib+HcalCalDijets+EcalCalWElectron+EcalUncalWElectron+EcalESAlign+TkAlMuonIsolated+HcalCalHO+HcalCalGammaJet \
 --dqmSeq @common+@muon+@jetmet+@hcal \
 --PhysicsSkim=TopMuEG+LogError+LogErrorMonitor+HighMET+ZMu+MuTau

@deguio (Contributor) commented Feb 27, 2017

We are in any case going to investigate the source of the problem.

@abdoulline

@deguio
Federico, the problem is that the new IOV for the Plan 1 conditions starts from run 287446...
So if you took 287167, the conditions are still the Run 2 (2016) ones, and the geometry must also be Run 2...
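The IOV behaviour described above, where a payload applies from its first run onward until the next IOV starts, can be sketched with a simple lookup (run numbers are taken from this thread; the function and data layout are illustrative, not the CMSSW conditions API):

```python
import bisect

# Each IOV: (first run it covers, payload label), sorted by start run.
# 287446 is the first run covered by the Plan 1 conditions, per the discussion.
iovs = [(1, "Run 2 (2016) conditions"), (287446, "HCAL Plan 1 conditions")]
starts = [start for start, _ in iovs]

def payload_for_run(run):
    # Find the last IOV whose start run is <= the requested run.
    idx = bisect.bisect_right(starts, run) - 1
    return iovs[idx][1]

print(payload_for_run(287167))  # run before the new IOV
print(payload_for_run(288236))  # run after the new IOV starts
```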

@deguio (Contributor) commented Feb 27, 2017

You are right, I realized that; wrong file.
I wanted to test on run 287446, which is being transferred to T2_CERN; I can try as soon as it is available.
Sorry for the confusion.

@deguio (Contributor) commented Feb 27, 2017

Let me report here as well.

I've tried with a global run from yesterday that @abenagli provided:

Here is the file:
~deguio/public/ForKevin/USC_288236_streamDQM.root --> repacked from a streamer file from p5

Here is the command:

python $CMSSW_BASE/src/Configuration/DataProcessing/test/RunPromptReco.py --scenario ppEra_Run2_2017 \
 --reco --aod --miniaod --dqmio --global-tag 90X_dataRun2_Prompt_Candidate_2017_02_27_07_32_56_ONLY_FOR_TESTS \
 --lfn=file:USC_288236_streamDQM.root \
 --alcareco SiStripCalZeroBias+SiStripCalMinBias+TkAlMinBias+HcalCalDijets+HcalCalIsoTrkFilter+HcalCalNoise+TkAlUpsilonMuMu+TkAlJpsiMuMu+EcalCalZElectron+EcalUncalZElectron+HcalCalIterativePhiSym+TkAlZMuMu+MuAlCalIsolatedMu+MuAlOverlaps+MuAlZMuMu+DtCalib+HcalCalDijets+EcalCalWElectron+EcalUncalWElectron+EcalESAlign+TkAlMuonIsolated+HcalCalHO+HcalCalGammaJet \
 --dqmSeq @common+@muon+@jetmet+@hcal \
 --PhysicsSkim=TopMuEG+LogError+LogErrorMonitor+HighMET+ZMu+MuTau

Few events, but no crashes and clean logs (at least from the HCAL side).

@deguio (Contributor) commented Feb 27, 2017

This is understood; reporting below the explanation by @abenagli.
Bottom line: this file can be used for testing:
~deguio/public/ForKevin/USC_288236_streamDQM.root


The unpacker is fine, as well as the emap.

If run 287961 was used for these checks, a pure ngHE F/W was loaded into crate 11, so I'm guessing that all channels of that µHTR were flagged as "ng" by the µHTR, including the HB ones, and consequently tentatively unpacked as such.

A first working version of the mixed µHTR F/W was deployed in crate 11 on Feb. 24th, so only runs after that date should be used for reconstruction tests.

In particular, the Sunday-to-Monday overnight run is a good candidate (288236).
Prompt reco might not be available yet, but the file indicated by Fede can be used in the meantime:

~deguio/public/ForKevin/USC_288236_streamDQM.root

@franzoni

Thanks @deguio for the explanations and additional tests.
We'll proceed with composing GTs for HLT, express, and prompt, and pass them on to the relevant teams.
