
Use correct beamspot for 2016 MC + update MC tracker alignment scenario for 2016 pre-VFP era #28624

Merged
merged 6 commits into from Jan 8, 2020

Conversation

christopheralanwest
Contributor

@christopheralanwest christopheralanwest commented Dec 13, 2019

PR description:

The MC used so far to simulate 2016 data assumes a beamspot appropriate for data taken in 2015 with 25 ns bunch spacing. The 2016 beamspot simulation was previously left unchanged to avoid inconsistencies between already-generated 2016 MC and any newly generated MC. No such consistency requirement exists for the UL campaign, which will regenerate all GEN-SIM, so the correct beamspot simulation can now be used. See the news from the 12 December 2019 PPD meeting for details.

The global tag updates change the BeamSpotObjectsRcd tag to BeamSpotObjects_Realistic25ns_13TeV2016Collisions_v1_mc for 2016 MC scenarios. The vertex smearing used by 2016 workflows has also been updated for consistency.

It also updates the MC tracker alignment scenario for the 2016 pre-VFP era.

Global tag diffs
https://cms-conddb.cern.ch/cmsDbBrowser/diff/Prod/gts/110X_mcRun2_design_v3/110X_mcRun2_design_v4
https://cms-conddb.cern.ch/cmsDbBrowser/diff/Prod/gts/110X_mcRun2_asymptotic_preVFP_v1/110X_mcRun2_asymptotic_preVFP_v3
https://cms-conddb.cern.ch/cmsDbBrowser/diff/Prod/gts/110X_mcRun2_asymptotic_v5/110X_mcRun2_asymptotic_v6
https://cms-conddb.cern.ch/cmsDbBrowser/diff/Prod/gts/110X_mcRun2cosmics_startup_deco_v4/110X_mcRun2cosmics_startup_deco_v5

PR validation:

The beamspot parameters are explicitly listed on slide 5 of the news from the 12 December 2019 PPD meeting together with fitted beamspot parameters from data and MC. In addition, a technical test was performed: runTheMatrix.py -l 7.22,limited --ibeos. Note that no RelVal workflows use either auto:run2_design or auto:run2_mc_l1stage1 (though the addOn tests use auto:run2_mc_l1stage1) so a similar technical test cannot be performed for these GTs.

if this PR is a backport please specify the original PR:

This PR is not a backport but will be backported to 11_0_X and 10_6_X.

@cmsbuild
Contributor

The code-checks are being triggered in jenkins.

@cmsbuild
Contributor

+code-checks

Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-28624/13159

@cmsbuild
Contributor

A new Pull Request was created by @christopheralanwest for master.

It involves the following packages:

Configuration/AlCa
Configuration/PyReleaseValidation

@cmsbuild, @chayanit, @zhenhu, @christopheralanwest, @tocheng, @pgunnell, @franzoni, @kpedro88, @tlampen, @pohsun can you please review it and eventually sign? Thanks.
@makortel, @Martin-Grunewald, @mmusich, @tocheng this is something you requested to watch as well.
@davidlange6, @slava77, @fabiocos you are the release manager for this.

cms-bot commands are listed here

@christopheralanwest
Contributor Author

please test workflow 7.22

@cmsbuild
Contributor

cmsbuild commented Dec 13, 2019

The tests are being triggered in jenkins.
Test Parameters:

@cmsbuild
Contributor

+1
Tested at: 7b0f127
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-a414fd/3972/summary.html
CMSSW: CMSSW_11_1_X_2019-12-13-1100
SCRAM_ARCH: slc7_amd64_gcc820

@cmsbuild
Contributor

Comparison job queued.

@cmsbuild
Contributor

Comparison is ready
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-a414fd/3972/summary.html

@slava77 comparisons for the following workflows were not done due to missing matrix map:

  • /data/cmsbld/jenkins/workspace/compare-root-files-short-matrix/results/JR-comparison/PR-a414fd/7.22_Cosmics_UP16+Cosmics_UP16+DIGICOS_UP16+RECOCOS_UP16+ALCACOS_UP16+HARVESTCOS_UP16

Comparison Summary:

  • No significant changes to the logs found
  • Reco comparison results: 6639 differences found in the comparisons
  • DQMHistoTests: Total files compared: 34
  • DQMHistoTests: Total histograms compared: 2798405
  • DQMHistoTests: Total failures: 20614
  • DQMHistoTests: Total nulls: 0
  • DQMHistoTests: Total successes: 2777450
  • DQMHistoTests: Total skipped: 341
  • DQMHistoTests: Total Missing objects: 0
  • DQMHistoSizes: Histogram memory added: 0.654 KiB( 33 files compared)
  • DQMHistoSizes: changed ( 25202.0 ): 0.654 KiB SiStrip/MechanicalView
  • Checked 147 log files, 16 edm output root files, 34 DQM output files

@mmusich
Contributor

mmusich commented Dec 14, 2019

@christopheralanwest thanks for this.
I have a few comments:

  • indeed there is no RelVal wf which runs auto:run2_mc_l1stage1 but there is one addOn test which consumes it:
    'fastsim1' : ["cmsDriver.py TTbar_13TeV_TuneCUETP8M1_cfi --conditions auto:run2_mc_l1stage1 --fast -n 100 --eventcontent AODSIM,DQM --relval 100000,1000 -s GEN,SIM,RECOBEFMIX,DIGI:pdigi_valid,L1,DIGI2RAW,L1Reco,RECO,EI,VALIDATION --customise=HLTrigger/Configuration/CustomConfigs.L1THLT --datatier GEN-SIM-DIGI-RECO,DQMIO --beamspot NominalCollision2015 --era Run2_25ns"],
    The vertex smearing there is NominalCollision2015, which in turn points to:
    'NominalCollision2015' : 'IOMC.EventVertexGenerators.VtxSmearedNominalCollision2015_cfi',
    i.e.
    NominalCollision2015VtxSmearingParameters = cms.PSet(
        Phi = cms.double(0.0),
        BetaStar = cms.double(65.0),
        Emittance = cms.double(5.411e-08),
        Alpha = cms.double(0.0),
        SigmaZ = cms.double(5.3),
        TimeOffset = cms.double(0.0),
        X0 = cms.double(0.0322),
        Y0 = cms.double(0.0),
        Z0 = cms.double(0.0)
    )
    I am wondering if that should not be changed as well. Maybe @Martin-Grunewald can clarify what is the preference on the TSG side.
  • as for auto:run2_design (as it is also not exercised in any RelVal) we might take the occasion to move it to consume a really ideal BeamSpot as it is already done for the upgrade 2017 and 2018 workflows which point to Ideal_Centered_SLHC_v3. The dump is the following:
    [image: dump of the Ideal_Centered_SLHC_v3 beamspot payload]
  • the vertex smearing Realistic50ns13TeVCollision is used in combination with the autoCond key run2_mc in at least one unit test (Tau embedding):
    cmsDriver.py TauAnalysis/MCEmbeddingTools/python/EmbeddingPythia8Hadronizer_cfi.py --filein file:lhe_and_cleaned.root --fileout file:simulated_and_cleaned.root --conditions auto:run2_mc --era Run2_2016 --eventcontent RAWRECO --step GEN,SIM,DIGI,L1,DIGI2RAW,RAW2DIGI,RECO --datatier RAWRECO --customise TauAnalysis/MCEmbeddingTools/customisers.customiseGenerator --beamspot Realistic50ns13TeVCollision -n -1 --customise_commands "process.generator.nAttempts = cms.uint32(1000)\n" --python_filename simulation.py || die 'Failure during Simulation step' $?
    After this update the consistency in that test will be broken. I am wondering if there are other cases in cmssw.
  • last but not least, the Realistic50ns13TeVCollision is used as the default key
    VtxSmearedDefaultKey='Realistic50ns13TeVCollision'
    which in my understanding is used by the ConfigBuilder in case the Beam Spot is NOT specified, see
    self._options.beamspot=VtxSmearedDefaultKey

    I am wondering in how many cases consistency will be broken.
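The key-to-module lookup and default-key fallback discussed in the bullets above can be sketched as follows. This is a minimal illustration, not the actual CMSSW code: the dictionary entries are taken from the snippets quoted in this thread, and `resolve_vtx_smearing` is a hypothetical helper standing in for the logic split between Configuration/StandardSequences/python/VtxSmeared.py and ConfigBuilder.

```python
# Illustrative sketch of the beamspot-key lookup: each key maps to a
# vertex-smearing cfi module, and an unspecified --beamspot falls back
# to VtxSmearedDefaultKey (entries copied from this thread's quotes).
VtxSmeared = {
    'NominalCollision2015':
        'IOMC.EventVertexGenerators.VtxSmearedNominalCollision2015_cfi',
    'Realistic50ns13TeVCollision':
        'IOMC.EventVertexGenerators.VtxSmearedRealistic50ns13TeVCollision_cfi',
    'Realistic25ns13TeV2016Collision':
        'IOMC.EventVertexGenerators.VtxSmearedRealistic25ns13TeV2016Collision_cfi',
}
VtxSmearedDefaultKey = 'Realistic50ns13TeVCollision'

def resolve_vtx_smearing(beamspot=None):
    """Return the cfi module path for the requested beamspot key,
    falling back to the default key when none is specified."""
    key = beamspot or VtxSmearedDefaultKey
    return VtxSmeared[key]
```

This makes the consistency concern concrete: any workflow that omits `--beamspot` silently picks up the 2015-era `Realistic50ns13TeVCollision` smearing, which after this PR no longer matches the beamspot payload in the updated 2016 MC global tags.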

@christopheralanwest
Contributor Author

Thanks for your comments.

@christopheralanwest thanks for this.
I have a few comments:

* indeed there is no RelVal wf which runs `auto:run2_mc_l1stage1` but there is one `addOn` test which consumes it: https://github.com/cms-sw/cmssw/blob/0d3768d63689ca8b98c9a08c206566e5eebf2bdb/Utilities/ReleaseScripts/scripts/addOnTests.py#L92
    As the vertex smearing there is `NominalCollision2015` which in turn points to : https://github.com/cms-sw/cmssw/blob/001ecd07d2fceae0f173a3ea169831f3171b03bb/Configuration/StandardSequences/python/VtxSmeared.py#L47
   i.e. https://github.com/cms-sw/cmssw/blob/001ecd07d2fceae0f173a3ea169831f3171b03bb/IOMC/EventVertexGenerators/python/VtxSmearedParameters_cfi.py#L459-L468
   I am wondering if that should not be changed as well. Maybe @Martin-Grunewald can clarify what is the preference on the TSG side.

In this case, I think that I should revert the change to the auto:run2_mc_l1stage1 GT but I will wait for a reply from @Martin-Grunewald.

* as for `auto:run2_design` (as it is also not exercised in any RelVal) we might take the occasion to move it to consume a really ideal BeamSpot as it is already done for the upgrade 2017 and 2018 workflows which point to `Ideal_Centered_SLHC_v3`. The dump is the following:
  ![image](https://user-images.githubusercontent.com/5082376/70847234-6b0f1400-1e62-11ea-8ca8-fa86b2377757.png)

OK. I will make this change, unless there are any objections.

* the vertex smearing `Realistic50ns13TeVCollision` is used in combination with the autoCond key `run2_mc` in at least one unit test (Tau embedding):  https://github.com/cms-sw/cmssw/blob/001ecd07d2fceae0f173a3ea169831f3171b03bb/TauAnalysis/MCEmbeddingTools/test/runtests.sh#L37

It looks like this is purely a unit test and so could be changed to use the Realistic25ns13TeV2016Collision vertex smearing for consistency with the other 2016 workflows. @isobelojalvo @mbluj Would that be OK?

   After this update the consistency in that test will be broken. I am wondering if there are other cases in cmssw.

* last but not least, the `Realistic50ns13TeVCollision` is used as the default key https://github.com/cms-sw/cmssw/blob/001ecd07d2fceae0f173a3ea169831f3171b03bb/Configuration/StandardSequences/python/VtxSmeared.py#L68
   which in my understanding is used by the ConfigBuilder in case the Beam Spot is **NOT** specified, see https://github.com/cms-sw/cmssw/blob/7e27750861f9d80a2254a83af0d1f9f94f7f803f/Configuration/Applications/python/ConfigBuilder.py#L995

  I am wondering in how many cases consistency will be broken.

Using a 2015 beamspot does not seem to be a sensible default from a physics standpoint, but if the beamspot is not specified explicitly, that (hopefully) means the choice of beamspot is not particularly relevant for that workflow and only consistency is needed. I will defer to PdmV experts on whether or not the default beamspot should be changed.

@mbluj
Contributor

mbluj commented Dec 16, 2019

* the vertex smearing `Realistic50ns13TeVCollision` is used in combination with the autoCond key `run2_mc` in at least one unit test (Tau embedding):  https://github.com/cms-sw/cmssw/blob/001ecd07d2fceae0f173a3ea169831f3171b03bb/TauAnalysis/MCEmbeddingTools/test/runtests.sh#L37

It looks like this is purely a unit test and so could be changed to use the Realistic25ns13TeV2016Collision vertex smearing for consistency with the other 2016 workflows. @isobelojalvo @mbluj Would that be OK?

I think that it is OK. @janekbechtel , could you confirm as embedding expert, please?

@cmsbuild
Contributor

Comparison is ready
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-a414fd/4052/summary.html

@slava77 comparisons for the following workflows were not done due to missing matrix map:

  • /data/cmsbld/jenkins/workspace/compare-root-files-short-matrix/results/JR-comparison/PR-a414fd/7.22_Cosmics_UP16+Cosmics_UP16+DIGICOS_UP16+RECOCOS_UP16+ALCACOS_UP16+HARVESTCOS_UP16

Comparison Summary:

  • No significant changes to the logs found
  • Reco comparison results: 6639 differences found in the comparisons
  • DQMHistoTests: Total files compared: 34
  • DQMHistoTests: Total histograms compared: 2813777
  • DQMHistoTests: Total failures: 20614
  • DQMHistoTests: Total nulls: 0
  • DQMHistoTests: Total successes: 2792822
  • DQMHistoTests: Total skipped: 341
  • DQMHistoTests: Total Missing objects: 0
  • DQMHistoSizes: Histogram memory added: 0.654 KiB( 33 files compared)
  • DQMHistoSizes: changed ( 25202.0 ): 0.654 KiB SiStrip/MechanicalView
  • Checked 147 log files, 16 edm output root files, 34 DQM output files

@christopheralanwest
Contributor Author

+1

@chayanit

+1

@civanch
Contributor

civanch commented Dec 21, 2019

+1

@santocch

+1

@kpedro88
Contributor

kpedro88 commented Jan 7, 2020

+upgrade

@cmsbuild
Contributor

cmsbuild commented Jan 7, 2020

This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will now be reviewed by the release team before it's merged. @davidlange6, @slava77, @smuzaffar, @fabiocos, @silviodonato (and backports should be raised in the release meeting by the corresponding L2)

@silviodonato
Contributor

+1
