
Ecal PFCluster corrections (74x) #7558

Merged: 6 commits, merged on Feb 5, 2015

Conversation

@bendavid (Contributor) commented on Feb 4, 2015

Adapt the PFCluster energy correction code to use the new regression weights from the conditions database (the current implementation uses an ESPrefer until the payloads are available in a GT).
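
Below is a minimal sketch of the interim conditions setup described above, assuming the payloads are read from a dedicated PoolDBESSource and preferred over the GlobalTag with an ESPrefer; the connection string and module label are illustrative, while the record and tag names are taken from the configuration snippet quoted further down in this thread.

import FWCore.ParameterSet.Config as cms
from CondCore.DBCommon.CondDBSetup_cfi import CondDBSetup

# Dedicated ESSource for the new GBRForestD PFCluster-correction payloads,
# used only until the tags are included in a global tag.
ecalPFClusterCorConditions = cms.ESSource("PoolDBESSource",
    CondDBSetup,
    connect = cms.string('frontier://FrontierProd/CMS_CONDITIONS'),
    toGet = cms.VPSet(
        cms.PSet(
            record = cms.string('GBRDWrapperRcd'),
            label = cms.untracked.string('GBRForestD_ecalPFClusterCor_EB_pfSize1_mean_50ns'),
            tag = cms.string('GBRForestD_ecalPFClusterCor_EB_pfSize1_mean_50ns')
        )
    )
)

# Prefer these payloads over whatever the GlobalTag provides for the same record.
es_prefer_ecalPFClusterCor = cms.ESPrefer("PoolDBESSource", "ecalPFClusterCorConditions")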

The PFAlgo also needed to be modified, since it was still using hardcoded, inlined corrections (the old corrections) instead of taking the corrected energy filled by the more modular correction scheme that @lgray added a while back.

The DataFormat change to CaloCluster adds just one additional float to store the uncertainty associated with the corrected energy. This is a new feature of the regression-based corrections; it is not currently used anywhere, but is intended as input to the higher-level electron and photon corrections.
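
For illustration, a short FWLite sketch of how the corrected energy and its new uncertainty could be read back from the ECAL PFClusters; the accessor name correctedEnergyUncertainty(), the collection label, and the file name are assumptions here, not taken from this pull request.

from DataFormats.FWLite import Events, Handle

# Assumed RECO file and collection label; reco::PFCluster inherits the
# corrected-energy members from reco::CaloCluster.
events = Events("reco.root")
clusters = Handle("std::vector<reco::PFCluster>")

for i, event in enumerate(events):
    if i >= 3:
        break
    event.getByLabel("particleFlowClusterECAL", clusters)
    for c in clusters.product():
        print("E_raw=%.2f  E_corr=%.2f +- %.2f" % (
            c.energy(), c.correctedEnergy(), c.correctedEnergyUncertainty()))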

Two caveats:

  1. The DataFormat change introduces some potential issues with checksum and class-version divergences, and incompatibility once we try to backport this to 73x. See https://hypernews.cern.ch/HyperNews/CMS/get/edmFramework/3433/1.html

  2. I am still running some internal closure tests against the standalone validation code for the regression training (to make sure there are no trivial mistakes, such as variables being defined in a way that doesn't match the training, or being passed to the BDTs in the wrong order in the CMSSW class). Assuming these checks are positive, physics checks of this pull request can already start in principle, but I would hold off merging until the outcome of those tests is confirmed.

@cmsbuild (Contributor) commented on Feb 4, 2015

A new Pull Request was created by @bendavid (Josh Bendavid) for CMSSW_7_4_X.

Ecal PFCluster corrections (74x)

It involves the following packages:

DataFormats/CaloRecHit
DataFormats/EgammaReco
DataFormats/ParticleFlowReco
RecoParticleFlow/PFClusterProducer
RecoParticleFlow/PFProducer

@cmsbuild, @cvuosalo, @nclopezo, @slava77 can you please review it and eventually sign? Thanks.
@mmarionncern, @Sam-Harper, @bachtis, @lgray this is something you requested to watch as well.
You can sign-off by replying to this message having '+1' in the first line of your reply.
You can reject by replying to this message having '-1' in the first line of your reply.
If you are a L2 or a release manager you can ask for tests by saying 'please test' in the first line of a comment.

@slava77 (Contributor) commented on Feb 4, 2015

@cmsbuild please test

I'll also start some higher stat tests on our side.
(A note for the future) Josh, considering this is submitted past the deadline, some heads-up would be nice.

@cmsbuild (Contributor) commented on Feb 4, 2015

The tests are being triggered in jenkins.

@bendavid (Contributor, Author) commented on Feb 4, 2015

After comments from @Dr15Jones on the HyperNews thread, there will shortly be some updates to the class versions and checksums (which are in any case orthogonal to the performance tests).

Inline review thread on the PoolDBESSource configuration:

cms.PSet(
    record = cms.string('GBRDWrapperRcd'),
    label = cms.untracked.string('GBRForestD_ecalPFClusterCor_EB_pfSize1_mean_50ns'),
    tag = cms.string('GBRForestD_ecalPFClusterCor_EB_pfSize1_mean_50ns')
)
Contributor:
so, the payloads are available, just not in a GT yet, right?

Contributor Author:

That's right.

Contributor:

Instead of adding a new PoolDBESSource module, you could add them directly to the global tag module (at least, years ago I was told that having two PoolDBESSource modules in the same configuration is Bad).

If you prefer to keep these in their own PoolDBESSource module, make sure that cmsDriver loads it together with the other conditions.
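
A minimal sketch of the first alternative, assuming the GlobalTag ESSource already defines an empty toGet VPSet (as in the standard conditions setup); the record, label, and tag names are taken from the snippet above, and the customisation-function wrapping is illustrative.

import FWCore.ParameterSet.Config as cms

def customisePFClusterCorConditions(process):
    # Attach the new GBRForestD payloads to the GlobalTag ESSource instead of
    # configuring a second PoolDBESSource.
    process.GlobalTag.toGet.append(
        cms.PSet(
            record = cms.string('GBRDWrapperRcd'),
            label = cms.untracked.string('GBRForestD_ecalPFClusterCor_EB_pfSize1_mean_50ns'),
            tag = cms.string('GBRForestD_ecalPFClusterCor_EB_pfSize1_mean_50ns')
        )
    )
    return process

Passing such a function through cmsDriver's --customise option would also address the request that it be loaded together with the other conditions.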

@slava77 (Contributor) commented on Feb 4, 2015

An exception of category 'Configuration' occurred while
   [0] Constructing the EventProcessor
   [1] Validating configuration of module: class=CorrectedECALPFClusterProducer label='hltParticleFlowClusterECALL1Seeded'
Exception Message:
Illegal parameter found in configuration.  The parameter is named:
 'applyCrackCorrections'
You could be trying to use a parameter name that is not
allowed for this plugin or it could be misspelled.
----- End Fatal Exception -------------------------------------------------

To reproduce: runTheMatrix.py -l 25.0 --useInput all

You should keep the parameter until ConfDB and the HLT configurations are updated.

@slava77 (Contributor) commented on Feb 4, 2015

-1

@bendavid (Contributor, Author) commented on Feb 5, 2015

(I would suggest holding off on the 73x backport and sample production until we are sure we're happy.)

@cmsbuild (Contributor) commented on Feb 5, 2015

This pull request is fully signed and it will be integrated in one of the next CMSSW_7_4_X IBs unless changes (tests are also fine). This pull request requires discussion in the ORP meeting before it's merged. @davidlange6, @Degano, @ktf, @smuzaffar

@davidlange6 (Contributor) commented:
+1

@mengleisun (Contributor) commented:
Hi @bendavid, @emanueledimarco: I have fixed the R9 and submitted PR #7595 for that.
