Run 1 reco tuning #8691

Merged: 2 commits merged on Apr 20, 2015

Conversation

rovere (Contributor) commented Apr 9, 2015

This PR is meant to reproduce Run1-like tracking in CMSSW 75X. There are caveats, of course, and you should not expect exactly the same reconstruction code that we had in 53X.

  • All the architectural changes have been kept, like the sane usage of EDProducers instead of the insane use of ES quantities.
  • In general, all C++-related changes have been kept.
  • The JetCore iteration and ClusterSplitting have been removed.
  • The order of the iterations has been restored.
  • The muon-specific iterations have been kept but, if you want, you can use the earlyGeneralTracks collection that is produced before running these iterations.
  • The usage of the MVA has been disabled and the old selector cuts have been restored, taken from 5_3_27 (modulo human errors in diff-ing configuration files).
  • PixelVertices and PixelTracks have both been restored.

In order to use the Run1-like tracking, please either add:

--customise RecoTracker/Configuration/customiseForRunI.customiseForRunI

to the cmsDriver.py command, or add:

from RecoTracker.Configuration.customiseForRunI import customiseForRunI

#call to customisation function customiseForRunI imported from RecoTracker.Configuration.customiseForRunI
process = customiseForRunI(process)

directly into your final python configuration file.
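
For example, a minimal sketch of the second approach (the process name and input file below are illustrative, not taken from this PR; the customisation call goes at the end of the configuration, as cmsDriver.py would place it):

import FWCore.ParameterSet.Config as cms

process = cms.Process('RECO')
process.load('Configuration.StandardSequences.Reconstruction_cff')
process.source = cms.Source('PoolSource',
    fileNames = cms.untracked.vstring('file:step2.root')
)

#call to customisation function customiseForRunI imported from RecoTracker.Configuration.customiseForRunI
from RecoTracker.Configuration.customiseForRunI import customiseForRunI
process = customiseForRunI(process)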

cmsbuild (Contributor) commented Apr 9, 2015

A new Pull Request was created by @rovere (Marco Rovere) for CMSSW_7_5_X.

Run 1 reco tuning

It involves the following packages:

RecoTracker/Configuration
RecoTracker/IterativeTracking

@cmsbuild, @cvuosalo, @nclopezo, @slava77 can you please review it and eventually sign? Thanks.
@ghellwig, @makortel, @GiacomoSguazzoni, @VinInn, @appeltel, @mschrode, @gpetruc, @cerati, @dgulhan, @venturia this is something you requested to watch as well.
You can sign-off by replying to this message having '+1' in the first line of your reply.
You can reject by replying to this message having '-1' in the first line of your reply.
If you are a L2 or a release manager you can ask for tests by saying 'please test' in the first line of a comment.
@nclopezo you are the release manager for this.
You can merge this pull request by typing 'merge' in the first line of your comment.

slava77 (Contributor) commented Apr 9, 2015

@cmsbuild please test

slava77 (Contributor) commented Apr 9, 2015

We need to discuss whether this stays an experimental customisation or becomes a regularly tested setup (with corresponding relvals etc.).
Also, if it goes beyond an experimental setup, it will need to be made era-aware.

cmsbuild (Contributor) commented Apr 9, 2015

The tests are being triggered in jenkins.

rovere (Contributor Author) commented Apr 9, 2015

Ciao Slava,
I agree. Let's first see if this turns out to be useful to anybody. If so, I can make it era-aware, with some guidance.
As for testing, unless you activate the customisation, I hope there will be no changes 😃

cmsbuild (Contributor) commented Apr 9, 2015

# use the standard (non pre-splitting) pixel cluster shape cache in the globalreco sequence
process.globalreco.replace(process.siPixelClusterShapeCachePreSplitting, process.siPixelClusterShapeCache)

# Now restore pixelVertices wherever was not possible with an ad-hoc RunI cfg
process.muonSeededTracksInOutSelector.vertices = 'pixelVertices'
slava77 (Contributor):

these were not in RunI (53X).

rovere (Contributor Author):

Ciao Slava,
I am not sure what you mean here: which line are you referring to, exactly?
As I mentioned in the comment, this is not meant to be a 1:1 replica of RunI tracking, but the best approximation we could have, including all C++ and class layout changes that have been done in between.

slava77 (Contributor):

I mean the muonSeededTracksInOutSelector

rovere (Contributor Author):

Ciao Slava,
getting rid of them would have been more work than added value, I fear. As I wrote in the initial comment, if you want to ignore them, you can use the earlyGeneralTracks collection, which is produced before running the muon-seeded iterations.
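
For illustration, a minimal sketch of pointing a consumer at earlyGeneralTracks (the analyzer module and its parameter name here are hypothetical, not part of this PR):

# hypothetical consumer: read the track collection produced before the
# muon-seeded iterations instead of the final generalTracks
process.myTrackAnalyzer = cms.EDAnalyzer('MyTrackAnalyzer',
    tracks = cms.InputTag('earlyGeneralTracks')
)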

slava77 (Contributor):

That's fine, I understand.

slava77 (Contributor) commented Apr 18, 2015

+1

for #8691 ed0a3c1
I checked (in CMSSW_7_5_0_pre2, test area test8691/) that it runs, to see what the impact is for the PU35@25ns scenario.

The total time per event increases from 17 s to 28 s:

  • about +10 s comes from the increased time in tobTecStepTrackCandidates and pixelLessStepTrackCandidates
  • a few other modules (PF, muons, etc.) driven by track multiplicity account for +1 to +2 s
  • jetCore and the other removed sequences account for −1.9 s

No surprises here.

cmsbuild (Contributor):

This pull request is fully signed and it will be integrated in one of the next CMSSW_7_5_X IBs unless changes (tests are also fine). This pull request requires discussion in the ORP meeting before it's merged. @davidlange6, @nclopezo, @ktf, @smuzaffar

davidlange6 (Contributor):

+1
