
Tk seeding #2884

Merged
merged 27 commits into from Mar 20, 2014

Conversation

VinInn
Contributor

@VinInn VinInn commented Mar 17, 2014

Migration of Seeders:

Currently the hits in the collections created by LocalReco are transformed into TTRH and stored in the SeedingLayerSetsHits. SeedingHits are then created; those that survive are refitted and stored in the TrajectorySeed (as normal TRH in an OwnVector).

The migration strategy consists of:

  1. storing in the SeedingLayerSetsHits simple pointers to the TRH created by LocalReco and already present in the event;
  2. proceeding as usual;
  3. then storing in the final TrajectorySeed the clones created during refit.

This should reduce memory use and memory churn. In addition we use "BasicTrackerRecHit" as the base type, which has very few virtual methods: all access to position etc. will then be immediate and inlined.
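The few-virtuals design point can be sketched as follows. The class and member names below are illustrative, not the actual CMSSW types: the idea is that the position data lives in the base class behind non-virtual accessors, so every read can be inlined, and virtual dispatch remains only for operations that truly vary by hit type (such as cloning).

```cpp
#include <cassert>

// Hypothetical sketch, not the real BasicTrackerRecHit: position is stored in
// the base class and read through non-virtual accessors that the compiler can
// inline; only clone() (behavior that genuinely varies per hit type) is
// virtual.
class BasicHit {
public:
  virtual ~BasicHit() = default;

  // non-virtual: resolved at compile time and inlined at the call site
  float x() const { return x_; }
  float y() const { return y_; }

  // one of the few virtuals
  virtual BasicHit* clone() const = 0;

protected:
  BasicHit(float x, float y) : x_(x), y_(y) {}
  float x_;
  float y_;
};

class PixelLikeHit final : public BasicHit {
public:
  PixelLikeHit(float x, float y) : BasicHit(x, y) {}
  PixelLikeHit* clone() const override { return new PixelLikeHit(*this); }
};
```

Accessing `x()` through a `BasicHit` reference costs a plain inlined load, whereas a fully virtual interface would pay an indirect call per access.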
Caveat: in the process some new hits are created:

  1. by the HitExtractor, if it is required to keep one of the components of a partially masked MatchedHit (this option is off, and one can even wonder whether it should be kept at all);
  2. by the "on demand" regions (Rectangular and Cosmic) that ask the measurementDet for TTRH;
  3. by the MultiHitGenerator when refitting (most probably we could create the triplets with the original hits instead).

In addition, there are a couple of (obsolete?) Seeders that have not yet been migrated to the SeedingLayerSetsHits interface.

For 2 and 3 a simple cache of unique_ptr suffices: the lifetime of the region matches exactly the required one, and MultiHitGenerator already has a cache for the triplets and a clear method. Managing the cache in SeedingLayerSetsHits is a bit more complex, so I decided to introduce a lightweight smart pointer that "may own" an object. This is also needed to support the old layer-set interface. We may decide later to simplify this whole infrastructure, in particular if "projected hits" in layers are not really needed (in which case we could move to a less optimal double cache in SeedingLayerSetsHits as well).
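A minimal sketch of such a "may own" pointer follows; the name `MayOwnPtr` and all details are hypothetical, not the actual CMSSW class. It pairs a raw pointer with an ownership flag and deletes the pointee only when it owns it, so hits already stored in the event can be referenced for free while hits created on demand are cleaned up automatically.

```cpp
#include <cassert>
#include <utility>

// Hypothetical sketch of a lightweight "may own" smart pointer: a raw pointer
// plus an ownership flag. The pointee is deleted only when own_ is true.
template <typename T>
class MayOwnPtr {
public:
  MayOwnPtr() = default;
  MayOwnPtr(T const* p, bool own) : ptr_(p), own_(own) {}

  // move-only, so ownership is never duplicated
  MayOwnPtr(MayOwnPtr const&) = delete;
  MayOwnPtr& operator=(MayOwnPtr const&) = delete;
  MayOwnPtr(MayOwnPtr&& other) noexcept : ptr_(other.ptr_), own_(other.own_) {
    other.ptr_ = nullptr;
    other.own_ = false;
  }
  MayOwnPtr& operator=(MayOwnPtr&& other) noexcept {
    if (this != &other) {
      if (own_) delete ptr_;
      ptr_ = other.ptr_;
      own_ = other.own_;
      other.ptr_ = nullptr;
      other.own_ = false;
    }
    return *this;
  }

  ~MayOwnPtr() {
    if (own_) delete ptr_;
  }

  T const* get() const { return ptr_; }
  T const& operator*() const { return *ptr_; }
  T const* operator->() const { return ptr_; }

private:
  T const* ptr_ = nullptr;
  bool own_ = false;
};
```

With this, the cache in SeedingLayerSetsHits can hold non-owning entries for hits already in the event and owning entries for the on-demand clones, without a second container.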
Famos:
Famos hits needed to be migrated as well to inherit from BasicTrackerRecHit. The migration has been completed.

Tests show no regression. Valgrind shows no memory leak.

runTheMatrix.py --useInput=all --list=limited:
7 7 6 6 4 tests passed, 0 0 0 0 0 failed

full matrix:
242 226 192 128 16 tests passed, 0 5 0 0 0 failed

The 5 failures are not related to reco (EOS, parameter set).

At the moment several questions are still open, so some shortcuts have been taken to avoid migrations in other parts of the code that may not be needed in the end. In particular I avoided any backward-incompatible change to configuration. A cleanup is required before going into production.
Before a final cleanup and stabilization of the code and calling sequences we need to:

  1. review the status of all the Seeders: remove the obsolete ones, and migrate those still pending to the SeedingLayerSetsHits interface;
  2. clean the code of the old on-demand mechanism;
  3. decide whether we need to support Famos fake clusters more generally (in OmniClusterRef);
  4. make MeasurementDet able to produce standard RecHits on the fly.

The next step shall be to clean the code of the old on-demand mechanism (LazyGetter).
Once this is done, the migration of the MeasurementDets to return RecHits can start.
This will then lead to the removal of TTRH from Ctf as well.
We also need to look into whether we can reduce the size of the Famos GSTkHits:
any by-value storage is dominated by the largest possible object, and at the moment the Famos hits are by far the largest...
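The size argument can be illustrated with a toy example; the hit types below are made up, not the actual CMSSW or Famos classes. A by-value slot, as in a union- or variant-style container, must be at least as large as its largest alternative, so every stored hit pays for the biggest possible one.

```cpp
#include <cassert>
#include <cstddef>

// Made-up hit types, for illustration only.
struct SmallHit {
  float x, y;  // a compact hit: 8 bytes
};

struct BigFamosLikeHit {
  double pos[3];
  double err[6];
  char info[32];  // a much larger hit
};

// A by-value slot able to hold either type, as in a union/variant-style
// container: its size is driven by the largest alternative, so a vector of
// HitSlot pays the large size for every element, even if most are SmallHit.
union HitSlot {
  SmallHit small;
  BigFamosLikeHit big;
};

static_assert(sizeof(HitSlot) >= sizeof(BigFamosLikeHit),
              "each slot is at least as large as the largest hit type");
```

This is why shrinking the largest hit type (here the Famos one) directly shrinks the whole container.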

@VinInn
Contributor Author

VinInn commented Mar 19, 2014

The standard DQM for 5.1
https://cmssdt.cern.ch/SDT/jenkins-artifacts/baseLineComparisons/CMSSW_7_1_X_2014-03-14-0200+2884/1703/5.1_TTbar+TTbarFS+HARVESTFS/comparisonResults/

showed no regression whatsoever.
Given Slava's plots, I wonder how.

I ran 5.1_TTbar+TTbarFS+HARVESTFS for 1000 events, and
I now have a new step1_inDQM.root and DQM_V0001_R000000001__Global__CMSSW_X_Y_Z__RECO.root.

What do I do with those?
Is there any script to dig into them and produce a summary and PNGs?

@lveldere
Contributor

Thanks Vincenzo

That's the workflow I would suggest you run.
I wish I knew how to produce such a summary.
I'll try to find that out somewhere soon, O(days - weeks).

In the meantime:
if you can produce some comparison plots before vs. after w/o too much hassle, great.
If not, I'll approve and we wait for the relvals.

Lukas

@lveldere
Contributor

+1

Vincenzo showed me the validation in private.
Thanks!

@VinInn
Contributor Author

VinInn commented Mar 19, 2014

I compared the output of 5.1 w.r.t. the latest IB.
No regression.

@diguida
Contributor

diguida commented Mar 19, 2014

+1
AlCa code marginally touched: TTRH are replaced with TRH in two member functions' signatures.

@slava77
Contributor

slava77 commented Mar 20, 2014

+1

for #2884 d0595eb

The substantive regression in FastSim is gone.

There are some epsilon-level changes in the HLT DQM plots, all apparently from a single track out of ~1k changing its parameters. Too small to worry about.

@cmsbuild
Contributor

This pull request is fully signed and it will be integrated in one of the next CMSSW_7_1_X IBs unless changes (tests are also fine). @nclopezo, @ktf can you please take care of it?

ktf added a commit that referenced this pull request Mar 20, 2014
Reco speedup -- Tk seeding
@ktf ktf merged commit 77a62b2 into cms-sw:CMSSW_7_1_X Mar 20, 2014
@VinInn VinInn deleted the TkSeeding branch April 21, 2017 11:48