
Fix FE chip stub truncation in TTStubBuilder #30648

Merged
merged 7 commits into cms-sw:master on Jul 28, 2020

Conversation

@tomalin (Contributor) commented Jul 11, 2020

This fixes several issues with the simulation of stub truncation in the front-end tracker chips in the TTStubBuilder. (N.B. Until this PR, truncation was always disabled by default, so these issues had no effect on official MC production.)

  1. It eliminates a bug that was introduced in CMSSW 11.0.
  2. It updates the list of which tracker modules have 10 Gb/s vs 5 Gb/s opto-link readout.
  3. It moves truncated stubs to the "StubRejected" collection (in case they are needed for debugging), whereas previously they were left in the "StubAccepted" collection, meaning they could incorrectly be used for L1 tracking (see the sketch below).
  4. It adds a new collection, "ClusterRejected", of the clusters associated with the "StubRejected" stubs.
  5. It cleans up and shortens the code.
  6. The Python cfg keeps FE stub truncation disabled by default.

N.B. At some point in the future, we should update this further to take the opto-link speed from the DB.
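To illustrate items 3 and 4, here is a minimal sketch of the routing idea; the names (routeStubs, maxStubsPerChip) and the plain std::vector containers are assumptions for illustration only, not the actual TTStubBuilder code, which works on the TTStub/TTCluster collections and the FE chip truncation rules.

#include <cstddef>
#include <vector>

// Hypothetical helper: stubs up to the FE limit go to the "StubAccepted"
// output; the rest are truncated and kept in "StubRejected" so they can be
// inspected, but are never passed to the L1 tracking.
template <typename Stub>
void routeStubs(const std::vector<Stub>& stubsOnChip,
                std::size_t maxStubsPerChip,
                std::vector<Stub>& stubAccepted,
                std::vector<Stub>& stubRejected) {
  for (std::size_t i = 0; i < stubsOnChip.size(); ++i) {
    if (i < maxStubsPerChip)
      stubAccepted.push_back(stubsOnChip[i]);   // read out by the FE chip
    else
      stubRejected.push_back(stubsOnChip[i]);   // truncated: kept for debugging only
  }
}

In the same pass, the clusters associated with the rejected stubs would be copied into the new "ClusterRejected" collection.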

PR validation:
Stubs were created in a multithreaded job and L1 tracking was run on them.

@cmsbuild (Contributor)

@tomalin, the CMSSW_11_2_X branch is closed for direct updates. cms-bot is going to move this PR to the master branch.
In the future, please use the cmssw master branch to submit your changes.

@cmsbuild (Contributor)

The code-checks are being triggered in jenkins.

@cmsbuild (Contributor)

+code-checks

Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-30648/16926

@cmsbuild (Contributor)

A new Pull Request was created by @tomalin (Ian Tomalin) for master.

It involves the following packages:

L1Trigger/TrackTrigger
SimTracker/TrackTriggerAssociation

@benkrikler, @civanch, @kpedro88, @cmsbuild, @rekovic, @mdhildreth can you please review it and eventually sign? Thanks.
@makortel, @mtosi, @GiacomoSguazzoni, @JanFSchulte, @rovere, @VinInn, @Martin-Grunewald, @sviret, @ebrondol, @threus, @dgulhan this is something you requested to watch as well.
@silviodonato, @dpiparo you are the release manager for this.

cms-bot commands are listed here

@kpedro88 (Contributor)

please test

@cmsbuild (Contributor) commented Jul 11, 2020

The tests are being triggered in jenkins.

@cmsbuild (Contributor)

+1
Tested at: c172369
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-7bddd0/7881/summary.html
CMSSW: CMSSW_11_2_X_2020-07-11-1100
SCRAM_ARCH: slc7_amd64_gcc820

@cmsbuild (Contributor)

Comparison job queued.

@cmsbuild (Contributor)

Comparison is ready
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-7bddd0/7881/summary.html

Comparison Summary:

  • No significant changes to the logs found
  • Reco comparison results: 0 differences found in the comparisons
  • DQMHistoTests: Total files compared: 37
  • DQMHistoTests: Total histograms compared: 2787429
  • DQMHistoTests: Total failures: 80
  • DQMHistoTests: Total nulls: 0
  • DQMHistoTests: Total successes: 2787299
  • DQMHistoTests: Total skipped: 50
  • DQMHistoTests: Total Missing objects: 0
  • DQMHistoSizes: Histogram memory added: 0.0 KiB (36 files compared)
  • Checked 154 log files, 17 edm output root files, 37 DQM output files

L1Trigger/TrackTrigger/plugins/TTStubBuilder.cc (review thread: outdated, resolved)
L1Trigger/TrackTrigger/plugins/TTStubBuilder.cc (review thread: outdated, resolved)
bool lowerOK = false;
bool upperOK = false;

for (clusterIter = lowerClusters.begin(); clusterIter != lowerClusters.end() && !lowerOK; ++clusterIter) {
Contributor:

range-based loop

Contributor Author (tomalin):

The call to edmNew::makeRefTo() (defined in DetSetNew.h) inside this loop requires an iterator to the cluster as its argument. The explicit iterator loop provides this, whereas a C++11 range-based loop would not.
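As a generic illustration of this point (not the actual CMSSW code; makeIndexRef below is a hypothetical stand-in for edmNew::makeRefTo()): an API that takes an iterator into the container cannot be fed from a range-based loop, which only exposes the dereferenced element.

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical stand-in for edmNew::makeRefTo(handle, iter): it builds a
// "reference" from the container and an iterator into it, so it needs the
// iterator itself, not just the element value.
template <typename Container>
std::size_t makeIndexRef(const Container& c, typename Container::const_iterator it) {
  return static_cast<std::size_t>(it - c.begin());
}

int main() {
  std::vector<int> clusters{11, 22, 33};

  // Explicit iterator loop: the iterator is available to pass to makeIndexRef.
  for (auto it = clusters.cbegin(); it != clusters.cend(); ++it) {
    std::cout << "ref index " << makeIndexRef(clusters, it) << '\n';
  }

  // A range-based loop (for (const auto& c : clusters)) would only expose the
  // element value, not the iterator, so makeIndexRef could not be called
  // without recomputing the iterator (e.g. via std::find).
}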

}
}

for (clusterIter = upperClusters.begin(); clusterIter != upperClusters.end() && !upperOK; ++clusterIter) {
Contributor:

range-based loop

Contributor Author (tomalin):

Identical response as to the previous comment, #30648 (comment).

Contributor:

@tomalin, I have just signed the PR, but thinking about it more, I would say this loop is potentially dangerous, because "upperClusters.end() && !upperOK" may never happen. If I am wrong and it always happens, why check "!upperOK" at each iteration? If the aim is to interrupt the loop early, a "break" inside the loop would do it.
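For comparison, a minimal sketch of the two loop forms being discussed; the container contents and the matching test are placeholders, not the real TTStubBuilder logic.

#include <iostream>
#include <vector>

int main() {
  std::vector<int> upperClusters{3, 7, 42, 9};

  // Flag-in-condition form, as in the quoted loop: the loop stops either at
  // end() or once upperOK has been set by the body.
  bool upperOK = false;
  for (auto it = upperClusters.begin(); it != upperClusters.end() && !upperOK; ++it) {
    if (*it == 42) upperOK = true;  // stand-in for the real matching test
  }

  // Equivalent form using break, which makes the early exit explicit and
  // avoids re-testing the flag on every iteration.
  bool upperOK2 = false;
  for (auto it = upperClusters.begin(); it != upperClusters.end(); ++it) {
    if (*it == 42) {
      upperOK2 = true;
      break;
    }
  }

  std::cout << upperOK << ' ' << upperOK2 << '\n';  // both true
}

One caveat if refactoring: the two forms leave the iterator in different positions after the loop (the flag form performs one more increment before the condition is re-checked), which could matter if the iterator is used after the loop.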

L1Trigger/TrackTrigger/plugins/TTStubBuilder.cc (review thread: outdated, resolved)
@civanch (Contributor) commented Jul 21, 2020

+1

@makortel (Contributor)

The truncation depends on the number of stubs in the previous 8 events. Since multithreading can change the order of the events, this meant that, with truncation enabled, running the same job twice could give different results.

I just want to comment that in simulation, even with one thread, adjacent events do not correspond to adjacent bunch crossings (assuming "the previous 8 events" really means "the previous 8 bunch crossings"), so simulating such an effect would require a different approach (one that, if I recall correctly, is already used by other subdetectors).

@tomalin (Contributor, Author) commented Jul 22, 2020

@makortel If you can remember anywhere else in CMS where the hit readout efficiency in one bunch crossing depends on the number of hits in the previous few bunch crossings, and where this was simulated in a robust fashion, please let me know. We've never found a good way of doing this, though any such improvement would be for a future PR, not for this one.

@VinInn (Contributor) commented Jul 22, 2020

I do not think that simhit/digi merging is affected by multi-threading.
We simulate events passing a trigger: previous events do not mean previous bunches.
All of this should be managed at the hit/digi merging step.

@davidlange6 (Contributor) commented Jul 22, 2020 via email

@makortel (Contributor)

Pixels do this I believe. Anyway the previous few bunch crossings are available within the mixing module in digis (iirc).

My recollection points to pixels as well. Anyway the digitizers in MixingModule have access to the full pattern of SimHits from the simulated out-of-time pileup bunch crossings (that is, IIRC, -12..+3 for Runs 1-3 and -3...+3 for Run 4).

@kpedro88 (Contributor)

Indeed, the correct approach is using the previous bunch crossings provided during pileup mixing.

There are some examples of accessing bunch crossing information from the PileupSummaryInfo object here: https://github.com/cms-sw/cmssw/blob/master/SimTracker/SiPixelDigitizer/plugins/SiPixelDigitizerAlgorithm.cc
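For reference, a minimal sketch of reading the per-bunch-crossing pileup counts, assuming the standard PileupSummaryInfo accessors (getBunchCrossing(), getPU_NumInteractions()); the function name and the puToken_ argument are illustrative, and the real code would obtain the token via consumes() in a module constructor.

#include <vector>
#include "FWCore/Framework/interface/Event.h"
#include "FWCore/Utilities/interface/EDGetToken.h"
#include "SimDataFormats/PileupSummaryInfo/interface/PileupSummaryInfo.h"

// Hypothetical free function, for illustration only.
void inspectPileup(const edm::Event& event,
                   const edm::EDGetTokenT<std::vector<PileupSummaryInfo>>& puToken_) {
  edm::Handle<std::vector<PileupSummaryInfo>> puInfo;
  event.getByToken(puToken_, puInfo);

  for (const auto& pu : *puInfo) {
    const int bx = pu.getBunchCrossing();  // 0 = in-time, negative = earlier bunch crossings
    if (bx < 0) {
      // Number of pileup interactions in that earlier bunch crossing; an
      // average-occupancy truncation model could be driven by this.
      const int nPU = pu.getPU_NumInteractions();
      (void)nPU;
    }
  }
}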

@mdhildreth (Contributor) commented Jul 23, 2020

Hi All - Unfortunately, to make this work properly, you actually need to make digis and then run the code that generates the L1 Track Trigger primitives (stubs) for all of the preceding out-of-time bunch crossings, assuming that the sim hits in those crossings are actually "in-time" as far as the electronics are concerned. This breaks PreMixing, for example, since all of the sim hits would have to be saved in the PreMixed files. Just accessing the average occupancy in previous bunch crossings from the PileupSummaryInfo object doesn't provide enough information.

We could do this in a stand-alone mode, which is the eventual plan. For production, I think the only way to make this work is to just have an average truncation probability based on expected occupancy, like the Pixel code.
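A minimal sketch of what such an average-truncation model could look like; this is an assumption for illustration, not an agreed implementation, and in CMSSW the random engine would come from the RandomNumberGenerator service so that results stay reproducible in multithreaded jobs.

#include <random>
#include <vector>

struct Stub { /* payload omitted for the sketch */ };

// Drop each stub with a fixed probability derived from the expected FE
// occupancy, instead of tracking the true occupancy of the preceding
// bunch crossings.
std::vector<Stub> applyAverageTruncation(const std::vector<Stub>& stubs,
                                         double truncationProbability,
                                         std::mt19937& rng) {
  std::bernoulli_distribution drop(truncationProbability);
  std::vector<Stub> accepted;
  accepted.reserve(stubs.size());
  for (const auto& s : stubs) {
    if (!drop(rng))
      accepted.push_back(s);  // keep the stub unless it is randomly truncated
  }
  return accepted;
}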

@kpedro88 (Contributor)

Does that imply that the primitives/stubs generation should be run during premix stage 1 and provide a collection of out-of-time stubs that should be consumed during premix stage 2?

@mdhildreth (Contributor)

But then you don't have the overlap between pileup stubs and the hard scatter...

@mdhildreth (Contributor)

Oh, wait, I see what you are suggesting - make all of the out-of-time stubs but do the proper PreMixing in-time. Messy, but possible. It probably blows up the PreMix event size.

@kpedro88 (Contributor)

I see, so the overlap at the SimHit level is important... that may require a custom "premix stub" format that stores more/different information (as we've done with HGCal etc.). Is there specific information that is needed vs. other information that can be discarded? (Even at the expense of some accuracy by making approximations)

@kpedro88 (Contributor)

Would it blow the event size more than storing all the truth particles (which we're currently doing, but hope to disable by default by the end of the year)?

@mdhildreth (Contributor)

I think the PreMix merging of the in-time hits is adequate as it is. For the out-of-time stubs, we just need counts and a momentum sorting.
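To make the idea concrete, here is a hypothetical reduced "premix stub" record along these lines; the names and fields are illustrative assumptions, not an agreed format. Per module and bunch crossing it keeps only the stub count and the stub pT values, sorted so that a truncation cut can later keep the highest-pT stubs.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

struct PremixStubSummary {
  std::uint32_t moduleId = 0;
  int bunchCrossing = 0;         // e.g. -8..-1 for the preceding crossings
  std::vector<float> stubPt;     // kept sorted, highest pT first

  void addStub(float pt) {
    stubPt.push_back(pt);
    std::sort(stubPt.begin(), stubPt.end(), std::greater<float>());
  }

  std::size_t count() const { return stubPt.size(); }
};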

@mdhildreth (Contributor)

Not clear on event size. It would be some fraction of the Outer Tracker occupancy, multiplied by 8.

@mdhildreth (Contributor)

Before we go down the road of trying to do this in production, I would like to make a hacked-up standalone version and see what the effects really are. We have some estimates from previous studies that the losses are relatively small.

@rekovic (Contributor) commented Jul 28, 2020

+1

@cmsbuild (Contributor)

This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will now be reviewed by the release team before it's merged. @silviodonato, @dpiparo, @qliphy (and backports should be raised in the release meeting by the corresponding L2)

@silviodonato (Contributor)

+1
@tomalin there is a comment that still needs to be addressed/implemented:
#30648 (comment)

@cmsbuild cmsbuild merged commit 9fdf2e6 into cms-sw:master Jul 28, 2020