
Heavy Ions MVA track selection + pt dependent pixel track selection #9032

Merged
7 commits merged on May 17, 2015

Conversation

istaslis
Contributor

Addition of MVA track selection for heavy ions.
As agreed with @VinInn, this was done by implementing a separate version of the multi-track selector in RecoHI/HiTracking.
We should remerge with the pp version in the future, once the pp version allows a more flexible variable selection (a planned future development).
The MVA track selection is disabled by default, as the calibrations are not yet available in the GT.
The calibrations are uploaded and the GT has been requested, so we should be able to enable this feature shortly, which we propose to do in a separate PR (this can be discussed at the ORP tomorrow).

This PR also includes the pT-dependent pixel track selection cuts implemented in the now-closed PR #9017, as agreed with @VinInn.
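[Editor's note] For readers unfamiliar with pT-dependent cuts, here is a minimal sketch in plain Python of how a pixel-track selection threshold can be binned in pT. This is not the actual CMSSW implementation; the bin edges, cut values, and dict-based track layout are all hypothetical, for illustration only.

```python
import bisect

# Hypothetical pT bin edges (GeV) and per-bin chi2/ndof thresholds;
# these values are illustrative, not the ones used in this PR.
PT_EDGES = [0.0, 1.0, 2.0, 5.0]
CHI2_CUTS = [2.0, 5.0, 9.0]  # one entry per bin between consecutive edges

def pt_dependent_cut(pt, edges=PT_EDGES, cuts=CHI2_CUTS):
    """Return the selection threshold for a track of transverse momentum pt.

    Tracks below the first edge use the first (tightest) cut; tracks
    above the last edge use the last (loosest) cut.
    """
    i = bisect.bisect_right(edges, pt) - 1
    return cuts[max(0, min(i, len(cuts) - 1))]

def select_pixel_tracks(tracks):
    """Keep tracks whose normalized chi2 passes the pt-dependent threshold."""
    return [t for t in tracks if t["chi2n"] < pt_dependent_cut(t["pt"])]
```

With these illustrative numbers, a 0.5 GeV track must satisfy chi2n < 2.0, while a 10 GeV track only needs chi2n < 9.0, i.e. the cut tightens at low pT where fakes dominate.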

@cmsbuild
Contributor

A new Pull Request was created by @istaslis (Stas Lisniak) for CMSSW_7_5_X.

Hi MVA track selector 75X rebase

It involves the following packages:

RecoHI/HiTracking

@cmsbuild, @cvuosalo, @nclopezo, @slava77 can you please review it and eventually sign? Thanks.
@echapon, @jazzitup, @dgulhan, @appeltel, @yenjie, @kurtejung, @mandrenguyen, @richard-cms, @yetkinyilmaz this is something you requested to watch as well.
You can sign-off by replying to this message having '+1' in the first line of your reply.
You can reject by replying to this message having '-1' in the first line of your reply.
If you are a L2 or a release manager you can ask for tests by saying 'please test' in the first line of a comment.
@nclopezo you are the release manager for this.
You can merge this pull request by typing 'merge' in the first line of your comment.

@cmsbuild
Contributor

-1
Tested at: d916c06
You can see the results of the tests here:
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-9032/20/summary.html

The following merge commits were also included on top of IB + this PR after doing git cms-merge-topic:
0c17f04
bf6ed20
You can see more details here:
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-9032/20/git-log-recent-commits
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-9032/20/git-merge-result

@appeltel
Contributor

I don't believe that the test error is related to the PR. I saw similar
DAS errors today when running "runTheMatrix.py -s" on unrelated branches, and
these went away after retesting.

-Eric


@mandrenguyen
Contributor

@cmsbuild please test
Let's see if I have the rights to trigger a retest

@slava77
Contributor

slava77 commented May 12, 2015

@cmsbuild please test

@cmsbuild
Contributor

The tests are being triggered in jenkins.

@slava77
Contributor

slava77 commented May 16, 2015

... I'm checking the PR

I looked at the code, comparing with the original MultiTrackSelector.cc and didn't find anything particularly out of order.
Now to check runtime.

From the shorter jenkins run on data (only 140.53 is in the matrix):

  • there is an increase in low-pt tracks
    all_oldvsnew_runhi2011wf140p53c_log10recotracks_higeneraltracks__rereco_obj_p
  • initial step yield decreases, while lowPtTriplet is up by ~30% and detachedTriplet up by about 50%
    all_oldvsnew_runhi2011wf140p53c_log10recotracks_higeneraltracks__rereco_obj_pt
    all_oldvsnew_runhi2011wf140p53c_recotracks_higeneraltracks__rereco_obj_algo
  • major increase (×4) in 8-9-hit tracks
    all_oldvsnew_runhi2011wf140p53c_recotracks_higeneraltracks__rereco_obj_found

Since this is data, there is no way to tell whether these are fakes or real tracks.

@mandrenguyen
Contributor

@slava77
In the initial step of HI tracking, all reconstructed tracks were kept (prior to this PR) for backwards compatibility.
As we added further iterations, we decided to use the 'loose' bit to define what is kept in the output collection (keepAllTracks = False).
With the new MVA track selection we now apply a uniform loose selection to all steps, which removes tracks considered unrecoverable in central events for any reasonable track selection. The effect on the efficiency was checked to be on the order of 1%.
The net effect on the total output size is rather small because, as you saw, there are fewer tracks in the initial step and more from the subsequent steps.
The tracks that are added will actually increase the efficiency by a fair amount once the MVA selection is activated.
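[Editor's note] The keepAllTracks logic described above can be sketched in plain Python. The bit values and dict-based track layout here are hypothetical; CMSSW stores similar named quality flags (loose, tight, highPurity) on reco::Track.

```python
# Hypothetical quality-bit encoding, for illustration only.
LOOSE, TIGHT, HIGH_PURITY = 1 << 0, 1 << 1, 1 << 2

def output_collection(tracks, keep_all_tracks=False):
    """With keep_all_tracks=False, only tracks carrying at least the
    loose quality bit survive into the output collection; with
    keep_all_tracks=True, everything is kept (the pre-PR behavior
    of the initial step)."""
    if keep_all_tracks:
        return list(tracks)
    return [t for t in tracks if t["quality"] & LOOSE]
```

In this sketch, a track that fails every quality level ("quality": 0) is dropped from the output unless keep_all_tracks is set, which mirrors the behavior change described for the initial step.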

@slava77
Contributor

slava77 commented May 16, 2015

Based on workflow 140.0 (min-bias PbPb events):

High-purity selections show an increase in efficiency:
wf140_hp_algo
wf140_hp_eff_vs_pt
The fake rate roughly doubles:
wf140_hp_fake_vs_eta
wf140_hp_fake_vs_pt

For loose tracks, the efficiency and the fake rate are about the same, despite a noticeable change in the algo values:
wf140_reco_algo
cf. all hiGeneralTracks:
all_sign550histatvsorig-9000histat_hydjetqminbiaswf140p0c_recotracks_higeneraltracks__reco_obj_algo

@slava77
Contributor

slava77 commented May 16, 2015

Technical performance numbers based on 100 events in 140.53:

  • event sizes increase, starting from more tracks, which contribute to objects downstream:
    • 9% more hiGeneralTracks
    • 5.4% more muons
    • 6% more particleFlow candidates
    • 6 to 12% more akVs PF jets
    • 2.3% increase in RECO size
    • 2.5% increase in AOD size
  • CPU consumption is up globally by about 14%, driven primarily by increases in particleFlowEGamma and voronoiBackgroundPF. In the order of execution, the increase starts mainly at the track merging step, with smaller increases/decreases downstream (not shown: the iterative step times are down by ~10% in the pixel pair and low-pt triplet steps, with no noticeable increases in other steps).
        hiGeneralTracks          559.082 ms/ev -> 640.416 ms/ev
        pfTrack          46.43 ms/ev -> 64.173 ms/ev
        pfTrackElec      916.089 ms/ev -> 1111.54 ms/ev
        particleFlowBlock        9107.72 ms/ev -> 9915.89 ms/ev
        particleFlowEGamma       10588.7 ms/ev -> 14557 ms/ev
        particleFlowTmp          1615.37 ms/ev -> 2163.43 ms/ev
        voronoiBackgroundPF      8097.48 ms/ev -> 12685.9 ms/ev
 Total times (all reco step modules): 71511.4 ms/ev -> 81320.1 ms/ev     delta: 9808.7 ms/ev
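[Editor's note] As a sanity check on the figures above, the relative increases can be recomputed from the quoted per-module timings with a quick Python sketch; the numbers below are copied verbatim from the comment.

```python
# Per-module timings quoted above, in ms/event (before -> after).
timings = {
    "hiGeneralTracks":     (559.082, 640.416),
    "pfTrack":             (46.43, 64.173),
    "pfTrackElec":         (916.089, 1111.54),
    "particleFlowBlock":   (9107.72, 9915.89),
    "particleFlowEGamma":  (10588.7, 14557.0),
    "particleFlowTmp":     (1615.37, 2163.43),
    "voronoiBackgroundPF": (8097.48, 12685.9),
}

def pct_increase(before, after):
    """Relative increase in percent."""
    return 100.0 * (after - before) / before

# Global increase across all reco-step modules: ~13.7%,
# consistent with the quoted "about 14%".
global_pct = pct_increase(71511.4, 81320.1)
```

The per-module numbers confirm that voronoiBackgroundPF (up ~57%) and particleFlowEGamma (up ~37%) dominate the CPU increase.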

@slava77
Contributor

slava77 commented May 16, 2015

+1

for #9032 d916c06

  • tested locally in CMSSW_7_5_0_pre4 (test area sign550), cherry-picked
  • changes appear only in HI workflows, as expected
  • changes in HI workflows observed in 140.53 and 140.0 appear to be as expected (although the increase in CPU likely puts a more significant squeeze on the HI program)

@cmsbuild
Contributor

This pull request is fully signed and it will be integrated in one of the next CMSSW_7_5_X IBs unless changes (tests are also fine). This pull request requires discussion in the ORP meeting before it's merged. @davidlange6, @nclopezo, @smuzaffar

@davidlange6
Contributor

+1
