Use shrink_to_fit on the PFRecHit collections #9106
Conversation
A new Pull Request was created by @mark-grimes (Mark Grimes) for CMSSW_6_2_X_SLHC: Use shrink_to_fit on the PFRecHit collections. It involves the following packages: RecoParticleFlow/PFClusterProducer. @cmsbuild, @cvuosalo, @nclopezo, @slava77 can you please review it and eventually sign? Thanks.
@mark-grimes Can you also post a plot of RSS usage, as you have for VmSize?
Fair point. But RSS is also the thing that jobs get killed for, so it's useful to keep track of it.
@fratnikov, don't merge this one yet. I'm testing using edm::RunningAverage, backported from 75X, to reserve the collection size. I will probably add some more commits to this PR.
#9135 backports RunningAverage and applies the reserve guess. Since there could be a long discussion of the peak/retained memory trade-off of shrink_to_fit, I thought it was better to keep them as separate PRs.
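For illustration, here is a minimal stand-in for the reserve-guess idea (this is a simplified sketch, not the actual edm::RunningAverage API): a running mean of recent collection sizes lets each event `reserve()` close to the eventual size up front, avoiding the repeated geometric reallocations that inflate peak memory.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for edm::RunningAverage: tracks the mean of the
// collection sizes seen so far, so the next event can pre-allocate.
class RunningAverage {
public:
  void update(std::size_t n) {
    sum_ += n;
    ++count_;
  }
  std::size_t mean() const { return count_ ? sum_ / count_ : 0; }

private:
  std::size_t sum_ = 0;
  std::size_t count_ = 0;
};

// Per-event pattern (hypothetical producer body): reserve the guessed
// size, fill the collection, then record the actual size so the guess
// tracks the data over subsequent events.
std::vector<int> produceHits(RunningAverage& avg, std::size_t nThisEvent) {
  std::vector<int> hits;
  hits.reserve(avg.mean());  // first event reserves 0; later events guess well
  for (std::size_t i = 0; i < nThisEvent; ++i)
    hits.push_back(static_cast<int>(i));
  avg.update(hits.size());
  return hits;
}
```

The trade-off mentioned above: a good reserve guess lowers the *peak* allocation during filling, while shrink_to_fit lowers the *retained* allocation after filling; the two are complementary.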
Force-pushed from 84744ed to 0097034.
Rebased because of conflicts introduced by #9135.
merge
Uses shrink_to_fit in PFRecHitProducer as suggested by @lgray.
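The pattern being applied can be sketched as follows (using a hypothetical `RecHit` struct rather than the real reco::PFRecHit): a vector that grows by push_back ends the event with capacity well above its size, and shrink_to_fit requests that the surplus be released before the collection is kept for the rest of the event.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a PFRecHit; the capacity behaviour shown
// here is the same for any element type.
struct RecHit {
  float energy;
};

// Sketch of the produce()-style pattern: the vector grows geometrically
// while hits are pushed, so its capacity can be up to ~2x the final size.
std::vector<RecHit> buildHits(std::size_t n) {
  std::vector<RecHit> hits;
  for (std::size_t i = 0; i < n; ++i)
    hits.push_back({1.f});
  hits.shrink_to_fit();  // trim capacity down to size before keeping it
  return hits;
}
```

Note that shrink_to_fit is formally a non-binding request in the C++ standard, but the mainstream implementations do reallocate to the exact size, which is what reduces the retained memory of the stored collections here.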
Memory size of the produced collections in SLHC25_patch6 (top 25 only):
Same again after applying this PR and #9096:
Looking at the VmSize there appears to be an improvement. The plot is hard to read because the x-axis is approximately time, and the two jobs didn't run at the same speed. Each undulating peak is an event: VmSize rises as control runs through the modules and then drops at the start of the next event. The red curve (this PR, #9096 and #9084) is a little faster (I wouldn't read anything into that), so its peaks are a little ahead of the blue (SLHC25_patch6). The important thing is that if you match the corresponding peaks, the red is ~100 MB lower. The vast majority of that comes from this pull request rather than #9096 or #9084.
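For reference, the quantities compared here (VmSize and the RSS requested above) correspond to the `VmSize:` and `VmRSS:` fields of `/proc/self/status` on Linux; a minimal sketch of sampling them in-process (assuming a Linux host, as for the measurements above):

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Read a memory figure (e.g. "VmSize" or "VmRSS") in kB from
// /proc/self/status; returns 0 if the field is absent (e.g. non-Linux).
long readProcStatusKb(const std::string& key) {
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line)) {
    if (line.compare(0, key.size(), key) == 0 && line[key.size()] == ':') {
      std::istringstream fields(line.substr(key.size() + 1));
      long kb = 0;
      fields >> kb;  // the value is followed by the unit string "kB"
      return kb;
    }
  }
  return 0;
}
```

Sampling both fields at module boundaries is enough to reproduce the undulating per-event curves described above; VmRSS is the figure batch systems typically enforce limits on.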