memory reduction most visible for HI processing, done mostly by dropping muon isodeposits in HI #12764
Conversation
A new Pull Request was created by @slava77 (Slava Krutelyov) for CMSSW_7_5_X.

It involves the following packages: DQM/SiPixelMonitorTrack

@cvuosalo, @cmsbuild, @deguio, @slava77, @vanbesien, @davidlange6 can you please review it and eventually sign? Thanks.
@cmsbuild please test
The tests are being triggered in jenkins. |
-1 Tested at: e66f643. Test runtestRecoEgammaPhotonIdentification had ERRORS; you can see the results of the tests here:
This unit test failure is also in the IB.
+1 for #12764 e66f643
memory reduction most visible in HI processing, done mostly by dropping muon isodeposits in HI (same as #12764)
Originally, as a follow-up to cmsRun1-68 from
https://hypernews.cern.ch/HyperNews/CMS/get/recoDevelopment/1405.html
(I)
Using the heavier events from the report and igprof -mp, the muon isodeposits were found to contribute up to 400 MB/event in memory (at least an order of magnitude less is expected for more normal HI events, and much less for pp events).
It turns out that the isodeposits put in the event are not used anywhere in HI workflows.
So, they are dropped.
This has the largest "long-term" impact (from the time the muon id module runs to the end of the event).
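To illustrate why dropping an unused per-event product removes this long-term footprint, here is a minimal, self-contained C++ sketch; it is not the actual CMSSW change, and all type and flag names (IsoDepositMap, writeIsoDeposits) are invented stand-ins for the real isodeposit value maps and their configuration.

```cpp
// Schematic illustration (hypothetical types, not CMSSW code): a per-event
// product that is expensive to build and kept alive until the end of the event
// can simply be skipped when no downstream consumer reads it.
#include <cstddef>
#include <cstdio>
#include <optional>
#include <vector>

struct IsoDepositMap {                     // stand-in for the isodeposit value maps
  std::vector<float> depositsPerMuon;      // can reach hundreds of MB in the heaviest HI events
};

struct Event {                             // stand-in for the framework event
  std::optional<IsoDepositMap> isoDeposits;  // lives until the event is destroyed
};

// 'writeIsoDeposits' mimics a configuration flag: HI workflows, where nothing
// consumes the isodeposits, would run with it set to false.
void produceMuonId(Event& ev, bool writeIsoDeposits, std::size_t nMuons) {
  // ... muon identification itself is unchanged ...
  if (writeIsoDeposits) {
    IsoDepositMap map;
    map.depositsPerMuon.resize(nMuons * 1000);  // large allocation, kept until end of event
    ev.isoDeposits = std::move(map);
  }
  // With the flag off, the long-term (until end-of-event) memory cost disappears.
}

int main() {
  Event ppEvent, hiEvent;
  produceMuonId(ppEvent, /*writeIsoDeposits=*/true, 10);
  produceMuonId(hiEvent, /*writeIsoDeposits=*/false, 5000);  // HI: product dropped
  std::printf("pp event stores isodeposits: %d\n", ppEvent.isoDeposits.has_value());
  std::printf("HI event stores isodeposits: %d\n", hiEvent.isoDeposits.has_value());
}
```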
(II)
In addition, modules with fairly large memory churn (MEM_TOTAL) were cleaned somewhat trivially, without changing the algorithm logic.
The net effect on the large events was about 15-20% reduction in allocations in stream::EDProducer and stream::EDAnalyzer modules.
These contributions are only short-term (shorter than the module's ::produce time).
The optimizations here also substantially speed up the modified modules.
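As an illustration of the kind of trivial churn cleanup meant here, the sketch below hoists a temporary buffer out of an inner loop and reserves capacity up front, cutting per-event allocations without changing the result. This is a generic example with invented names (Hit, Cluster, summarize*), not the actual CMSSW diff.

```cpp
// Generic sketch of reducing per-event allocation churn (MEM_TOTAL) without
// changing the algorithm's output. Names are hypothetical, not CMSSW code.
#include <cstdio>
#include <vector>

struct Hit { float charge; };
struct Cluster { std::vector<Hit> hits; };

// Before: a fresh temporary vector is allocated (and regrown) for every cluster.
float summarizeChurny(const std::vector<Cluster>& clusters) {
  float total = 0.f;
  for (const auto& c : clusters) {
    std::vector<float> charges;                                  // allocation per cluster
    for (const auto& h : c.hits) charges.push_back(h.charge);    // repeated reallocation
    for (float q : charges) total += q;
  }
  return total;
}

// After: one buffer, hoisted out of the loop and reused; capacity reserved once per cluster.
float summarizeLean(const std::vector<Cluster>& clusters) {
  float total = 0.f;
  std::vector<float> charges;                                    // reused across clusters
  for (const auto& c : clusters) {
    charges.clear();                                             // keeps the capacity
    charges.reserve(c.hits.size());
    for (const auto& h : c.hits) charges.push_back(h.charge);
    for (float q : charges) total += q;
  }
  return total;
}

int main() {
  std::vector<Cluster> clusters(1000, Cluster{std::vector<Hit>(50, Hit{1.f})});
  std::printf("churny: %.0f, lean: %.0f\n", summarizeChurny(clusters), summarizeLean(clusters));
}
```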
Overall, for the job that failed at T0, the memory peak of the 8-thread configuration was reduced from 15.67 GiB to 14.88 GiB.
Tested in CMSSW_7_5_7_patch2: no differences were observed in the monitored quantities.