Issues related to FastTimerService and HLT #39756
A new Issue was created by @silviodonato (Silvio Donato). @Dr15Jones, @perrotta, @dpiparo, @rappoccio, @makortel, @smuzaffar: can you please review it and eventually sign/assign? Thanks. cms-bot commands are listed here.
assign hlt
New categories assigned: hlt. @missirol, @Martin-Grunewald: you have been requested to review this Pull request/Issue and eventually sign? Thanks
(Noted, I will try to understand this in the next few days, unless it is considered important to fix this asap.) FYI: @fwyzard
The reason is that I inquired about it a few months ago, and according to @mmusich's answer the solution would be to change the code to read that information from
Unfortunately I never had time to work on the changes :-/
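(The target of "read that information from" is cut off in the comment above, so the actual fix is not stated here. Purely as a hypothetical illustration of the kind of mismatch a factor-of-2 normalization bug suggests -- a machine-wide logical-core count vs the thread count a job is actually configured with -- here is a minimal standalone C++ sketch; all names and numbers are assumptions, and this is not CMSSW code:)

```cpp
#include <iostream>
#include <thread>

int main() {
  // std::thread::hardware_concurrency() reports the machine's logical cores
  // (0 if unknown); with hyperthreading this is typically 2x the physical cores.
  unsigned machine_logical_cores = std::thread::hardware_concurrency();
  unsigned configured_threads = 32;  // assumed: what the job itself was configured with
  if (machine_logical_cores > 0)
    std::cout << "normalization mismatch: "
              << double(machine_logical_cores) / configured_threads << "x\n";  // ~2x with HT
}
```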
@silviodonato keep in mind that as long as the CPU usage is below ~70%, it's almost like running without hyperthreading, so it would make sense to observe a timing roughly a factor of 2 (I'd expect 1.8x) faster than on a fully loaded machine.
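(A back-of-the-envelope check of that ~1.8x figure, as a minimal sketch: assuming hyperthreading adds roughly 10% total throughput at full load -- my assumption, and the core counts are made up -- a fully loaded hyperthreaded machine runs each thread at about half the speed it would get on an idle one:)

```cpp
#include <iostream>

int main() {
  // Assumptions for illustration only: 32 physical cores, 2-way hyperthreading,
  // ~10% total throughput gain from hyperthreading when fully loaded.
  double physical_cores = 32.0;
  double logical_cores = 2.0 * physical_cores;
  double full_load_throughput = 1.1 * physical_cores;  // in "core equivalents"
  // At full load each thread gets full_load_throughput / logical_cores of a core,
  // so per-event timing on a lightly loaded machine is faster by:
  std::cout << "expected factor: " << logical_cores / full_load_throughput << "x\n";  // ~1.8
}
```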
By the way, if anyone else makes the necessary changes, I would suggest also renaming these plots to
OK, some more information: I've re-run over the first lumisections of run 360459 (the same one as @silviodonato's plot), using similar conditions to what we have online (

Taking

If I zoom in on the DQM plot, we see that for the first 2-3 lumisections the CPU time measured on the HLT farm was indeed

I'm looking at the first two lumisections because at the beginning of the run the HLT had buffered some data while it was loading the application, starting the jobs, and getting the first conditions -- so the whole farm will run at maximum capacity until the buffer has been drained.

Keeping all these effects in mind, I would say that the online measurement is in very good agreement with an online-like measurement done under the same conditions 👍🏻

One last comment is about CPU time vs real (wall-clock) time: of course, what actually matters for keeping up with the L1 rate is the latter. The plot for the real time looks similar, just a bit higher:

Zooming in on the first lumisections shows a similar effect, with a peak for the first lumisection around

From my online-like measurement I get

So... the measurements done on the online machines reproduce the HLT timing measured online pretty accurately (better than I imagined before making this check). And the comparison between the timing value of
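(As an aside on how the two quantities differ in practice, here is a minimal standalone sketch -- standard C++ only, not CMSSW or FastTimerService code: CPU time advances only while the process is executing on a core, while real (wall-clock) time also counts time spent sleeping or waiting, which is consistent with the real-time plot sitting a bit above the CPU-time one:)

```cpp
#include <chrono>
#include <cmath>
#include <ctime>
#include <iostream>
#include <thread>

int main() {
  std::clock_t cpu_start = std::clock();
  auto wall_start = std::chrono::steady_clock::now();

  volatile double x = 0.0;
  for (int i = 0; i < 10000000; ++i)
    x = x + std::sqrt(double(i));  // busy work: advances both clocks
  std::this_thread::sleep_for(std::chrono::milliseconds(200));  // advances wall time only

  double cpu_ms = 1000.0 * double(std::clock() - cpu_start) / CLOCKS_PER_SEC;
  double wall_ms = std::chrono::duration<double, std::milli>(
                       std::chrono::steady_clock::now() - wall_start)
                       .count();
  std::cout << "cpu time:  " << cpu_ms << " ms\n"    // ~ the busy-work part only
            << "real time: " << wall_ms << " ms\n";  // busy work + 200 ms of sleep
}
```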
Thanks a lot @fwyzard! So
I will keep the issue open for issue 2) (which is not urgent).
Yes, but only for runs that already start in stable beams; otherwise there is very little to run.
Correct - and the difference is more significant for "real time" than for "cpu time".
To fix the empty plots, an attempt is in #39859.
+hlt |
This issue is fully signed and ready to be closed. |
I just want to report here two problems with FastTimerService and the HLT online DQM.
Here the timing seems off by a factor of 2: our offline measurements showed that the timing should be above 300 ms at high pileup.
@cms-sw/hlt-l2