Beamspot DIP server (normal DQM client) for 12_0_X #35193
Conversation
A new Pull Request was created by @sikler (Ferenc Siklér) for CMSSW_12_0_X. It involves the following packages:
@andrius-k, @kmaeshima, @ErnestaP, @ahmad3213, @cmsbuild, @jfernan2, @rvenditti can you please review it and eventually sign? Thanks. cms-bot commands are listed here
Thanks @sikler
@jfernan2 Sounds interesting. Is there some documentation on FakeBeamMonitor and related (twiki, presentation, or similar)?
For FakeBeamMonitor, in this same package,
BTW: @sikler
DQM/Integration/python/clients/beamspotdip_dqm_sourceclient-live_cfg.py (review comment outdated, resolved)
Hi @sikler ! About the FakeBeamMonitor: there is no documentation at the moment. It's a simple copy of the BeamMonitor, except that the BeamSpot values don't come from the fit to the events but are random numbers generated by TRandom. It was used to commission the per-lumi updates of the BeamSpot during LS2, when collisions were not available. I will have a look at this PR asap (next few days) and see if there is a way to use it for testing DIP.
Pull request #35193 was updated. @andrius-k, @kmaeshima, @ErnestaP, @ahmad3213, @cmsbuild, @jfernan2, @rvenditti can you please check and sign again.
startTime = bs.GetCreationTime();
startTimeStamp = bs.GetCreationTime();  // not provided by BeamSpotOnlineObject
endTime = bs.GetCreationTime();
endTimeStamp = bs.GetCreationTime();  // not provided by BeamSpotOnlineObject
What is the difference between startTime and startTimeStamp? (And the same for endTime.)
Maybe they can be added to the BeamSpotOnlineObject.
startTime is an std::string, while startTimeStamp is an epoch timestamp (number of seconds since 1970).
In principle startTime should contain the beginning of the processed lumisection range, and endTime should mark the end of the processed lumisection range.
Can you add those? Actually, the epoch could be calculated from CreationTime through mktime.
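To illustrate the mktime suggestion, here is a hedged sketch of converting a creation-time string to an epoch timestamp. The format string "%Y.%m.%d %H:%M:%S" is an assumption for illustration, not necessarily the format GetCreationTime() actually returns:

```cpp
#include <ctime>
#include <string>

// Hedged sketch: parse a creation-time string and convert it to epoch
// seconds via strptime + mktime. The format string below is an assumed
// example; the real GetCreationTime() format may differ.
std::time_t toEpoch(const std::string& s, const char* fmt = "%Y.%m.%d %H:%M:%S") {
  std::tm tm{};
  if (strptime(s.c_str(), fmt, &tm) == nullptr)
    return 0;               // parse failure
  tm.tm_isdst = -1;         // let mktime determine daylight saving time
  return std::mktime(&tm);  // interprets tm as local time
}
```

Note that mktime interprets the broken-down time as local time, so the machine's timezone setting matters when comparing against other epoch sources.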
The function GetCreationTime() for the BeamSpotOnlineObject returns the moment at which the payload was actually created (see these lines); it's not related to the LS analyzed.
I'll look into adding these parameters to the online beamspot.
DCSStatus = cms.untracked.string("scalersRawToDigi"),
#
sourceFile = cms.untracked.string(
    "/nfshome0/dqmpro/BeamMonitorDQM/BeamFitResults.txt"),
Ok, as I mentioned above, you are reading the txt file that is filled by the "HLT workflow" (beamhlt_dqm_sourceclient-live_cfg.py#L106), while the legacy workflow (beam_dqm_sourceclient-live_cfg.py#L96) is writing to
/nfshome0/dqmpro/BeamMonitorDQM/BeamFitResultsOld.txt
We should decide which one we want to publish, or at least add an option to make this configurable.
For example, for the Pilot Beam Test in October I think only the legacy workflow (which is fed by ZeroBias) will produce actual BeamSpot values.
Hi @francescobrivio this comes a bit out of context for me. What did we use to publish before?
Unfortunately I don't know what was published in the past, @sikler do you know?
I will try to dig up the old DIP results.
I have implemented two possible workflows for DIP publishing:
- read the BeamSpotOnlineObject and DCSStatus, and transmit their data (this is the default);
- the old way of reading the /nfs files (BeamFitResults.txt and BeamFitResults_TkStatus.txt), which can be selected with readFromNFS = True.
So the default now is BeamSpotOnlineObject and DCSStatus (readFromNFS = False).
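As a sketch of how this switch might look in the client configuration (only the readFromNFS parameter name comes from this discussion; the module label beamSpotDipServer and the parameter type are assumptions):

```python
# Hypothetical config fragment; the module label and parameter type are
# assumptions, only the readFromNFS name is taken from this discussion.
process.beamSpotDipServer.readFromNFS = cms.untracked.bool(False)   # default: BeamSpotOnlineObject + DCSStatus
# process.beamSpotDipServer.readFromNFS = cms.untracked.bool(True)  # legacy: read the /nfs txt files
```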
Hello @sikler
What is the reason or advantage of the operation mode that reads the data from the conditions DB (the BeamSpotOnlineObject)? Why has it been chosen as the default?
Also, taking into account that we have two workflows ("Legacy" and "HLT") populating two different tags, which one are you going to select?
Hi @ggovi
What is the reason or advantage of the operation mode reading the data from the condition DB (the BeamSpotOnlineObject)? Why it has been chosen as the default choice?
It was felt that reading /nfs files was not a good practice, so a full online solution with BeamSpotOnlineObject was the way to go.
Also, taking into account that we have two workflows ("Legacy" and "HLT") populating two different tags, which one are you going to select?
My understanding was that there will be some switch at the creation of the BeamSpotOnlineObject, and that it will decide which beamspot is used online and also which one is transmitted to LHC's DIP. So the beamspot used online should match the DIP version.
@sikler thanks for the answer. The fragility of the nfs filesystem indeed sounds like a good reason for preferring the DB as input. I'm actually wondering whether this problem also holds for the BeamSpot online workflows... because in that case we would be affected anyhow. @francescobrivio may clarify.
In any case, this choice has a cost - see the variables added in the payload, which will presumably not be consumed by HLT, but only "bridged out" for the DIP storage.
Concerning the choice of the two tags, I think the actual implementation has evolved with respect to the earlier discussion, and now we actually have two payloads (in the two different tags), with no DB-stored information about which one has been selected/consumed by HLT. @francescobrivio may comment on this as well...
- @ggovi the BeamSpot online workflows do not rely on nfs: they write the payload directly and upload it to CondDB. nfs is only used to store the txt files, so I don't think there is a problem there.
- @sikler regarding the choice of the two workflows, it's like Giacomo wrote: both payloads are produced and uploaded to the DB in different tags. Then it's up to the consumer code to choose the preferred one. At the moment there are 2 consumers: the HLT and the DQM monitoring. Both rely on an ESProducer (OnlineBeamSpotESProducer.cc) that reads both tags and takes the final decision on which BeamSpot to use; since they are based on the same producer, the decision taken is the same for both (this has been commissioned and verified during MWGR and CRUZET). You can see an example of the implementation in the DQM monitoring client onlinebeammonitor_dqm_sourceclient-live_cfg.py.
So in the end we have to decide whether what we publish in DIP is what we actually use in HLT (and in Express), or whether we want to always publish the same result in any case (e.g. always the legacy BeamSpot).
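To make the "final decision" idea concrete, here is a hedged sketch of the kind of choice such an ESProducer could make between the two payloads; the struct, names, and freshness criterion are illustrative assumptions, not the actual OnlineBeamSpotESProducer logic:

```cpp
#include <ctime>

// Illustrative sketch only: prefer the fresher of the two payloads,
// falling back when one is stale. The struct and the criterion are
// assumptions, not the actual OnlineBeamSpotESProducer implementation.
struct PayloadInfo {
  std::time_t creationTime;  // epoch seconds when the payload was created
  bool valid;                // whether the tag delivered a usable payload
};

const PayloadInfo& choosePayload(const PayloadInfo& hlt,
                                 const PayloadInfo& legacy,
                                 std::time_t now,
                                 std::time_t maxAge) {
  const bool hltFresh = hlt.valid && (now - hlt.creationTime) <= maxAge;
  const bool legacyFresh = legacy.valid && (now - legacy.creationTime) <= maxAge;
  if (hltFresh && legacyFresh)
    return (hlt.creationTime >= legacy.creationTime) ? hlt : legacy;
  return hltFresh ? hlt : legacy;  // legacy also serves as the fallback
}
```

Since both consumers would call the same selection function on the same inputs, they necessarily agree, which is the property described above.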
@francescobrivio well, if the workflows write files outside the local node disk, they could still be affected by nfs issues. In this case, it would be unreasonable to address the issue only on the DIP workflow side. Maybe we should discuss the overall picture. I'll try to set up a discussion.
@jfernan2 @sikler about testing this PR in P5: @sikler the only part I'm not sure about is whether we can set the output of the DIP publication? i.e. we don't want to publish these fake values anywhere public, we just want to test that the publication (and the full chain) actually works.
As I wrote above, there are two options:
Sounds viable.
Yes, we can set it to anything by marking the publication as 'test', so people won't take the numbers seriously.
Pull request #35193 was updated. @emanueleusai, @ahmad3213, @cmsbuild, @jfernan2, @pmandrik, @pbo0, @rvenditti can you please check and sign again.
Pull request #35193 was updated. @SiewYan, @gouskos, @lveldere, @bbilin, @wajidalikhan, @slava77, @ianna, @Saptaparna, @Martin-Grunewald, @rekovic, @alberto-sanchez, @santocch, @cecilecaillol, @perrotta, @civanch, @yuanchao, @makortel, @ahmad3213, @cmsbuild, @missirol, @agrohsje, @fwyzard, @GurpreetSinghChahal, @smorovic, @davidlange6, @smuzaffar, @Dr15Jones, @jpata, @cvuosalo, @emanueleusai, @mdhildreth, @AdrianoDee, @jfernan2, @kskovpen, @sbein, @ggovi, @qliphy, @pmandrik, @fabiocos, @pbo0, @francescobrivio, @malbouis, @ssekmen, @mkirsano, @jordan-martins, @emeschi, @alja, @srimanob, @fgolf, @mariadalfonso, @rvenditti, @tvami can you please check and sign again.
-1
Force-pushed from a2bba87 to 9240ec1
Pull request #35193 was updated; same signatures requested again.
-alca
-db
PR description:
PR validation:
PR issues, questions: