
Pileup and calibration DA for PHOS and CPV #5230

Merged: 2 commits, Jan 22, 2021

Conversation

peressounko
Collaborator

  • fixed an issue with same-bunch pileup in PHOS and CPV

  • added an option for raw->cluster reconstruction in CPV

  • added calibration algorithms for PHOS and CPV, and a skeleton for relative energy calibration in PHOS

  • fixed the argument range in calibration objects

  • added PHOS mis-alignment matrices

@shahor02
Collaborator

Hi @peressounko

The CI-s are failing with:

+ value='[20:10:24][ERROR] Alignable entry CPV/Module3 NOT set'
+ '[' -n '[20:10:24][ERROR] Alignable entry CPV/Module3 NOT set' ']'

It would be better not to mix so many unrelated commits in a single PR...

Cheers,
Ruben

@sawenzel
Collaborator

sawenzel commented Jan 20, 2021

I confirm that the in-bunch pileup test now passes for PHS/CPV digits. But I would also prefer one feature = one PR. The geometry fixes probably have nothing to do with pileup. Just as a wish for future developments.

Comment on lines 126 to 128

~PHOSCalibCollector() final = default;

Collaborator Author

Dear @shahor02 ,
what does the code checker suggest here? The initial version with ~PHOSCalibCollector() = default; was criticized by the checker, which suggested adding final/override. This version also causes a warning. I think this should be a final class, and I do not want to write the destructor explicitly. What should I write here? Thanks, Dmitri

Collaborator

@peressounko LLVM produces a warning (which we interpret as an error) if a class that has a virtual destructor and is not itself flagged as final declares its destructor as final. Use override = default instead.

@peressounko
Collaborator Author

Dear @sawenzel @shahor02, initially I prepared a PR with the calibration algorithms, but then I also added a fix for the pileup issue, which looks more urgent. Can we go with this PR (once the code checker warnings are cleared), or should I split it into two? Best regards, Dmitri

@shahor02
Collaborator

@peressounko fine for me to merge once the CIs pass, but it would be good if you could squash the commits, grouping them in a logically consistent way (and getting rid of fixups like "clanged").

@peressounko
Collaborator Author

build/O2/o2-dataflow has errors in QC checks, probably not related to this PR

Collaborator

@shahor02 left a comment

Hi @peressounko,

Please see some comments below; the ClassDefs for the DPL tasks should be removed.
I did not test the code, but as far as I understood @sawenzel is happy with the pile-up performance, so after fixing this it is fine with me to merge.

But I have a general question about the calibration devices (if needed, this can be reimplemented later): I see that you typically create some histos in the init, fill them through all TFs, then send the calibrated object to the CCDB populator. In this way the granularity of your objects will be a run, a fill, or even a few of them, depending on how often the workflow is started/stopped.
Is this what you want? We have a user-defined time-window calibration framework, and you copy-pasted some TOF code from @chiarazampolli, so why not rely fully on that framework, which gives you control over the coverage of the calib. objects?

Cheers,
Ruben

std::string mPath{"./"}; ///< path and name of file with collected histograms
std::unique_ptr<BadChannelMap> mBadMap; /// Final calibration object
std::array<char, o2::cpv::Geometry::kNCHANNELS> mMapDiff; /// difference between new and old map
ClassDefNV(CPVBadMapCalibDevice, 1);
Collaborator

Please remove this; there is no point in creating a dictionary for DPL Tasks (and if there were, since the Task is a virtual class, ClassDefOverride(..) should have been used).

Collaborator Author

removed

std::unique_ptr<CalibParams> mCalibParams; /// Final calibration object
std::unique_ptr<TH2F> mMean; /// Mean values in High Gain channels
std::array<short, o2::cpv::Geometry::kNCHANNELS> mGainRatio; /// Gain variation wrt previous map
ClassDefNV(CPVGainCalibDevice, 1);
Collaborator

Same comment on ClassDefNV for Task

Collaborator Author

removed

std::unique_ptr<CalibParams> mCalibObject; /// Final calibration object
std::unique_ptr<TH2F> mhRatio; /// Histogram with ratios
std::array<float, o2::phos::Mapping::NCHANNELS> mRatioDiff; /// Ratio variation wrt previous map
ClassDefNV(PHOSHGLGRatioCalibDevice, 1);
Collaborator

Idem

Collaborator Author

removed

std::unique_ptr<TH2F> mRMSLG; /// RMS of values in Low Gain channels
std::array<short, o2::phos::Mapping::NCHANNELS> mPedHGDiff; /// Pedestal variation wrt previous map
std::array<short, o2::phos::Mapping::NCHANNELS> mPedLGDiff; /// Pedestal variation wrt previous map
ClassDefNV(PHOSPedestalCalibDevice, 1);
Collaborator

Idem

Collaborator Author

removed

Comment on lines +41 to +45
TFile filein(filename.data(), "READ");
if (filein.IsOpen()) {
mMean->Add(static_cast<TH2F*>(filein.Get("Gains")));
filein.Close();
}
Collaborator

You are reading the file into the histo, filling the latter and saving it again; is this in order to accumulate over multiple runs? In this way you don't have control over what you get from the disk. If such an accumulation is needed, it should be done either via CCDB, or via filling containers which are then merged and processed.

Collaborator Author

Yes, this calibration may require a scan of several runs. These are transient data with a relatively large size, so CCDB is probably not a good place to store them. Files with histos can be used as "containers" and merged if necessary with ROOT tools. I plan to provide some configuration with paths to the files etc. Or do we already have this functionality implemented in the framework?

Collaborator

Concerning CCDB, I meant to put there not transient histos but to incrementally accumulate the calibration, if the algorithm allows this. How are you going to control the period over which the integration goes?
Also, do you really need to update the CCDB at the end of the workflow if in the next session you keep updating this file? Are these objects used for the sync. workflow?
Eventually, the containers will be handled by the framework. Anyway, the changes can be introduced later.

Collaborator Author

For gain-like calibrations there is not much sense in updating with intermediate statistics, so only final objects should be sent. I plan to check the mean statistics per channel and finalize the calibration once the threshold is reached. This will probably require manual intervention, but it should be a rare procedure, roughly once per year. Normally the CCDB will not be updated per processing at EOS; only temporary objects will be stored to files. Only when the statistics are reached will the calibration be sent to CCDB. Output to QC will be sent at each end of workflow, and experts will be able to look at the status.
For PHOS, some version of the gains will be used in reconstruction, but the final calibration will be applied at the analysis level. For CPV, the gain calibration will be necessary for clustering, but it should be produced at the commissioning stage.

Comment on lines +158 to +174
// //TODO:
// //Get current map
// int nChanged=0;
// for(short i=o2::cpv::Geometry::kNCHANNELS; --i;){
// short dp=mPedestals.getPedestal(i)-oldPed.getPedestal(i);
// mPedDiff[i]=dp ;
// if(abs(dp)>1){ //not a fluctuation
// nChanged++;
// }
// }
// if(nChanged>kMinorChange){ //serious change, do not update CCDB automatically, use "force" option to overwrite
// mUpdateCCDB=false;
// }
// else{
// mUpdateCCDB=true;
// }
}
Collaborator

Are those commented methods to be refined later? If not, better to suppress the dead code.

Collaborator Author

Yes, I will implement them together with sending/reading to CCDB.

Comment on lines 224 to 228
// //============================================================
// DataProcessorSpec getTOFChannelCalibDeviceSpec(bool useCCDB){
// using device = o2::calibration::TOFChannelCalibDevice;
// using clbUtils = o2::calibration::Utils;

Collaborator

This is just commented-out TOF code; I guess it can be eliminated.

Collaborator Author

removed

Collaborator Author

Dear @shahor02, @chiarazampolli, we will have two kinds of calibrations. The first kind, pedestals (CPV) and pedestals plus the HG/LG ratio (PHOS), should process data from dedicated runs; the full run, usually relatively short (a few minutes), should be processed. The second kind, gains (CPV) and time and gain (PHOS), will require several runs or even fills. In earlier discussions we agreed that the transient data should be stored on the disks of the DPL computers and processed once sufficient statistics are collected. I tried to use the corresponding lifetimes of the devices. Am I missing something?

Collaborator

We were speaking about the calibration info which is accumulated (merged) in containers on the EPN, processed offline, and then deleted. These EPN containers, since there will be 250 of them, will need to be aggregated, and then the calibration can be created.
In the case of your calib. devices, I am not sure: since you are using raw data, I guess they will be running on the FLP, since as far as I understand you will run the cell fits on the FLPs and will not ship the raw data to the EPN. I am not sure we can accumulate data there (it was never discussed), so we may need to produce some intermediate data from raw on the FLP and send it to the EPN for accumulation.

Collaborator Author

Calibrations like pedestals and the HG/LG ratio do use raw data. Their output histos (in addition to the CCDB objects) will be used for the bad map and should be shipped to the EPN. These are light-weight devices and should not consume much CPU. Even if they interfere with data taking, I think it is not a problem if part of the pedestal or LED runs is lost. Another option: split the device into a standard raw-decoding part on the FLP and an analysis part on the EPN.
The PHOS time and gain device will use clusters and reside on the EPN. The CPV gain device can use raw data as now, or digits if we decide to move it to the EPN. We will discuss this with the CPV experts.
