The inaccuracy only concerns the event building. There, it is mainly relevant when running in cosmic mode (much longer acquisition windows, so much higher chances for BCID overflow mismatch).
Up to now, the following heuristic was used for BCID overflow correction:
Within the same chip, the BCID count must always increase during a spill/cycle/acquisition window. Thus, whenever the BCID value drops, we know that the clock must have overflown, and we add n_memory_cycles times the overflow threshold (2^12). Not all overflows will be caught, but this is the best that can be done with the information we obtain from the chip.
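For illustration, a minimal sketch of this heuristic in Python; the function name and the input layout are hypothetical, not the actual implementation:

```python
BCID_OVERFLOW = 2**12  # the 12-bit BCID counter wraps at 4096

def correct_bcids(raw_bcids):
    """Correct one chip's raw BCIDs for counter overflows.

    raw_bcids: the BCID values of a chip's memory cells, in the order
    they were written during one spill/cycle/acquisition window.
    """
    corrected = []
    n_overflows = 0
    previous = -1
    for bcid in raw_bcids:
        # Within one acquisition window the BCID must increase, so a
        # drop can only mean that the counter wrapped around.
        if bcid < previous:
            n_overflows += 1
        previous = bcid
        corrected.append(bcid + n_overflows * BCID_OVERFLOW)
    return corrected

# E.g. correct_bcids([3917, 412]) -> [3917, 4508]: the drop from 3917
# to 412 is counted as one overflow.
```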
The merging of chips into detector-level events in build_events.py is performed on these overflow-corrected BCID values. We lose the (true) coincidences between chips if:
One of the chips detected the overflow (e.g. a noisy cell previously passed the threshold with BCID 3917, the coincidence event has BCID 412).
Another chip detects the coincidence event at 412, but had no noise written to its memory SCA previously during the acquisition window.
The chip-level events with corrected BCIDs 4508 and 412 will then not be merged.
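To make the numbers concrete, the lost coincidence worked out (same values as in the example above):

```python
BCID_OVERFLOW = 2**12

# Chip A saw noise at BCID 3917 before the event: the drop to 412 is
# detected as an overflow, so its event BCID is corrected upwards.
chip_a = 412 + 1 * BCID_OVERFLOW  # = 4508
# Chip B had nothing in its memory SCA before the event: no drop is
# seen, so no correction is applied.
chip_b = 412
# Merging on the corrected values compares 4508 against 412. The gap of
# one full overflow period is far outside any coincidence window, so
# the true coincidence at raw BCID 412 is lost.
```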
A simple solution to avoid this: use the BCID without overflow correction when merging chips into events. I propose to assign the event the highest BCID value amongst its chips (the one with the most n_memory_cycles). I implemented and pushed the new procedure with commit bd670e2.
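A minimal sketch of the new merging logic, again with hypothetical names; in particular the coincidence_window parameter and the event layout are illustrative stand-ins, not the actual build_events.py interface:

```python
def merge_chips(chip_events, coincidence_window=3):
    """Group chip-level events into detector-level events.

    chip_events: (chip_id, raw_bcid, corrected_bcid) tuples from one
    acquisition window. Chips are matched on the raw (uncorrected)
    BCID, which is immune to per-chip differences in overflow
    detection.
    """
    events = []
    for ev in sorted(chip_events, key=lambda e: e[1]):
        if events and ev[1] - events[-1]["raw_bcid"] <= coincidence_window:
            events[-1]["chips"].append(ev)
            # The event gets the highest corrected BCID amongst its
            # chips, i.e. the one with the most memory cycles counted.
            events[-1]["bcid"] = max(events[-1]["bcid"], ev[2])
        else:
            events.append({"raw_bcid": ev[1], "bcid": ev[2], "chips": [ev]})
    return events

# With the example above, the two chips now merge into one event:
# merge_chips([("A", 412, 4508), ("B", 412, 412)]) yields a single
# event with raw BCID 412 and event BCID 4508.
```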
Note:
In cosmic data-taking mode, the acquisition window is rather long. With the universe as our particle accelerator, the event rates are low. On top of that, there is no fixed event direction. A chip is by no means guaranteed to observe hits before every memory overflow. This is where I expect the largest changes/improvements from this commit.
In testbeam data-taking mode, the acquisition clock goes up to 5000 but overflows at 4096. In most cases where there is an event after 4096, noise or a previous event on the same chip will already have given us a hit before the overflow, so the drop is detected and corrected.
With a clock optimized for the experiment's running conditions, this should never be an issue.
I believe that the new procedure is strictly better than the previous one. The only regression that I can see: previously, the BCID was guaranteed to be smaller than or equal to the true BCID. Now, it is possible that a chip's BCID is mis-merged with a chip from the wrong memory_cycle (e.g. two chips both report raw BCID 412, but one of them actually fired one overflow period later, at true BCID 4508). As the merged event obtains the highest BCID, the first chip's BCID is then overestimated.
It would be cool to have someone more knowledgeable than me confirm my logic/code here. But it can wait until after the testbeam.
Of course, it was an early-stage commissioning run, so these numbers should not be over-interpreted. Aesthetically, I preferred the smoothness of the old bcid_first_sca_full, but I think that we were under-counting the memory cycles there.