
BCID overflow correction #29

Closed
kunathj opened this issue Mar 18, 2022 · 2 comments


kunathj commented Mar 18, 2022

This inaccuracy only concerns the event building. There, it is mainly relevant when running in cosmic mode (much longer acquisition windows, and thus a much higher chance of a BCID overflow mismatch).

Up to now, the following heuristic was used for BCID overflow correction:
Within the same chip, the BCID count must always increase during a spill/cycle/acquisition window. Thus, whenever the BCID value drops, we know that the clock must have overflowed, and we add (n_memory_cycles *) the overflow threshold (2^12). Not all overflows will be caught, but this is the best we can do with the information we get from the chip.
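A minimal sketch of this heuristic, assuming the hits of one chip are processed in memory-cell (SCA) order; the function name `correct_bcid_overflow` and the list-of-integers interface are illustrative and not the actual code in build_events.py:

```python
BCID_OVERFLOW = 2**12  # the 12-bit acquisition clock wraps at 4096


def correct_bcid_overflow(raw_bcids):
    """Overflow-correct the BCIDs of one chip within one acquisition window.

    `raw_bcids` must be ordered by memory cell (SCA), i.e. by writing time.
    Every drop in the raw BCID is counted as one clock overflow; overflows
    without an intermediate hit cannot be detected.
    """
    corrected = []
    n_overflows = 0
    previous = -1
    for bcid in raw_bcids:
        if bcid < previous:
            # The BCID can only increase during a spill, so a drop means
            # the counter wrapped around.
            n_overflows += 1
        corrected.append(bcid + n_overflows * BCID_OVERFLOW)
        previous = bcid
    return corrected


# Example from below: noise at 3917, then the coincidence event at 412.
assert correct_bcid_overflow([3917, 412]) == [3917, 4508]
```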

The merging of chips into detector-level events in build_events.py is performed on these overflow-corrected BCID values. We lose the (true) coincidences between chips if:

  • One of the chips detected the overflow (e.g. a noisy cell previously passed the threshold at BCID 3917, and the coincidence event arrives at BCID 412, which gets corrected to 412 + 4096 = 4508).
  • Another chip detects the coincidence event at 412, but had no noise written to its memory SCAs earlier in the acquisition window, so its overflow goes unnoticed.

The chip-level events with BCIDs 4508 and 412 will not be merged.
A simple solution to avoid this: use the BCID without overflow correction when merging chips into events, and assign to the merged event the highest (overflow-corrected) BCID value amongst the chips, i.e. the one from the chip with the most n_memory_cycles. I implemented and pushed the new procedure with commit bd670e2; a sketch follows below.
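This is a sketch of the new merging step, under the assumption that each chip-level event carries both its raw and its overflow-corrected BCID; the field names and the exact-match grouping are simplifications for illustration, not the implementation in build_events.py:

```python
from collections import defaultdict


def merge_chips_into_events(chip_events):
    """Merge chip-level events into detector-level events.

    `chip_events` is a list of dicts with the (illustrative) keys
    'chip', 'raw_bcid' and 'corrected_bcid'.
    """
    grouped = defaultdict(list)
    for chip_event in chip_events:
        # Merge on the *uncorrected* BCID, so a missed overflow on one chip
        # cannot break the coincidence between chips.
        grouped[chip_event["raw_bcid"]].append(chip_event)

    detector_events = []
    for raw_bcid, members in grouped.items():
        detector_events.append({
            "raw_bcid": raw_bcid,
            # Assign the highest corrected BCID among the chips, i.e. the
            # value from the chip that counted the most memory cycles.
            "bcid": max(m["corrected_bcid"] for m in members),
            "chips": [m["chip"] for m in members],
        })
    return detector_events
```

With the example above, the two chip-level events with corrected BCIDs 4508 and 412 share the raw BCID 412 and are now merged into one detector-level event with BCID 4508.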

Note:

  1. In cosmic data taking mode the acquisition window is rather long. With the universe as our particle accelerator, the event rates are low. On top of that, there is no fixed event direction. A chip is by no means guaranteed to observe hits before every memory overflow. This is where I expect the largest changes/improvements from this commit.
  2. In testbeam data taking mode, the acquisition clock goes up to 5000, but overflows at 4096. In most cases where there is an event after 4096, noise or a previous event on the same chip will give us an earlier hit, so the BCID drop is seen and the overflow is corrected anyway.
  3. With a clock optimized for the experiment's running conditions, this should never be an issue.

I believe that the new procedure is strictly better than the previous one. The only regression I can see: previously, the assigned BCID was guaranteed to be smaller than or equal to the true BCID. Now it is possible that a chip's event is mis-merged with a chip's event from the wrong memory cycle; since the merged event is assigned the highest BCID, that chip's BCID ends up overestimated.
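A contrived case of that regression, reusing the illustrative `merge_chips_into_events` sketch from above:

```python
chip_events = [
    # Chip A saw noise before the overflow, so its event at raw BCID 412
    # was correctly shifted to 412 + 4096 = 4508.
    {"chip": "A", "raw_bcid": 412, "corrected_bcid": 4508},
    # Chip B genuinely fired at BCID 412 in the *first* memory cycle and is
    # unrelated to chip A's event.
    {"chip": "B", "raw_bcid": 412, "corrected_bcid": 412},
]

merged = merge_chips_into_events(chip_events)
# Both entries share raw BCID 412, so they end up in one detector event with
# BCID max(4508, 412) = 4508: chip B's BCID is overestimated by one overflow.
print(merged)
```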

It would be cool to have someone more knowledgeable than me confirm my logic/code here. But it can wait until after the testbeam.

kunathj added a commit that referenced this issue Mar 18, 2022
Longer description on why this is necessary (and if it does the
right thing now) in the corresponding GitHub issue:
#29

kunathj commented Mar 18, 2022

Some numbers from the 03112022 commissioning run:

| BCID merging procedure in eventbuilding | before bd670e2 | with bd670e2 |
| --- | --- | --- |
| >= 4 coincidences, hits | 1.517.025 | 1.726.453 |
| >= 4 coincidences, events | 163.973 | 176.556 |
| >= 10 coincidences, hits | 216.407 | 283.740 |
| >= 10 coincidences, events | 13.915 | 17.540 |
[Attached comparison plots: before bd670e2 (images not preserved).]

Of course, this was an early-stage commissioning run, so these numbers should not be over-interpreted. Aesthetically, I preferred the smoothness of the old bcid_first_sca_full, but I think we were under-counting the memory cycles there.


kunathj commented Apr 18, 2022

Should be solved with 141dc46.

kunathj closed this as completed Apr 18, 2022