The non-deterministic behavior in peak splitting that we once observed on OSG (with microarch >= "x86_64-v3" already applied) has been a huge problem. So far, in the examples we have, max_goodness_of_split is not deterministic even for the very same peaklets, with an uncertainty of <1E-3. This means that peaklets whose max_goodness_of_split lies close to the splitting threshold might be split in different ways on different machines.
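As an illustration of which peaklets are actually at risk, one could flag those whose max_goodness_of_split lies within the observed fluctuation of the splitting threshold. This is only a sketch; the function, array, and threshold names below are assumptions for illustration, not actual strax variables:

```python
import numpy as np

def flag_at_risk(max_goodness_of_split, threshold, fluctuation=1e-3):
    """Return a boolean mask of peaklets whose max_goodness_of_split lies
    within the assumed machine-induced fluctuation (<1E-3) of the splitting
    threshold, i.e. those whose split decision could flip between machines."""
    gof = np.asarray(max_goodness_of_split, dtype=np.float64)
    return np.abs(gof - threshold) < fluctuation
```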
As for the scariest part, I refer to here. That function is wrapped by @numba.njit(nogil=True, cache=True), and I am not sure which parts of its behavior might be machine-dependent in subtle ways.
Some quick thoughts:
- Of course, the best we can do is to understand, from the bottom up, all of the risks that numba introduces there.
- If we cannot figure that out, another thing we can try is to deliberately round max_goodness_of_split to 1% precision, which would absorb the hypothetical machine-induced fluctuation (see the sketch after this list). Assuming the machine-induced fluctuation is indeed <0.1%, a 1% rounding would not change the physics at any significant level and would give us ~deterministic robustness.
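A minimal sketch of the rounding idea, assuming max_goodness_of_split is a NumPy float array (the function and parameter names are illustrative, not existing strax/straxen API):

```python
import numpy as np

def round_goodness_of_split(max_goodness_of_split, precision=0.01):
    """Round goodness-of-split values to a fixed precision (1% by default)
    so that sub-0.1% machine-induced fluctuations cannot change the value
    compared against the splitting threshold."""
    gof = np.asarray(max_goodness_of_split, dtype=np.float64)
    return np.round(gof / precision) * precision
```

Strictly speaking, a value landing almost exactly on a rounding boundary could still be rounded differently on two machines, but with a 1% grid and <0.1% fluctuations the fraction of affected peaklets should be far smaller than without rounding.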