
Conversation

@tomeichlersmith (Member) commented Oct 14, 2025

@EinarElen has anything changed since you gave that presentation? I'm unsure whether my comments about the Kaon EoT estimate being unresolved are still correct.

Feel free to correct my notes if you find an issue in them.

@EinarElen (Contributor) commented Oct 14, 2025

Is there a missing $\mu$ factor in the Poisson part?

I think I would change it so it doesn't sound like this is something specific to the kaon biasing. I think the issue is that the 1/Nresample step-weight estimate probably isn't unbiased, but that it is relatively close to being so.

I have, however, been trying to think a bit more about the whole thing, and I think I have an idea for how to deal with the statistics. I think my validation by computing a biasing ratio is sound, so the overestimate from the $\langle w \rangle \cdot N$ estimate for the kaon/mixed case is real.

I'm going to make the slightly strange definition that the ideal event weights should be such that $\mathrm{EoT} = \langle w \rangle \cdot N$, or, if the weights are estimates, then taking K samples of size N and treating the EoT value from each as a Monte Carlo sample, i.e. $E[\widehat{\mathrm{EoT}}] \rightarrow \mathrm{EoT}$.

The weights we are using today are essentially a stepwise product of the relative step probability w.r.t. default Geant4. So for event $i$ with $L$ steps, $w_i = \prod_j^L p(\mathrm{step}_j)^{\mathrm{ref}} / p(\mathrm{step}_j)^{\mathrm{bias}}$.

The first part that was a bit trippy to realize is that $p(\mathrm{step}_j)^{\mathrm{bias}}$ with this method is 1. Because we run until we get a matching topology, the probability that we get a matching topology is 100%. So then we just need to find $p(\mathrm{step}_j)^{\mathrm{ref}}$. For a photon of some energy doing a photonuclear interaction with a nucleus, there is a probability $q$ that any single call to Bertini gives an event with a matching topology. Clearly, $p(\mathrm{step}_j)^{\mathrm{ref}}$ should be that probability. So our current claim is that 1/Nresample is an unbiased estimator of $q$, i.e. $E[1/N_{\mathrm{resample}}] = q$.
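
To make that concrete, here is a minimal Python sketch of the resample-until-match step and the weight it produces (hypothetical function names and a fixed toy $q$, not the actual biasing code):

```python
import random

def sample_bertini(rng, q=1e-3):
    """Stand-in for one Bertini call: True when the produced topology
    matches the one we are selecting for (happens with probability q)."""
    return rng.random() < q

def resample_until_match(rng):
    """Re-sample the interaction until the desired topology appears.

    The biased step probability is 1 by construction (we never give up),
    so the step weight factor is the 1/Nresample estimate of the
    reference probability q.
    """
    n_resample = 0
    while True:
        n_resample += 1
        if sample_bertini(rng):
            return 1.0 / n_resample

def event_weight(n_steps, rng):
    """Stepwise product w_i = prod_j p(step_j)^ref / p(step_j)^bias."""
    w = 1.0
    for _ in range(n_steps):
        w *= resample_until_match(rng)
    return w

rng = random.Random(42)
print(event_weight(n_steps=1, rng=rng))
```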

The mistake we are making is that we aren't accounting for the fact that we unconditionally stop at the first match. Computing E[1/Nresample] isn't obvious directly, but if we instead define $R = N_{\mathrm{resample}}$, it is straightforward to see that $R$ has a geometric distribution in $q$: $p(R; q) = (1-q)^{R-1} q$, i.e. one factor of $1-q$ for each of the $R-1$ times we failed and one factor of $q$ for succeeding at this step. It has $E[R] = 1/q$. Knowing that $R$ is geometric in $q$, we then know that

$$ E[1/R] = \frac{q\ln(\frac{1}{q})}{1-q} $$
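
Spelling out that expectation, using $\sum_{r \ge 1} x^{r}/r = -\ln(1-x)$:

$$
E\!\left[\frac{1}{R}\right]
= \sum_{r=1}^{\infty} \frac{1}{r}\,(1-q)^{r-1} q
= \frac{q}{1-q} \sum_{r=1}^{\infty} \frac{(1-q)^{r}}{r}
= \frac{q}{1-q}\,\bigl(-\ln q\bigr)
= \frac{q \ln(\frac{1}{q})}{1-q}
$$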

For small $q$, this goes towards $E[1/R] \approx q \ln(\frac{1}{q}) = -q\ln(q)$.

So that isn't equal to q. So our estimate is wrong :(
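
A quick numerical sanity check of that bias (a standalone sketch, not part of the simulation code):

```python
import math
import random

def check_inverse_geometric(q=1e-2, n_trials=200_000, seed=1):
    """Empirical E[1/R] for R ~ Geometric(q), compared with q itself
    and with the closed form q*ln(1/q)/(1-q)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        r = 1
        while rng.random() >= q:  # keep failing with probability 1 - q
            r += 1
        total += 1.0 / r
    empirical = total / n_trials
    closed_form = q * math.log(1.0 / q) / (1.0 - q)
    print(f"empirical E[1/R] = {empirical:.5f}, "
          f"closed form = {closed_form:.5f}, q = {q}")

check_inverse_geometric()
```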

But if we can estimate $q$, then we should be able to compute the correct weight values. The easiest approach is probably to compute and store a grid (e.g. $q$ versus photon energy) and interpolate at runtime.
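
A minimal sketch of what that runtime lookup could look like, assuming a precomputed grid of $q$ versus photon energy with placeholder values:

```python
import numpy as np

# Hypothetical precomputed grid: topology probability q per Bertini call
# versus photon energy. The numbers are placeholders, not measurements.
energy_grid = np.array([1.0, 2.0, 4.0, 8.0])          # GeV
q_grid = np.array([2.0e-4, 5.0e-4, 1.1e-3, 2.3e-3])   # placeholder q values

def q_at(energy_gev):
    """Linearly interpolate q at the photon energy of this step."""
    return float(np.interp(energy_gev, energy_grid, q_grid))

def step_weight(energy_gev):
    """Corrected step weight: p_ref / p_bias with p_bias = 1."""
    return q_at(energy_gev)

print(step_weight(3.0))
```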

Beware: the probability that I have inverted at least one fraction somewhere here is 1. But I think the conclusion is right.

@EinarElen (Contributor) left a comment


See earlier comment (I made the comment in the wrong place), but I think: add the missing $\mu$ (if I'm not mistaken) and rephrase the kaon-specific part?

@tomeichlersmith tomeichlersmith merged commit f224027 into trunk Oct 15, 2025
@tomeichlersmith tomeichlersmith deleted the 25-notes-on-eot branch October 15, 2025 14:18
