Note that the methods discussed in this section are written specifically for continuous-energy mode, but equivalent relations apply to multi-group mode if the particle's energy is replaced with the particle's group.
The tally capability in OpenMC adopts a philosophy similar to that employed in the MC21 Monte Carlo code to give maximum flexibility in specifying tallies while still maintaining scalability. Any tally in a Monte Carlo simulation can be written in the following form:
X = ∫dr ∫dΩ ∫dE f(r, Ω̂, E) ψ(r, Ω̂, E)
A user can specify one or more filters which identify which regions of phase space should score to a given tally (the limits of integration as shown in equation tally-integral
) as well as the scoring function (f in equation tally-integral
). For example, if the desired tally was the (n, γ) reaction rate in a fuel pin, the filter would specify the cell which contains the fuel pin and the scoring function would be the radiative capture macroscopic cross section. The following quantities can be scored in OpenMC: flux, total reaction rate, scattering reaction rate, neutron production from scattering, higher scattering moments, (n, xn) reaction rates, absorption reaction rate, fission reaction rate, neutron production rate from fission, and surface currents. The following variables can be used as filters: universe, material, cell, birth cell, surface, mesh, pre-collision energy, post-collision energy, polar angle, azimuthal angle, and the cosine of the change-in-angle due to a scattering event.
With filters for pre- and post-collision energy and scoring functions for scattering and fission production, it is possible to use OpenMC to generate cross sections with user-defined group structures. These multigroup cross sections can subsequently be used in deterministic solvers such as coarse mesh finite difference (CMFD) diffusion.
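As a rough sketch of how such a tally might be specified through OpenMC's Python API, the snippet below sets up flux, total, scattering, and fission-production scores with a two-group energy filter over a fuel cell. The cell ID, group boundaries, and tally name are placeholders chosen for illustration, not values from any particular model.

```python
import openmc

# Illustrative two-group structure in eV (thermal cutoff at 0.625 eV)
group_edges = [0.0, 0.625, 2.0e7]

# Filter on the fuel cell (ID 1 assumed here) and on pre-collision energy
fuel_filter = openmc.CellFilter([1])
energy_filter = openmc.EnergyFilter(group_edges)

# Scores needed to collapse multigroup constants for a deterministic solver
tally = openmc.Tally(name='fuel group constants')
tally.filters = [fuel_filter, energy_filter]
tally.scores = ['flux', 'total', 'scatter', 'nu-fission']

openmc.Tallies([tally]).export_to_xml()
```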
Some Monte Carlo codes suffer severe performance penalties when tallying a large number of quantities. Care must be taken to ensure that the tally system scales well with the total number of tally bins. In OpenMC, a mapping technique is used that allows fast determination of which tally/bin combinations need to be scored given a particle's phase space coordinates. For each discrete filter variable, a list is stored containing the tally/bin combinations that could be scored to for each value of that filter variable. If a particle is in cell n, the mapping would identify which tally/bin combinations specify cell n for the cell filter variable. In this manner, it is not necessary to check the phase space variables against every tally. Note that this technique applies only to discrete filter variables and cannot be applied to energy, angle, or change-in-angle bins. For these filters, it is necessary to perform a binary search on the specified energy grid.
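The idea behind the mapping can be pictured as an inverted index from each value of a discrete filter variable, such as a cell ID, to the tally/bin combinations that use it, so that scoring requires a single lookup rather than a scan over all tallies. The following is a minimal Python analogue for illustration only, not OpenMC's internal data structure; the tally names and cell IDs are made up.

```python
from collections import defaultdict

# Hypothetical tallies, each listing the cells accepted by its cell filter
tallies = {
    'pin power': [101, 102, 103],
    'capture rate': [102, 104],
}

# Build the map once: cell ID -> list of (tally name, filter bin index)
cell_map = defaultdict(list)
for name, cells in tallies.items():
    for bin_index, cell in enumerate(cells):
        cell_map[cell].append((name, bin_index))

# During transport, one lookup gives every tally/bin the particle can score to
particle_cell = 102
for name, bin_index in cell_map[particle_cell]:
    print(f'score to "{name}", cell bin {bin_index}')
```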
One quantity we may wish to compute during the course of a Monte Carlo simulation is the flux or a reaction rate integrated over a finite volume. The volume may be a particular cell, a collection of cells, or the entire geometry. There are various methods by which we can estimate these quantities, as described below.
The analog estimator is the simplest type of estimator for reaction rates. The basic idea is that we simply count the number of actual reactions that take place and use that as our estimate for the reaction rate. This can be written mathematically as
Rx = (1/W) ∑i ∈ A wi
where Rx is the reaction rate for reaction x, i denotes an index for each event, A is the set of all events resulting in reaction x, W is the total starting weight of the particles, and wi is the pre-collision weight of the particle as it enters event i. One should note that equation analog-estimator
is volume-integrated, so if we want a volume-averaged quantity, we need to divide by the volume of the region of integration. If survival biasing is employed, the analog estimator cannot be used for any reactions with zero neutrons in the exit channel.
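A toy illustration of the analog estimator (not OpenMC code, and with made-up one-group cross sections) is shown below: capture events are counted with their pre-collision weights and divided by the total starting weight. Since every history in this infinite scattering/capture medium eventually ends in capture, the estimate should converge to one capture per unit source weight.

```python
import random

SIGMA_S = 0.6   # made-up macroscopic scattering cross section (1/cm)
SIGMA_C = 0.4   # made-up macroscopic capture cross section (1/cm)
SIGMA_T = SIGMA_S + SIGMA_C

def analog_capture_rate(n_particles):
    """Analog estimate of the capture rate per unit source weight."""
    total_weight = float(n_particles)  # W: all particles start with weight 1
    score = 0.0
    for _ in range(n_particles):
        weight = 1.0
        while True:
            # Sample which reaction occurs at this collision
            if random.random() < SIGMA_C / SIGMA_T:
                score += weight   # event belongs to A: add pre-collision weight
                break             # history terminated by capture
            # otherwise the neutron scattered and the history continues
    return score / total_weight

print(analog_capture_rate(100_000))   # converges to 1.0 in this simple medium
```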
While the analog estimator is conceptually very simple and easy to implement, it can suffer from high variance because low-probability events will not occur often enough to yield good statistics if they are being tallied. Thus, it is desirable to use a different estimator that allows us to score to the tally more often. One such estimator is the collision estimator. Instead of tallying a reaction only when it happens, the idea is to make a contribution to the tally at every collision.
We can start by writing a formula for the collision estimate of the flux. Since R = Σtϕ where R is the total reaction rate, Σt is the total macroscopic cross section, and ϕ is the scalar flux, it stands to reason that we can estimate the flux by taking an estimate of the total reaction rate and dividing it by the total macroscopic cross section. This gives us the following formula:
ϕ = (1/W) ∑i ∈ C wi / Σt(Ei)
where W is again the total starting weight of the particles, C is the set of all events resulting in a collision with a nucleus, and Σt(Ei) is the total macroscopic cross section of the target material at the incoming energy of the particle, Ei.
If we multiply both sides of equation collision-estimator-flux
by the macroscopic cross section for some reaction x, then we get the collision estimate for the reaction rate for that reaction:
Rx = (1/W) ∑i ∈ C wi Σx(Ei) / Σt(Ei)
where Σx(Ei) is the macroscopic cross section for reaction x at the incoming energy of the particle Ei. In comparison to equation analog-estimator
, we see that the collision estimator will result in a larger number of events scoring to the tally, each with a smaller contribution (since each score is multiplied by Σx/Σt).
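The variance advantage can be seen in a small numerical experiment, again with made-up one-group cross sections rather than OpenMC code. Here a rare reaction, say (n, α), is estimated per source particle both by counting actual (n, α) events (analog) and by scoring wΣα/Σt at every collision (collision estimator); both estimates agree, but the spread of the collision estimator between histories is noticeably smaller.

```python
import random
from statistics import mean, stdev

SIGMA_S, SIGMA_G, SIGMA_A = 0.94, 0.05, 0.01   # scattering, (n,gamma), (n,alpha), 1/cm
SIGMA_T = SIGMA_S + SIGMA_G + SIGMA_A          # total cross section

def one_history():
    """Return (analog score, collision score) for a single unit-weight history."""
    analog, collision = 0.0, 0.0
    while True:
        collision += SIGMA_A / SIGMA_T          # collision estimator scores every collision
        xi = random.random() * SIGMA_T          # sample the reaction type
        if xi < SIGMA_A:
            analog += 1.0                       # an actual (n,alpha) event occurred
            return analog, collision
        elif xi < SIGMA_A + SIGMA_G:
            return analog, collision            # absorbed by (n,gamma); no (n,alpha) score
        # otherwise the neutron scattered; the history continues

scores = [one_history() for _ in range(50_000)]
analog_scores = [a for a, _ in scores]
collision_scores = [c for _, c in scores]

print('analog    mean %.4f  std %.4f' % (mean(analog_scores), stdev(analog_scores)))
print('collision mean %.4f  std %.4f' % (mean(collision_scores), stdev(collision_scores)))
```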
One other method we can use to increase the number of events that score to tallies is to use an estimator that scores contributions to a tally along every track of the particle rather than only at collisions. This is known as a track-length estimator, sometimes also called a path-length estimator. We first start with an expression for the volume-integrated flux, which can be written as
Vϕ = ∫dr∫dE∫dΩ∫dt ψ(r, Ω̂, E, t)
where V is the volume, ψ is the angular flux, r is the position of the particle, Ω̂ is the direction of the particle, E is the energy of the particle, and t is the time. By noting that ψ(r, Ω̂, E, t) = v n(r, Ω̂, E, t), where v is the particle speed and n is the angular neutron density, we can rewrite equation flux-integrated
as
Vϕ = ∫dr∫dE∫dt v∫dΩ n(r, Ω̂, E, t).
Using the relations N(r, E, t) = ∫dΩn(r, Ω̂, E, t) and dℓ = v dt where dℓ is the differential unit of track length, we then obtain
Vϕ = ∫dr∫dE∫dℓN(r, E, t).
Equation track-length-integral
indicates that we can use the length of a particle's trajectory as an estimate for the flux, i.e. the track-length estimator of the volume-integrated flux would be
ϕ = (1/W) ∑i ∈ T wi ℓi
where T is the set of all the particle's trajectories within the desired volume and ℓi is the length of the i-th trajectory. In the same vein as equation collision-estimator
, the track-length estimate of a reaction rate is found by multiplying equation track-length-flux
by a macroscopic reaction cross section:
Rx = (1/W) ∑i ∈ T wi ℓi Σx(Ei)
One important fact to take into consideration is that the use of a track-length estimator precludes us from using any filter that requires knowledge of the particle's state following a collision, because by definition the particle will not have undergone a collision at every event. Thus, for tallies with outgoing-energy filters (which require the post-collision energy), scattering change-in-angle filters, or tallies of scattering moments (which require the cosine of the change-in-angle), we must use an analog estimator.
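As a check on the track-length formula, consider the made-up case of a unit-weight pencil beam entering a purely absorbing slab of thickness L: the expected track length per source particle inside the slab, and hence the volume-integrated flux per source particle, is (1 − e−ΣtL)/Σt. The short sketch below (not OpenMC code) scores the portion of each flight path lying inside the slab and compares against the analytic value.

```python
import math
import random

SIGMA_T = 0.5   # made-up total (and purely absorbing) cross section, 1/cm
L = 3.0         # slab thickness, cm
N = 200_000     # number of source particles, each with weight 1

total_track = 0.0
for _ in range(N):
    # Distance to the (absorbing) collision, sampled from an exponential
    d = -math.log(1.0 - random.random()) / SIGMA_T
    # Only the portion of the track inside the slab contributes to the tally
    total_track += min(d, L)

flux_estimate = total_track / N   # (1/W) * sum of w_i * l_i with w_i = 1
flux_exact = (1.0 - math.exp(-SIGMA_T * L)) / SIGMA_T

print('track-length estimate:', flux_estimate)
print('analytic value:       ', flux_exact)
```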
As was discussed briefly in methods_introduction
, any given result from a Monte Carlo calculation, colloquially known as a "tally", represents an estimate of the mean of some random variable of interest. This random variable typically corresponds to some physical quantity like a reaction rate, a net current across some surface, or the neutron flux in a region. Given that all tallies are produced by a stochastic process, there is an associated uncertainty with each value reported. It is important to understand how the uncertainty is calculated and what it tells us about our results. To that end, we will introduce a number of theorems and results from statistics that should shed some light on the interpretation of uncertainties.
The law of large numbers is an important statistical result telling us that the average value of a large number of repeated experiments should be close to the expected value. Let X1, X2, … be an infinite sequence of independent, identically-distributed random variables with expected values E(X1) = E(X2) = ⋯ = μ. One form of the law of large numbers states that the sample mean X̄n = (X1 + X2 + ⋯ + Xn)/n converges to the expected value μ as n → ∞.
The central limit theorem (CLT) is perhaps the most well-known and ubiquitous statistical theorem, with far-reaching implications across many disciplines. The CLT is similar to the law of large numbers in that it tells us the limiting behavior of the sample mean. Whereas the law of large numbers tells us only that the value of the sample mean will converge to the expected value of the distribution, the CLT says that the distribution of the sample mean will converge to a normal distribution. As before, let X1, X2, … be an infinite sequence of independent, identically-distributed random variables with expected values E(Xi) = μ and variances Var(Xi) = σ2 < ∞. Note that we don't require that these random variables take on any particular distribution -- they can be normal, log-normal, Weibull, etc. The central limit theorem states that as n → ∞, the random variable √n(X̄n − μ) converges in distribution to a normal distribution with mean zero and variance σ2.
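Both results can be seen numerically with a distribution that is itself far from normal, such as an exponential with mean 1 (a sketch for illustration, with arbitrary sample sizes): the average of many sample means approaches μ = 1, and their spread approaches σ/√n, as the CLT's limiting normal distribution predicts.

```python
import random
from statistics import mean, stdev

n = 100          # size of each sample
trials = 20_000  # number of independent sample means

# Exponential with rate 1: mu = 1 and sigma = 1, but the distribution is skewed
sample_means = [mean(random.expovariate(1.0) for _ in range(n)) for _ in range(trials)]

print('mean of sample means:', mean(sample_means))   # close to mu = 1
print('std of sample means: ', stdev(sample_means))  # close to sigma/sqrt(n) = 0.1
```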
Given independent samples drawn from a random variable, the sample mean is simply an estimate of the average value of the random variable. In a Monte Carlo simulation, the random variable represents physical quantities that we want tallied. If X is the random variable with N observations x1, x2, …, xN, then an unbiased estimator for the population mean is the sample mean, defined as
x̄ = (1/N) ∑i xi
The variance of a population indicates how spread out different members of the population are. For a Monte Carlo simulation, the variance of a tally is a measure of how precisely we know the tally value, with a lower variance indicating a higher precision. There are a few different estimators for the population variance. One of these is the second central moment of the distribution, also known as the biased sample variance:
sN2 = (1/N) ∑i (xi − x̄)2
This estimator is biased because its expected value is actually not equal to the population variance:
E(sN2) = ((N − 1)/N) σ2
where σ2 is the actual population variance. As a result, this estimator should not be used in practice. Instead, one can use Bessel's correction to come up with an unbiased sample variance estimator:
s2 = (1/(N − 1)) ∑i (xi − x̄)2 = (1/(N − 1)) (∑i xi2 − N x̄2)
This is the estimator normally used to calculate sample variance. The final form in equation unbiased-variance
is especially suitable for computation since we do not need to store the values at every realization of the random variable as the simulation proceeds. Instead, we can simply keep a running sum and sum of squares of the values at each realization of the random variable and use that to calculate the variance.
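A minimal sketch of this bookkeeping (illustrative only, not OpenMC's implementation) keeps just N, the running sum, and the running sum of squares, and reconstructs the sample mean and unbiased variance on demand:

```python
class RunningTally:
    """Accumulate a realization count, a sum, and a sum of squares."""

    def __init__(self):
        self.n = 0
        self.total = 0.0
        self.total_sq = 0.0

    def add(self, x):
        self.n += 1
        self.total += x
        self.total_sq += x * x

    def mean(self):
        return self.total / self.n

    def variance(self):
        # Unbiased sample variance: (sum of squares - N * mean^2) / (N - 1)
        return (self.total_sq - self.n * self.mean() ** 2) / (self.n - 1)


tally = RunningTally()
for x in [1.1, 0.9, 1.3, 0.7, 1.0]:
    tally.add(x)
print(tally.mean(), tally.variance())
```

One caveat of this naive sum-of-squares form is that it can lose precision through cancellation when the variance is small relative to the mean, a trade-off a production implementation has to weigh.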
The previous sections discussed how to estimate the mean and variance of a random variable using statistics on a finite sample. However, we are generally not interested in the variance of the random variable itself; we are more interested in the variance of the estimated mean. The sample mean is the result of our simulation, and the variance of the sample mean will tell us how confident we should be in our answers.
Fortunately, it is quite easy to estimate the variance of the mean if we are able to estimate the variance of the random variable. We start with the observation that if we have a series of uncorrelated random variables, we can write the variance of their sum as the sum of their variances:
Var(∑i Xi) = ∑i Var(Xi)
This result is known as the Bienaymé formula. We can use this result to determine a formula for the variance of the sample mean. Assuming that the realizations of our random variable are again independent, identically-distributed samples, we have
Var(X̄) = Var((1/N) ∑i Xi) = (1/N2) ∑i Var(Xi) = σ2/N
We can combine this result with equation unbiased-variance
to come up with an unbiased estimator for the variance of the sample mean:
sX̄2 = s2/N = (1/(N(N − 1))) (∑i xi2 − N x̄2)
At this point, an important distinction should be made between the estimator for the variance of the population and the estimator for the variance of the mean. As the number of realizations increases, the estimated variance of the population based on equation unbiased-variance
will tend to the true population variance. On the other hand, the estimated variance of the mean will tend to zero as the number of realizations increases. A practical interpretation of this is that the longer you run a simulation, the better you know your results. Therefore, by running a simulation long enough, it is possible to reduce the stochastic uncertainty to arbitrarily low levels.
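This 1/N behavior is easy to observe directly. In the sketch below (with arbitrary illustrative values), samples are drawn from the same distribution for increasing N; the sample standard deviation s settles near the population value while the estimated standard deviation of the mean, s/√N, keeps shrinking.

```python
import math
import random
from statistics import stdev

random.seed(1)
for n in (100, 10_000, 1_000_000):
    xs = [random.expovariate(1.0) for _ in range(n)]   # population sigma = 1
    s = stdev(xs)                                      # estimates sigma (roughly constant)
    std_error = s / math.sqrt(n)                       # estimated std dev of the mean
    print(f'N = {n:>9}: s = {s:.4f}, s/sqrt(N) = {std_error:.6f}')
```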
While the sample variance and standard deviation give us some idea about the variability of the estimate of the mean of whatever quantities we've tallied, they do not tell us how confident we should be in the results. To quantify the reliability of our estimates, we can use confidence intervals based on the calculated sample variance.
A 1 − α confidence interval for a population parameter is defined as follows: if we repeat the same experiment many times and calculate a confidence interval for each experiment, then a fraction 1 − α of the calculated intervals will encompass the true population parameter. Let x1, x2, …, xN be samples from a set of independent, identically-distributed random variables, each with population mean μ and variance σ2. The t-statistic is defined as
t = (x̄ − μ)/(s/√N)
where x̄ is the sample mean from equation sample-mean
and s is the standard deviation based on equation unbiased-variance
. If the random variables Xi are normally-distributed, then the t-statistic has a Student's t-distribution with N − 1 degrees of freedom. This implies that
P(−t1 − α/2, N − 1 < (x̄ − μ)/(s/√N) < t1 − α/2, N − 1) = 1 − α
where t1 − α/2, N − 1 is the 1 − α/2 percentile of a t-distribution with N − 1 degrees of freedom. Thus, the 1 − α two-sided confidence interval for the sample mean is
x̄ ± t1 − α/2, N − 1 · s/√N
One should be cautioned that equation two-sided-ci
only applies if the underlying random variables are normally-distributed. In general, this may not be true for a tally random variable; the central limit theorem guarantees only that the sample mean is asymptotically normally distributed, not the underlying random variable. If batching is used, the underlying random variable is then the average over each batch, which will be approximately normally distributed as long as the conditions of the central limit theorem are met.
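Putting the pieces together, a confidence interval for a set of batch means might be computed as in the following sketch, which assumes SciPy is available for the t-distribution percentile; the batch values themselves are made up for illustration.

```python
import math
from statistics import mean, stdev

from scipy.stats import t

# Hypothetical batch means from a simulation (e.g., a tally averaged over each batch)
batch_means = [1.02, 0.97, 1.05, 0.99, 1.01, 0.96, 1.04, 1.00, 0.98, 1.03]

n = len(batch_means)
x_bar = mean(batch_means)
s = stdev(batch_means)                      # unbiased sample standard deviation

alpha = 0.05                                # for a 95% confidence interval
t_value = t.ppf(1.0 - alpha / 2.0, n - 1)   # 1 - alpha/2 percentile, N - 1 dof
half_width = t_value * s / math.sqrt(n)

print(f'{x_bar:.4f} +/- {half_width:.4f}')
```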
Let us now outline the method used to calculate percentiles of the Student's t-distribution. For one or two degrees of freedom, the percentile can be written analytically. For one degree of freedom, the t-distribution becomes a standard Cauchy distribution, whose cumulative distribution function is
F(x) = (1/π) arctan(x) + 1/2
Thus, inverting the cumulative distribution function, we find the x percentile of the standard Cauchy distribution to be
tx = tan(π(x − 1/2))
For two degrees of freedom, the cumulative distribution function also has a simple closed form:
F(x) = 1/2 + x/(2√(x2 + 2))
Solving for x, we find the x percentile to be
tx = 2√2 (x − 1/2) / √(1 − 4(x − 1/2)2)
For degrees of freedom greater than two, it is not possible to obtain an analytical formula for the inverse of the cumulative distribution function. We must resort to either numerically solving for the inverse or to an approximation. Approximations for percentiles of the t-distribution have been found with high levels of accuracy. OpenMC uses the following approximation:
where zx is the x percentile of the standard normal distribution. In order to determine an arbitrary percentile of the standard normal distribution, we use an unpublished rational approximation. After using the rational approximation, one iteration of Newton's method is applied to improve the estimate of the percentile.
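The closed forms for one and two degrees of freedom can be checked against a library implementation; the sketch below compares them with SciPy's t-distribution percentile function (OpenMC's own approximation for higher degrees of freedom and its rational approximation to the normal percentile are not reproduced here).

```python
import math

from scipy.stats import t

def t_percentile(p, df):
    """Analytic percentiles of the t-distribution for df = 1 or 2."""
    if df == 1:
        # One degree of freedom: standard Cauchy distribution
        return math.tan(math.pi * (p - 0.5))
    if df == 2:
        # Two degrees of freedom: invert F(x) = 1/2 + x / (2 sqrt(x^2 + 2))
        a = p - 0.5
        return 2.0 * math.sqrt(2.0) * a / math.sqrt(1.0 - 4.0 * a * a)
    raise ValueError('analytic form implemented only for df = 1 or 2')

for df in (1, 2):
    print(df, t_percentile(0.975, df), t.ppf(0.975, df))
```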