
Fix numerical diffusion in uniformMesh converter #992

Merged: 43 commits into terrapower:main on Dec 14, 2022

Conversation

@mgjarrett (Contributor) commented Nov 22, 2022

Description

Mapping anything from a non-uniform assembly to a uniform assembly and then back will cause numerical diffusion of that value. This could be a block parameter (like mgFlux, detailedDpa, etc.) or number densities. This has been observed for neutron flux while using the nonUniformAssemFlags setting on control rods, and for detailedDpa when using the detailedAxialExpansion setting.

This commit deals with the issue of block parameters being mapped back and forth. The number density issue has already been addressed for the nonUniformAssemFlags case, but it seems to still be present for the detailedAxialExpansion case.
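
To make the effect concrete, here is a minimal, self-contained sketch (plain NumPy, not ARMI code; the mesh boundaries and values are invented) showing why a round trip through a uniform mesh smears a block parameter: each remap is a length-weighted average, so a peak in one non-uniform block gets spread into neighboring uniform cells and is never fully recovered on the way back.

```python
# Minimal sketch (not ARMI code): numerical diffusion from a mesh round trip.
import numpy as np

def remap(values, srcEdges, dstEdges):
    """Length-weighted remap of piecewise-constant values between two 1D meshes."""
    out = np.zeros(len(dstEdges) - 1)
    for i, (zLo, zHi) in enumerate(zip(dstEdges[:-1], dstEdges[1:])):
        overlap = np.clip(
            np.minimum(zHi, srcEdges[1:]) - np.maximum(zLo, srcEdges[:-1]), 0.0, None
        )
        out[i] = overlap.dot(values) / (zHi - zLo)
    return out

nonUniformEdges = np.array([0.0, 10.0, 25.0, 30.0, 50.0])  # block boundaries (cm)
uniformEdges = np.linspace(0.0, 50.0, 6)                   # 10 cm uniform mesh
param = np.array([1.0, 5.0, 9.0, 2.0])                     # a dpa-like value per block

onUniform = remap(param, nonUniformEdges, uniformEdges)
roundTrip = remap(onUniform, uniformEdges, nonUniformEdges)
print(param)      # [1. 5. 9. 2.]
print(roundTrip)  # approximately [1., 5.67, 7., 2.]: the 9.0 peak has diffused into its neighbor
```

Mapping a parameter in only one direction avoids this round trip entirely, which is the principle behind the parameter selections discussed below.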

The selection of which parameters to map is very much up for discussion. Please review the categories that have been set for REACTOR_PARAMS_TO_MAP and BLOCK_PARAMS_TO_MAP, and the _setParamsToUpdate method for both neutron and gamma transport. Please leave comments on any parameters you think are missing or should be changed. My approach to these selections is summarized below:

  • First and foremost, we want to ensure that we are not mapping any parameters both "into" and "out of" the uniform mesh assembly, because that will cause numerical diffusion.
  • We also want to make sure that when a new parameter value is generated on the uniform mesh by some physics solver, that parameter is mapped back to the non-uniform mesh.
  • We should always map number densities "into" the uniform assembly but never out of it. Other parameters (like power, temperature, etc.) generally don't need to be mapped in unless they are required to set up the input for the physics kernel. It will generally be harmless to include these additional parameters so long as they are not re-mapped back to the non-uniform assembly on the way out (a rough sketch of this selection rule follows this list).
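
A rough sketch of that selection rule, with hypothetical parameter sets (the names come from this discussion; they are not the actual contents of BLOCK_PARAMS_TO_MAP or the _setParamsToUpdate implementation):

```python
# Hypothetical sketch of the in/out selection rule described above; the sets and
# helper below are illustrative, not the real BLOCK_PARAMS_TO_MAP contents.
MAP_IN_ONLY = {"numberDensities", "power"}   # inputs the physics kernel needs
MAP_OUT_ONLY = {"mgFlux", "flux"}            # results produced on the uniform mesh

def paramsToMap(direction):
    """Return the parameters to map for one conversion direction."""
    if direction == "in":    # non-uniform assembly -> uniform assembly
        return MAP_IN_ONLY
    if direction == "out":   # uniform assembly -> non-uniform assembly
        return MAP_OUT_ONLY
    raise ValueError(f"unknown direction: {direction!r}")

# No parameter may appear in both sets; a parameter mapped in and then back out
# would make the round trip that causes numerical diffusion.
assert not (MAP_IN_ONLY & MAP_OUT_ONLY)
```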

Checklist

  • This PR has only one purpose or idea.
  • Tests have been added/updated to verify that the new/changed code works.
  • The release notes (location doc/release/0.X.rst) are up-to-date with any bug fixes or new features.
  • The documentation is still up-to-date in the doc folder.
  • The dependencies are still up-to-date in setup.py.

@mgjarrett added the bug (Something is wrong: Highest Priority) label on Nov 22, 2022
@mgjarrett (Contributor, Author) commented:

Not requesting review yet, just tagging a few interested parties @keckler @Nebbychadnezzar

mgjarrett and others added 20 commits November 21, 2022 18:02

  • The power isn't used for any real calculations, but it's necessary to map power onto the uniform mesh reactor because some downstream physics solvers want to check the energy balance before proceeding. The "energy balance" is checked by comparing the sum of b.p.power to the power specified by the settings input (a rough sketch of this check follows this list).
  • Extend NeutronicsUniformMeshConverter class for gamma cases and flux recon cases.
  • Add a test to verify that reactor params are mapped correctly by the global flux executer when nonUniformAssems are used. Needed to fix a bug to get reactor params to map.
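
A rough sketch (hypothetical helper, not the downstream solver's actual check) of the energy-balance test described in the first commit above:

```python
# Sketch of the energy-balance check: the summed block powers must match the
# user-specified total power before the solver proceeds. Names are illustrative.
def checkEnergyBalance(blockPowers, specifiedPowerW, relTol=1e-6):
    """Raise if the summed block powers disagree with the settings input."""
    total = sum(blockPowers)
    if abs(total - specifiedPowerW) > relTol * specifiedPowerW:
        raise ValueError(
            f"Energy balance failed: sum(b.p.power) = {total:.6e} W "
            f"vs. settings power = {specifiedPowerW:.6e} W"
        )

checkEnergyBalance([1.0e6, 2.5e6, 1.5e6], 5.0e6)  # passes: 5.0e6 W total
```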
@mgjarrett marked this pull request as ready for review on November 30, 2022 22:48
@mgjarrett (Contributor, Author) commented:

@Nebbychadnezzar I can't select you as a reviewer, but I wanted to get your eyes on this as well.

@keckler (Member) left a comment:

I like it. I left a handful of comments and suggestions, but I don't see anything blatantly wrong.

After this is merged, though, there is still work to do in our internal plugins to move the application/calculation of cumulative parameters outside of the geometry conversion entry/exit. For instance, I can say for certain that detailedDpaRate is currently calculated while the geometry is still converted. The same is likely true for many of the cumulative parameters. So we'll have to hunt all of those down and update them, if necessary.

For cumulative parameters that are being calculated within the framework itself, I would argue it might make sense to update those as part of this PR (if any of them exist).

armi/physics/neutronics/parameters.py
assignInBlueprints = "assign in blueprints"
retainOnReplacement = "retain on replacement"
pinQuantities = "pinQuantities"
fluxQuantities = "fluxQuantities"
multiGroupQuantities = "multi-group quantities"
neutronics = "neutronics"
gamma = "gamma"
Member:

There are some parameters that you've given this tag which, on the face of it, don't look like they are associated with gamma. I'm thinking specifically things like powerNeutron, pdensNeutron, etc.

My guess is that they have the gamma tag because they are calculated in the gamma transport step.

To me, this could be a little confusing for someone who comes along later. A description here, or a change to the category name, might be useful.

Contributor (Author):

Yes, it is confusing. If only neutron transport is run without gamma, then the power and pdens parameters are populated. Once gamma is run, then powerGamma and powerNeutron are populated.

I think they have to stay in this category, so maybe I'll include a comment or update the parameter description.

Member:

I would love a docstring or description in the /docs/ somewhere that explains to people what to use these two new categories for.

I mean, I see your description above about going into/out of the uniform mesh. But we want people who come to ARMI in a year to be able to understand this, right?

Contributor (Author):

I added some docstrings to describe most of the parameter categories in this class.
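
For illustration, a hypothetical example of the kind of category docstring being asked for here; this is not the actual text added in this PR, and the class below is only a stand-in for the real parameter category definitions:

```python
# Illustrative only: a sketch of a category docstring like the one requested
# above, clarifying why neutron-named parameters carry the gamma tag.
class Category:
    """Categories used to tag parameter definitions.

    gamma
        Parameters written during the gamma transport step. Note that this
        includes neutron-named parameters such as ``powerNeutron`` and
        ``pdensNeutron``: when only neutron transport runs, ``power``/``pdens``
        are populated, but once gamma transport runs, the split ``powerGamma``/
        ``powerNeutron`` values are populated as well.
    """

    neutronics = "neutronics"
    gamma = "gamma"
```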

armi/reactor/blocks.py
armi/reactor/converters/uniformMesh.py
UniformMeshGeometryConverter._applyCachedParamValues(
destBlock, blockParamNames, cachedParams
)

Member:

My understanding of this change is as follows:

  • Before, we were clearing all of the block parameters on the new, converted assembly, while storing some of them temporarily in a "cache". Then we mapped the specified parameters to the new assembly, and the cached parameters were used to put the other parameters back onto the blocks.
  • Now, we just avoid clearing the parameters in the first place, so the caching of parameters isn't needed for anything. All parameters that are not explicitly specified to be mapped are just left at their initial values?

This seems like a potential source of significant speedup. But it makes me wonder: are we losing something by changing that flow? It seems to me that an expensive step, like clearing all the parameters in the first place, must have been done for a good reason. But I cannot see why that reason might have gone away now...
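
A rough, self-contained sketch of the two flows being compared here (plain dicts stand in for block parameter collections; this is not the actual uniformMesh.py code):

```python
# Sketch only: dicts stand in for block parameter collections.

def mapStateOldFlow(srcParams, destParams, blockParamNames):
    """Old flow: clear everything, cache what was cleared, map, then restore."""
    cached = {k: v for k, v in destParams.items() if k not in blockParamNames}
    destParams.clear()                            # wipe all destination parameters
    for name in blockParamNames:                  # map only the requested parameters
        destParams[name] = srcParams.get(name)
    destParams.update(cached)                     # put the other parameters back
    return destParams

def mapStateNewFlow(srcParams, destParams, blockParamNames):
    """New flow: never clear, so no cache; unmapped parameters keep their values."""
    for name in blockParamNames:
        destParams[name] = srcParams.get(name)
    return destParams
```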

Contributor (Author):

My understanding of the caching is basically the same as yours.

As for whether the caching is necessary, I would break it down into the two directions:

  • Non-uniform source, uniform destination assembly: When the destination assembly is the uniform assembly, it's being created from the source assembly, so it shouldn't have any pre-existing parameters that need to be cached.
  • Uniform source, non-uniform destination: When the destination assembly is the non-uniform assembly, parameter caching would ensure that we don't erase any of the parameters in blockParamNames on the non-uniform assembly by overwriting with null values from the source assembly. But, if we have defined the blockParamNames to be mapped back correctly, then we should only be mapping back parameters that have been set on the uniform assembly. If this setAssemblyStateFromOverlaps is trying to map back parameters that haven't been set, then either the _setParamsToUpdate was not defined correctly or maybe some analysis performed on the uniform reactor failed. _applyCachedParamValues would paper over that error by just retaining whatever value the non-uniform assembly already had for that parameter, but this would create a "silent" bug that would be very difficult to track down (e.g., you have a b.p.power in the database, but you have no way of verifying whether it was mapped back correctly or just retained by caching).

That's my take on it, but it's complicated and I'm definitely curious to hear the opinions of others. I think the short summary is that parameter caching would not be performing any detectable function when everything is working correctly. It would only act when there is a bug in the code or an error during execution, and it would effectively cover up the error without giving any warning or error message.

@john-science (Member) commented Dec 13, 2022:

My major concern with preserving the old caching solution is backwards compatibility for our downstream projects.

But it isn't documented well enough for me to be thrilled about having to support it forever. Michael's description above matches what I see in the code. But the original developer could have saved us, collectively, several hours of digging if it had been better documented.

Let's do better!

@keckler (Member) commented Dec 2, 2022

> I don't know of any parameter accumulation within the framework itself that needs to be updated. The most obvious cumulative parameter is depletion/burnup, and those are calculated on the non-uniform assembly. Aside from depletion/residence/burnup, fluence, DPA (mentioned above), I can't think of any other obvious cumulative parameters.

I agree with this sentiment. I just scanned through the framework parameters and couldn't find any which are cumulative that would also be mapped in the first place. There are some things in the fuelPerformance package that are cumulative (gasReleaseFraction, bondRemoved, cladWastage, totalCladStrain), but I guess those won't be mapped in the first place because they don't satisfy the category qualifications.

So we're probably good there!

Parameter categories that need to be mapped for gamma might include
multigroup quantities (depending on the gamma source production method
selected). Neutronics needs to avoid mapping back cumulative
parameters--even though they aren't mapped in, they can be present on
the uniform assembly because the blocks were deepcopy'd from the
original non-uniform assembly.
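
A sketch of the rule described in this commit message (hypothetical helper and set; the parameter names are reused from this thread, not taken from a real ARMI category):

```python
# Hypothetical sketch of excluding cumulative parameters from the "out" mapping;
# the set below reuses names mentioned in this thread, not a real ARMI category.
CUMULATIVE_PARAMS = {"detailedDpa", "fluence", "residence", "burnup"}

def outgoingParams(candidateParams):
    """Drop cumulative parameters from the set mapped uniform -> non-uniform.

    Even though cumulative parameters are never mapped in, they exist on the
    uniform blocks because those blocks were deepcopied from the non-uniform
    assembly, so they must be filtered out explicitly here.
    """
    return set(candidateParams) - CUMULATIVE_PARAMS

print(outgoingParams({"mgFlux", "detailedDpa"}))  # {'mgFlux'}
```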
@keckler (Member) commented Dec 12, 2022

I pulled this PR into my ARMI app, reran some extensive calculations, and then looked at the calculated DPAs.

I can confirm that this PR resolves the issue I was previously having with DPA occasionally decreasing as time increased. Obviously that was non-physical, and these changes resolve it such that we are now getting physically possible results for DPA.

I can also offer another data point that adds confidence to this change. The model that I ran is a benchmark of real reactor operations, and I was previously getting anomalously low reactivities in later cycles. I had suspected that this was due to the numerical diffusion in the uniform mesh converter, and it turns out that with this fix, core reactivities line up much more closely than before.

So in my book, the changes in this PR are a resounding success. I approve!

@john-science (Member) commented:

> I pulled this PR into my ARMI app, reran some extensive calculations, and then looked at the calculated DPAs.

So... above you were concerned that "there is still work to do in our internal plugins to move the application/calculation of cumulative parameters outside of the geometry conversion entry/exit".

Are your concerns with that resolved?

@keckler (Member) commented Dec 13, 2022

> Are your concerns with that resolved?

Yes, I am sufficiently convinced, after both looking through our internal code and testing this PR, that we are in the clear on that concern.

@LMikeH (Contributor) commented Dec 14, 2022

I'm not sure why, but I'm getting an error in JOYO that I thought I had fixed previously. I checked to make sure my previous changes were still there, and they are. I don't know if any of you have seen this error before:

File "\albert\home\mhuang\site-packages\nala_main_.py", line 26, in main
code = ArmiCLI().run()
File "\albert\home\mhuang\site-packages\framework\armi\cli_init_.py", line 202, in run
return self.executeCommand(args.command, args.args)
File "\albert\home\mhuang\site-packages\framework\armi\cli_init_.py", line 231, in executeCommand
return cmd.invoke()
File "\albert\home\mhuang\site-packages\framework\armi\cli\run.py", line 33, in invoke
inputCase.run()
File "\albert\home\mhuang\site-packages\framework\armi\cases\case.py", line 367, in run
o.operate()
File "\albert\home\mhuang\site-packages\framework\armi\operators\operatorMPI.py", line 74, in operate
Operator.operate(self)
File "\albert\home\mhuang\site-packages\framework\armi\operators\operator.py", line 320, in operate
self._mainOperate()
File "\albert\home\mhuang\site-packages\framework\armi\operators\operator.py", line 329, in _mainOperate
keepGoing = self._cycleLoop(cycle, startingCycle)
File "\albert\home\mhuang\site-packages\framework\armi\operators\operator.py", line 345, in _cycleLoop
halt = self.interactAllBOC(self.r.p.cycle)
File "\albert\home\mhuang\site-packages\framework\armi\operators\operator.py", line 552, in interactAllBOC
return self._interactAll("BOC", activeInterfaces, cycle)
File "\albert\home\mhuang\site-packages\framework\armi\operators\operator.py", line 423, in _interactAll
halt = halt or interactMethod(*args)
File "\albert\home\mhuang\site-packages\framework\armi\utils\codeTiming.py", line 55, in time_wrapper
return_value = func(*args, **kwargs)
File "\albert\home\mhuang\site-packages\framework\armi\physics\neutronics\latticePhysics\latticePhysicsInterface.py", line 108, in interactBOC
self.updateXSLibrary(cycle)
File "\albert\home\mhuang\site-packages\framework\armi\physics\neutronics\latticePhysics\latticePhysicsInterface.py", line 129, in updateXSLibrary
blockList=representativeBlocks, xsLibrarySuffix=self._getSuffix(cycle)
File "\albert\home\mhuang\site-packages\plugins\terrapower-partisn\terrapower\physics\neutronics\partisn\mc2PartisnInterface.py", line 78, in computeCrossSections
executer.run()
File "\albert\home\mhuang\site-packages\plugins\terrapower-partisn\terrapower\physics\neutronics\partisn\partisnExecuters.py", line 417, in run
output = globalFluxInterface.GlobalFluxExecuter.run(self)
File "\albert\home\mhuang\site-packages\framework\armi\physics\executers.py", line 187, in run
self._performGeometryTransformations()
File "\albert\home\mhuang\site-packages\plugins\terrapower-partisn\terrapower\physics\neutronics\partisn\mc2PartisnExecuters.py", line 88, in _performGeometryTransformations
self._computeFineGroupXS()
File "\albert\home\mhuang\site-packages\plugins\terrapower-partisn\terrapower\physics\neutronics\partisn\mc2PartisnExecuters.py", line 99, in _computeFineGroupXS
blockList=self._xsgm.representativeBlocks.values()
File "\albert\home\mhuang\site-packages\plugins\terrapower-partisn\terrapower\physics\neutronics\partisn\mc2PartisnInterface.py", line 117, in computeFineGroupXs
purgeFP=False,
File "\albert\home\mhuang\site-packages\plugins\terrapower-mc2\terrapower\physics\neutronics\mc2\mc2Interface.py", line 236, in _generateXsLibrary
baseList, xsLibrarySuffix, blockList, writers
File "\albert\home\mhuang\site-packages\framework\armi\physics\neutronics\latticePhysics\latticePhysicsInterface.py", line 300, in generateLatticePhysicsInputs
fromWriter = writer.write()
File "\albert\home\mhuang\site-packages\plugins\terrapower-mc2\terrapower\physics\neutronics\mc2\mc2Writers.py", line 411, in write
self._writeComposition(f)
File "\albert\home\mhuang\site-packages\plugins\terrapower-mc2\terrapower\physics\neutronics\mc2\mc2Writers.py", line 601, in _writeComposition
self._writeSingleComposition(fileObj, compNumber, component)
File "\albert\home\mhuang\site-packages\plugins\terrapower-mc2\terrapower\physics\neutronics\mc2\mc2Writers.py", line 606, in _writeSingleComposition
allNuclides = self._getAllNuclideObjects(component)
File "\albert\home\mhuang\site-packages\framework\armi\physics\neutronics\latticePhysics\latticePhysicsWriter.py", line 187, in _getAllNuclideObjects
nucs, fissProds = self._getAllNuclidesByCategory(component)
File "\albert\home\mhuang\site-packages\framework\armi\physics\neutronics\latticePhysics\latticePhysicsWriter.py", line 204, in _getAllNuclidesByCategory
dfpDensities = self._getDetailedFPDensities()
File "\albert\home\mhuang\site-packages\framework\armi\physics\neutronics\latticePhysics\latticePhysicsWriter.py", line 370, in _getDetailedFPDensities
dfpDensitiesByName = lfpCollection.getNumberDensities(self.block)
AttributeError: 'NoneType' object has no attribute 'getNumberDensities'

@keckler (Member) commented Dec 14, 2022

@LMikeH I'll message you offline.

@john-science (Member) left a comment:

I think this is a good solution, in that it solves a semi-complicated problem.

The solution is as complicated as the problem. But sometimes there is just no way around that.

As long as we are all convinced by the documentation we've put in place to explain it to future generations, I'm happy.

@mgjarrett merged commit 0d039f2 into terrapower:main on Dec 14, 2022
Labels: bug (Something is wrong: Highest Priority)
Successfully merging this pull request may close: Cumulative parameters as they relate to numerical diffusion during uniform mesh conversion
4 participants