
Check divu #66

Merged: 66 commits merged into development from check_divu on Mar 31, 2022
Conversation

nickwimer (Collaborator):
This branch implements the pressure rise due to enclosed mass injection by summing the velocity flux over all domain boundary faces and incorporating the correction in adjustPandDivU.
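As a rough illustration of the approach, here is a minimal sketch of summing the face-normal velocity flux over the domain boundary with AMReX (the function name sumUmacBoundaryFlux and its exact signature are illustrative, not the PR's actual code):

```cpp
// Illustrative sketch (not the PR's code): net volumetric flux through all
// domain boundary faces, computed from the face-centered MAC velocities.
#include <AMReX_MultiFab.H>
#include <AMReX_Geometry.H>
#include <AMReX_Loop.H>
#include <AMReX_ParallelDescriptor.H>

using namespace amrex;

Real sumUmacBoundaryFlux (const Array<const MultiFab*,AMREX_SPACEDIM>& umac,
                          const Geometry& geom)
{
    const Box& domain = geom.Domain();
    const Real* dx = geom.CellSize();
    Real netFlux = 0.0;

    for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) {
        // Area of a face normal to direction idim.
        Real area = 1.0;
        for (int d = 0; d < AMREX_SPACEDIM; ++d) {
            if (d != idim) { area *= dx[d]; }
        }

        // Face-centered boxes covering the low and high domain faces.
        const Box loFace = bdryLo(domain, idim);
        const Box hiFace = bdryHi(domain, idim);

        for (MFIter mfi(*umac[idim]); mfi.isValid(); ++mfi) {
            Array4<Real const> const& u = umac[idim]->const_array(mfi);

            // Flux into the domain is +u on the low face, -u on the high face.
            const Box blo = mfi.validbox() & loFace;
            if (blo.ok()) {
                LoopOnCpu(blo, [&] (int i, int j, int k) {
                    netFlux += u(i,j,k) * area;
                });
            }
            const Box bhi = mfi.validbox() & hiFace;
            if (bhi.ok()) {
                LoopOnCpu(bhi, [&] (int i, int j, int k) {
                    netFlux -= u(i,j,k) * area;
                });
            }
        }
    }
    ParallelDescriptor::ReduceRealSum(netFlux); // sum over MPI ranks
    return netFlux;
}
```

Conceptually, dividing this net boundary flux by the uncovered domain volume gives the mean divergence that the pressure correction in adjustPandDivU has to absorb.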

nickwimer and others added 30 commits December 2, 2021 13:39
…on; running, but something isn't quite right...
…on; running, but something isn't quite right...
…o check_divu

merging development updates into check_divu
* Remove managed arg. from ProbParm

* Fix SunMem init.

* Add a 3D input file to FlameSheet.

* Add timer on SDC k update portion.

* Remove mask from inst. RR calculation. EB is handled using the box flag and
isCovered.

* Remove Managed arg from RegTests.

* Remove managed arg. from TripleFlame case.
* Fix Efield BCfill with PMF changes.

* Update 1D efield flame source code.
* Enable initializing data from a pltfile, possibly a PeleLM one, as
long as the mass fractions are stored there.

* Add Extern/amrdata to the list of AMReX packages.
* Set up Tprof regions in advance/oneSDC to get clearer profiling data.

* Pass MAGMA flag to TPL and add CUDA ptxas optim. off flag.
* Add a Multi-Component diffusionOp to class.

* add ncomp to diffusionOp constructor, default to 1

* getMCDiffusionOp function: if ncomp != m_ncomp, reset the operator (see the sketch below).

* Reset mcdiffusionOp during regrid.

* Enable DiffusionOp::diffuse_scalar, DiffusionOp::computeDiffFluxes and DiffusionOp::computeDiffLap
with multiple components.

* Switch to getMCDiffusionOp when dealing with species.
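A minimal sketch of the lazy accessor pattern these commits describe (the member and class names here are assumptions based on the messages above, not the actual PeleLMeX code):

```cpp
// Hypothetical sketch: return a multi-component DiffusionOp, rebuilding it
// only when the requested component count differs from the cached one.
DiffusionOp* PeleLM::getMCDiffusionOp (int ncomp)
{
    if (!m_mcdiffusion_op || m_mcdiffusion_op->ncomp() != ncomp) {
        // Reset the operator (also done on regrid, per the commits above).
        m_mcdiffusion_op = std::make_unique<DiffusionOp>(this, ncomp);
    }
    return m_mcdiffusion_op.get();
}
```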

* NewStateRedist was removed from AMReX-Hydro
* Overhaul level data structure, introducing state MF in place of separate MF
for velocity, density, species, ...
Fix a bug in the averageDownState function, which previously did nothing ...

* Restore TurbInflow. Change the process slightly: s_ext now contains
the turbulence data coming into bcnormal, so that the user can decide to
locally overwrite the turbulence if needed.

* Set level data I_R to zero if we are not doing divu iters, or it could
contain junk.
* Remove call to MemHelper initialize.

* Add USE_SUNDIALS as preamble to Make.PeleLMeX.

* Fix change in the way umac ghost faces are filled.

* Update submodules.
* Update the convected Species/Temperature Gaussian test inputs.

* Add missing mean flow direction in Convected Gaussian.
* Add parameter controlling the Gaussian width based on a diffusion process.

* Add parsing.

* Update initialization to properly set up the diffusion problem.

* Small change to pprocConvOrder.py to enable hacking the reference solution.

* Add an input file to generate the reference analytical solution.

* Fix input.2d_DiffGauS for convergence testing.
* Fix parsing of tagging input for box.

* Couple of fixes for turbinflow errors introduced while re-arranging the
level data.

* Fix TurbInflow case to match changes in turbinflow velocity field handling.

* Add an input file with refinement on loZ.
nickwimer closed this Mar 24, 2022
nickwimer reopened this Mar 24, 2022
nickwimer marked this pull request as ready for review March 24, 2022 17:35
nickwimer requested a review from esclapez March 24, 2022 17:35
esclapez mentioned this pull request Mar 29, 2022
    addRhoYFluxes(GetArrOfConstPtrs(fluxes[0]), geom[0]);
}
// Compute face domain integral for U
if (m_sdcIter == m_nSDCmax) {
Collaborator:

Shouldn't the Umac divergence be done at every SDC step?

Collaborator (Author):

Yeah, good point. Changing...
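The resulting change might look like the following sketch (the helper name addUmacFluxes is an assumption; the point is only the removal of the SDC-iteration guard):

```cpp
addRhoYFluxes(GetArrOfConstPtrs(fluxes[0]), geom[0]);

// Compute the face domain integral for U on every SDC iteration, not just
// when m_sdcIter == m_nSDCmax, since divu is re-evaluated each SDC step.
addUmacFluxes(GetArrOfConstPtrs(umac), geom[0]);  // hypothetical helper name
```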

Comment on lines 306 to 314

Vector<MultiFab> dummy(finest_level+1);
for (int lev = 0; lev <= finest_level; ++lev) {
    dummy[lev].define(grids[lev], dmap[lev], 1, 0, MFInfo(), *m_factory[lev]);
    dummy[lev].setVal(1.0);
}

m_uncoveredVol = MFSum(GetVecOfConstPtrs(dummy), 0);

Collaborator:

m_uncoveredVol should already be computed by the time you get here, no?
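If the volume is already computed at initialization or regrid, the block above could be reduced to a guard along these lines (a sketch; the "not yet computed" sentinel convention is an assumption):

```cpp
// Recompute the uncovered volume only if it has not been set yet
// (e.g. initialized to -1.0 and filled at init/regrid).
if (m_uncoveredVol <= 0.0) {
    Vector<MultiFab> dummy(finest_level+1);
    for (int lev = 0; lev <= finest_level; ++lev) {
        dummy[lev].define(grids[lev], dmap[lev], 1, 0, MFInfo(), *m_factory[lev]);
        dummy[lev].setVal(1.0);
    }
    m_uncoveredVol = MFSum(GetVecOfConstPtrs(dummy), 0);
}
```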


// subtract \tilde{theta} * Sbar / Thetabar from divu
for (int lev = 0; lev <= finest_level; ++lev) {
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif

Collaborator:

This empty line might mess with the #pragma omp parallel if.

Collaborator (Author):

deleting...
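For reference, the usual AMReX idiom keeps the pragma directly above the loop it is meant to parallelize (a sketch; the MultiFab name divu and the loop body are stand-ins):

```cpp
// Subtract \tilde{theta} * Sbar / Thetabar from divu; the pragma is placed
// immediately before the MFIter loop it parallelizes, with no blank line.
for (int lev = 0; lev <= finest_level; ++lev) {
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
    for (MFIter mfi(divu[lev], TilingIfNotGPU()); mfi.isValid(); ++mfi) {
        // ... apply the correction on mfi.tilebox() ...
    }
}
```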

Comment on lines 31 to 34
for (int idim = 0; idim < AMREX_SPACEDIM; idim++) {
    m_domainUmacFlux[2*idim] = 0.0;
    m_domainUmacFlux[2*idim+1] = 0.0;
}
Collaborator:

This should be done regardless of the m_do_temporal status, right?

Collaborator (Author):

Pulling it out and placing it in PeleLMAdvance.
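The agreed change, sketched (its exact placement inside the advance routine is an assumption):

```cpp
// In PeleLMAdvance: reset the accumulated domain-face Umac fluxes at the
// start of every time step, independent of the m_do_temporal diagnostics flag.
for (int idim = 0; idim < AMREX_SPACEDIM; idim++) {
    m_domainUmacFlux[2*idim] = 0.0;
    m_domainUmacFlux[2*idim+1] = 0.0;
}
```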

esclapez merged commit 375e861 into development Mar 31, 2022
esclapez deleted the check_divu branch March 31, 2022 18:44