Vlct integrator #62
Conversation
…ut EnzoMethodVlct's compute method. Decided not to have reconstructor deal with face-centered b-fields and not to reuse the same density and B-fields for primitive and conserved values. Also removed the const keyword from method arguments.
…r commit). Implemented the process of saving the face-centered B-fields over the relevant reconstructed primitive. Created the EnzoReconstructorPLM class which implements the piecewise linear reconstruction of primitives.
…ncomplete. (The commit is being made to allow for remote work).
…Also committed the partially completed EnzoRiemannHLLE declaration (should have been committed previously). (The code base still does not compile)
Implemented calculation of cell centered E-fields EnzoRiemannHLLE: Identified an error in the iteration
Implemented calculation of edge-centered Electric fields EnzoRiemannHLLE: Fixed bug in iteration limits
…de to the iteration of EnzoRiemannHLLE
…fields using edge-centered E-fields.
Implemented calculation of cell-centered B-field EnzoMethodVlct Implemented addition of flux divergences to update conserved quantities. Also added calls to EnzoConstrainedTransport to update the B-fields.
…to construct a 1D linear MHD wave rotated on an angle to test EnzoMethodVlct in 2D and 3D.
… a pointer that represents a multi-dimensional array and provide simplified access to elements. EnzoMethodVlct will be rewritten to make use of this class. Implemented machinery to actually setup the fields for EnzoInitialLinearWave. Still need to setup values based on array type. The current implementation is probably more complex than it needs to be.
…lementing the Vector Potential, setting up parameters, and setting up initialization within the framework.
initializing EnzoInitialLinearWave. Also updated EnzoMethodVlct's constructor to accept gamma, a fluid's adiabatic index (EnzoProblem was updated accordingly).
…d-centered. Henceforth, face-centered temporary fields in EnzoMethodVlct (and relevant classes) will be treated as though they have no values on the exterior of the grid. EnzoConstrainedTransport: Propagated changes to face-centering to the compute_edge_efield instance method and generalized it to work with 2D and 3D grids. (The rewrite is more transparent than the original implementation, but it's still complex). Rewrote compute_center_efield to make use of EnzoArray. Need to rewrite the update_bfield and compute_center_bfield to be consistent with the change to face-centered temporary fields. EnzoArray: No implementation changes. Just added a note about issues with the operator() method. EnzoEOSIdeal: No implementation changes. Just added a note about wrapping the EnzoComputePressure object in the future. Began propagating this change to EnzoConstrainedTransport.
Next bunch of comments. Will finish tomorrow.
  - specific total energy
* - ``"bfield_x"``
  - ``enzo_float``
  - [rw]
- [rw]
+ [r]
The cell-centered field should probably not be modified directly, should it?
I see where you are coming from with this comment and I agree with it (in principle).
However, as things stand the user is responsible for making sure the cell-centered B-field is properly initialized. If they don't, the program won't work properly.
I do think that it would probably be good to remove this point of failure. Maybe in the future, we could try to make sure the cell-centered bfield is always computed at the end of the initialization, before the first execution of `EnzoMethodMHD::compute`. But I think that may be beyond the scope of this PR. (I'm happy to open an issue on this topic so that we don't forget.)
This subsection details the available interpolation methods for
reconstructing the left and right states of the cell-centered
interfaces. Presently, all available methods perform reconstruction
on cell-centered primitive quantities,
Just to close the loop to one of the comments above: As you talk about primitive quantities here (which is also what I'm familiar with and have seen in many textbooks), the distinction between integrable and reconstructable above seems to be disconnected.
I'm generally in favor of just using one consistent notation in the code.
Domain {
   lower = [0.0, 0.0, 0.0];
   upper = [1.0, 1.0, 0.167];
}

Mesh {
   root_rank = 3; # 3D
   root_blocks = [2,2,2];
   root_size = [192,192,6]; # number of cells per axis
}
Are the individual cells not cubic on purpose?
Stupid question: could you clarify what you mean about cubic? Are you talking about individual cells being cubes?
Yes, I meant cells being cubes in terms of physical dimensions, which is used in every other test (IIRC) but here (1/192 != 0.167/6).
No, there's no good reason for this. I can update this if you would like. I'm pretty sure this was arbitrary since it's a 2d problem.
I don't remember exactly what I did here. I imagine I was trying to make z-axis have a length of 1 or something and I wasn't thinking about it right.
#
# These tests draw loose inspiration from Linear Wave tests used by Athena++.
# Specifically, this script:
# 1.) Checks that the L1-norm of the error is appropriate for the inclined
As discussed during the call, a test for the L_inf norm would also be great, but this may also be better suited for the test framework overhaul.
Feel free to resolve this comment and open an issue to keep track of this.
As I recall, that conversation had more to do with testing flux corrections.
But I do agree that having L_inf norm checks would probably be good (although I imagine it would be easier to implement them with axis-aligned waves). I'll be sure to add this to the running list of testing improvements (that I'll create an issue for).
Next bunch of comments. More later.
inline void prep_slices_(const CSlice* slices, const intp shape[],
                         const std::size_t D, CSlice* out_slices)
{
  for (std::size_t i=0; i<D; i++){
    intp start = slices[i].get_start();
    if (start < 0){
      start += shape[i]; // start was negative
    }
If I see this correctly, then the slices are not directly modified through `slices` but through `start`. While technically correct, the `const` for the parameter may be misleading, or am I missing sth?
I'm not quite sure what you're talking about here. The `slices` argument is not actually modified. What is being modified is `out_slices`.
// Like arrays in Athena++, the indices are listed in order of increasing
// access speed. Imagine a 3D array with shape {mz,my,mx}: array(k,j,i) is
// equivalent to accessing index ((k*my + j)*mx + i) of the pointer.
// Dimensions are numbered with increasing indexing speed (dim0, dim1, ...)
I usually think of dim0 as the "first" dim (i.e., `x`), how about

// Dimensions are numbered with increasing indexing speed (..., my, mx)

which would then also match the array def mentioned two lines above?
/* Hydro & MHD Fields directly related to Advection */ \
ENTRY(      density, SCALAR, FieldCat::conserved, T) \
ENTRY(     velocity, VECTOR, FieldCat::specific,  T) \
ENTRY( total_energy, SCALAR, FieldCat::specific,  T) \
I'm also still a little confused about the various notations used (likely related to the original Enzo's use of specific quantities mixed with Athena++ notation of primitive/conserved).
From a physical point of view "total mass" should be specific and conserved (though I'm not quite sure if that matches the intention here).
/* Hydro & MHD Fields directly related to Advection */ \
ENTRY(      density, SCALAR, FieldCat::conserved, T) \
ENTRY(     velocity, VECTOR, FieldCat::specific,  T) \
ENTRY( total_energy, SCALAR, FieldCat::specific,  T) \
Now that I think more about it, it may be worth bringing this up during the dev sprint, i.e., decide on one notation throughout the code base and then make everything consistent in a separate PR.
I wanted to address all of the comments on Reconstructable/Integrable/Conserved/Primitive quantities. I've known for a while that this would be confusing, but I felt like I needed to bounce my ideas off of other people (who were familiar with my code). I kind of lost my enthusiasm while waiting for people to review this PR (but I'm finding it again). Apologies for not writing this up sooner; I actually started to write this explanation multiple times over the last month and found it difficult to explain (which isn't a great sign). Please let me know if anything doesn't make sense. Additionally, please let me know what your thoughts are about this topic. I do want to discuss this (although I realize any discussion will probably be minimal before merging this in).

Main Explanation
You guys are right that the use of Reconstructable/Integrable/Conserved/Primitive quantities is confusing. I hate to give a long answer on this, but I feel like some background helps (especially since my opinion on the matter has changed):
Because some of you are unfamiliar with Enzo's conventions, it tracks the following hydro quantities:
There's a second piece to this. The changes introduced in this PR are the first C++ code that I wrote. Because I was uncomfortable with the language, I really leaned into trying to reuse much of the existing Enzo-E machinery. The main result is that all operations internal to the VLCT integrator are performed on fields (I now view this as a design flaw, and it's part of the reason why I'm switching from tracking Groupings of Fields to maps of CelloArrays in this other branch). A natural consequence of this philosophy was that it made sense to treat the quantities exactly as they are treated outside of the integrator. With that in mind, classifying quantities as conserved or primitive doesn't make much sense if we are directly updating the specific total energy and velocity (plus, specific total energy doesn't really fit into either category, now does it?). Trying to minimize the necessary changes, I introduced some new vocabulary:
I shifted the codebase so that in most cases where we dealt with quantities in conserved form, we dealt with the integrable quantities. Likewise, in places where we dealt with primitive quantities, we now used reconstructable quantities. While making these changes, I noticed that all of the integrable quantities are either:
I keep track of this information in

Going Forward
Addressing this point is going to be a slightly longer-term goal (especially if we want to get it merged in by tomorrow night). I'll definitely do what I can in the next day or so, but I think we need to defer some of it to future PRs. @pgrete, I know that at one point you talked about maybe modifying Enzo-E so that it largely tracked conserved and specific quantities. I'm totally fine with that approach and I think that makes cleaning up this terminology straightforward. Alternatively, if you think it would make sense to just track the quantities within the integrator as primitive or conserved, that's probably fine too (although I have some concerns). Even if we don't do that, I think some of the ideas here should definitely stay.

Semi-Related Comment About EOS class
I've long thought that the equation of state class is in need of a large overhaul, though I think it's generally a useful abstraction to have. It was the first thing I designed. It was written around the idea of converting between conserved and primitive quantities (like Athena++). It also keeps track of the quantity floors and enforces the dual energy formalism (when applicable). Some of the issues stem from:
Once this gets merged in, I would definitely like to have a conversation about improving the EOS class (I have some ideas in mind and would like to bounce them off of other people). EDIT: @pgrete I just saw your new comments now. I think that this may address a couple of them.
@dcollins4096 which of my comments did you delete and why (rather than "resolving")? PS: for other people seeing this thread, the black triangles in @mabruzzo's previous comment can be clicked to expand. I didn't know that and it took some time to figure this out (as it was also not shown in the email notification), so that's why I mention it.
This is very nice code.
- My biggest request is to abstract the CT reconstruction, so it's swappable at run time, in the same way Reconstruct and Riemann are, and for the same reasons. I don't think you need to hold up the PR for it, but it will make your life easier in the future.
- The magnetic energy density is not correct in the pressure calculation for rank<3. (I seem to recall that 1d and 2d aren't supported, though)
Other things:
- It would be very useful to install a Dedner MHD. I believe that it can be done relatively quickly, given the machinery you have built. Don't hold up the PR for this, but the relative reward-to-effort would be high, since Dedner tends to be more stable than CT.
- I agree that the "Reconstructable" grouping is confusing. I think that keeping separate groups for Conserved and Primitive variables is quite useful as a developer. But everything is reconstructable. (There's a good Balsara paper on that topic.)
- For the future, I do not see (knock on wood) any major road blocks to getting this ready for AMR other than the scavenging of fluxes and electric fields for the coarse-fine interface.
Grouping *conserved_passive_scalars = cello::field_descr()->groups();

// allocate constrained transport object
EnzoConstrainedTransport ct = EnzoConstrainedTransport();
This should be abstracted to support other CT methods in the same way the reconstruction method and riemann solver are. I've found over the years that being able to change the electric field reconstruction is just as important as the reconstruction on the other quantities, especially when numerical instabilities crop up.
It's actually my intention to allow for generalization on this. I just didn't include it in this PR because there weren't alternative implementations to generalize with.
In fact, in my development branch (which I hope to submit for merging in the next week or so), I've done some refactoring so we can turn magnetic fields off altogether.
I think it's ok to use with only one instance. For other implementations, I got a lot of mileage out of the variations in the Gardiner&Stone 2005 paper, one is sharper than the other, but unstable, as you'd expect.
p[i] = gm1 * d[i] *
  (te[i] - 0.5*vx[i]*vx[i]);
enzo_float ke = 0.5*vx[i]*vx[i];
enzo_float me_den = mhd ? 0.5*bx[i]*bx[i] : 0.;
Magnetic energy needs by[i]*by[i]+bz[i]*bz[i] for both rank==1 and rank==2 cases
Really? Even if there are no y or z components of the velocity?
Edit: And you're right, the integrator doesn't actually support 1D or 2D
Yeah, e.g. a converging flow (say a step function in v_x) with a transverse magnetic field, B_y. The field gets amplified by flux freezing, and provides pressure against the pancake. Or Alfven waves, which have fluctuations in By and vy. To properly treat lower dimensionality with MHD, you need vy and vz, as well, the only thing that goes to zero are gradients in y and z. The cross products in Lorentz and Induction couple the transverse and longitudinal motions. Getting lower dimensionality to work is probably worth your effort as a debug tool at some point, but it's kind of a headache.
// but the outer values have not
stale_depth+=reconstructor->delayed_staling_rate();
}
For the future: I believe that the only thing necessary to ready this routine for AMR is to insert code HERE to scavenge Electric Field and Fluxes from the surface for flux correction.
Thanks, yes, you did.
Hi-
I deleted one of my own comments because I thought it would be better as part of the review, rather than just a comment. I’m new to github, sorry about that!
d.
Gotcha, so this was my reply to your comment on the 1D and 2D pressure calculation.
Oh, odd, I didn’t see the reply. I may have had that window open a while? Sorry about that! The same comment is back again.
I haven't gotten very deep into the weeds of this code and its performance, but it looks like it's set up for success. I don't see that this implementation has any major obstacles. The abstraction layers make sense, the methods implemented are good.
This is my last bunch of comments.
First of all a big thank you to @mabruzzo . This is a tremendous piece of work!
Similar to @forrestglines and @dcollins4096 I think that this looks good to get merged in soon (pending the minor updates you're currently working on).
Other pieces we raised during the review are better addressed in separate (smaller) PRs once discussed (potentially among the larger community).
I'll try to add a field loop advection test for Thursday as my own exercise working with the new infrastructure, but this should not hold off this PR.
//----------------------------------------------------------------------

EFlt3DArray EnzoFieldArrayFactory::interior_bfieldi(Grouping &grouping, int dim)
I think this function could be refactored/renamed/get a more explanatory comment in the future, if I understood its purpose correctly. It's only ever used once (in `enzo_EnzoMethodMHDVlct.cpp`) and there, in practice, not to extract the "interior" but a larger fraction (by temporarily increasing the stale depth of the array factory). Also, from a typical convention point of view, I would expect "interior" to refer to all "active" cells/interfaces, i.e., not the ghost ones, but here the ghost zones and the outermost active interface are removed.
I'll think about a better way to describe this. But to explain the name:

- If a cell-centered field has N elements along a given axis, then a face-centered field must have either N-1 or N+1 elements along that axis. The latter case includes the values on the outermost faces of the array. Basically, this function returns the arrays excluding those outermost faces.
- I was doing this because the reconstructed fields never include values on those outermost faces. But maybe I can avoid this.
Ok, to address this I made the following 2 changes:

- I renamed the method to `bfieldi_without_outermost_block_faces`
- I added more details in the documentation of the method

I got rid of this method anyway in the new branch of the code.
right_val = EnzoEquationOfState::apply_floor(val - half_dv,
                                             prim_floor);
left_val = EnzoEquationOfState::apply_floor(val + half_dv,
                                            prim_floor);
From a physical point of view, wouldn't it make more sense to actually limit the slope so that the reconstructed values are at most at the floor, rather than setting those value to the floor values?
Limiting the slopes would also ensure conservation of the reconstructed variable.
This might make sense, but I really don't want to touch this before merging in. It seems like the exact sort of thing that we could break if we try to do this quickly before merging it in. I will happily open an issue on the topic.
- made EnzoEquationOfState::apply_floor an inline function
- changed the variable name of the ith component of the momentum in enzo_riemann_utils::active_fluxes
- removed some old debugging code from compute_edge_ in enzo_EnzoConstrainedTransport.cpp; also fixed a typo in the comments documenting the function
- labelled all of the methods of EnzoPermutedCoordinates so that they are inline
- cleaned up some outdated comments in method_vlct_x_rj2a_N256.in, method_vlct_y_rj2a_N256.in, method_vlct_z_rj2a_N256.in, run_MHD_shock_tube_test.py, and input/vlct/vlct.incl
- removed an incorrect comment from input/vlct/run_HD_linear_wave_test.py and from input/vlct/run_MHD_linear_wave_test.py
- added a more detailed reference to the book used in input/vlct/dual_energy_shock_tube/generate_shock_soln.py
- removed EnzoEOSIdeal::copy_passively_advected_fields_ as it was completely unused
- added a clarification comment to input/vlct/dual_energy_shock_tube/compute_align_size.py
- marked all functions in enzo_EnzoRiemannUtils.hpp as inline
- fixed a typo in a comment of EnzoMethodMHDVlct
Co-authored-by: Philipp Grete <grete@pa.msu.edu>
- fixed a mistake in the array documentation and added more clarity to a description
- added a safety assertion to the increment_outer_indices_ helper function in array_CelloArray.hpp
- fixed a potential bug in EnzoInitialCloud.cpp
- modified the order of coordinates in EnzoPermutedCoordinates so that it's consistent across methods; propagated these changes to EnzoPermutedCoordinates and EnzoReconstructorNN
- changed the name of EnzoFieldArrayFactory::interior_bfieldi to EnzoFieldArrayFactory::bfieldi_without_outermost_block_faces to reduce confusion
Unfortunately, I kind of ran out of gas before I could address the remaining points, but I think I hit almost all of the small stuff. In particular, I'm deferring the following to the future
In the morning I'll make sure that we weren't missing anything else too significant.
…entation of EnzoPermutedCoordinates).
I think each of these would make good follow-up PRs on their own. We should make issues after the meeting today for each of them.
…estart; moving block->compute_done() call outside brackets
Reopening PR #9 - Now, GitHub's interface just reflects changes unique to this branch. This adds the implementation of the VL + CT (van Leer + Constrained Transport) MHD integrator detailed in Stone & Gardiner (2009).
Brief summary of the code included in this pull request:
EnzoMethodMHDVlct
class.CelloArray
class has been defined to encapsulate the array class and is used to write the bulk of operations in this method more clearly and concisely (and can either wrap around previously allocated data or own its own data).src/Cello/test_CelloArray.cpp
. Although they don't provide complete coverage over all use-cases, they are extensive enough to be useful for preventing regressions in the design. Extending coverage would probably be useful in the future. These unit tests have been correctly added tosrc/Cello/SConscript
and can be built by invoking./build bin/test_CelloArray
from the root directory (like all existing unit tests). They have also been added totest/SConscript
so that the unit test is run with all other unit tests whenmake test
is invoked.EnzoRiemann
,EnzoReconstructor
, andEnzoEquationOfState
. These classes form the basis of a loose framework that can easily (and selectively) be reused (and extended) in other hydrodynamic/MHD methodsEnzoRiemannHLLE
,EnzoRiemannHLLC
,EnzoRiemannHLLD
), nearest neighbor and PLM reconstruction algorithms (EnzoReconstructorNN
,EnzoReconstructorPLM
), and an equation of state for an ideal gas (EnzoEOSIdeal
)EnzoEquationOfState
sub classes are also responsible for applying the dual-energy formalism, if applicable (EnzoEOSIdeal
does in fact support this)EnzoIntegrableUpdate
to allow for reuse in other methods.PUP
methods.EnzoConstrainedTransport
class.EnzoInitialInclinedWave
class initializes inclined linear HD/MHD waves and inclined circularly polarized Alfven waves, which are used for debugging and convergence tests.EnzoInitialShockTube
class initializes axis-aligned shock tube test problems. Currently it initializes the Sod shock tube problem and the shock tube shown in figure 2a of Ryu & Jones (1995).EnzoInitialBCenter
class sets up the cell-centered magnetic fields from previously initialized face-centered magnetic fields OR sets up both the cell-centered and face-centered magnetic fields from expressions specified in the configuration file for components of the magnetic vector potential. It also provides the ability to update the values of a partially initialized"total_energy"
field with the values of the initialized magnetic energy. (This convenience is provided since the cell-centered magnetic field is an average of the face-centered magnetic energy, and therefore may be non-trivial to compute)EnzoInitialCloud
class sets up a 3D spherical cloud embedded in a hydrodynamic/MHD wind.EnzoCenteredFieldRegistry
is a utility class that holds functions that assist with the initialization of theEnzoReconstructor
andEnzoRiemann
subclasses. It houses a list of cell-centered quantities that should be updated as new fields are added that are relevant to either of those classes (e.g. internal energy or cosmic ray energy) get added. It also provides methods useful for initializingEnzoMethodMHDVlct
.EnzoPermutedCoordinates
andEnzoFieldArrayFactory
are utility classes. The former holds functions that assist with writing directional operations mesh-based operations (e.g. reconstruction, flux calculations), that are generalized with respect to direction. The latter houses methods that simplify the construction ofCelloArray
instances that wrap fields.Documentation update
EnzoMethodMHDVlct
method (referred to in the parameter file as "mhd_vlct") and the the new initial conditionsCelloArray
(this was adapted from the proposal circulated on slack).New Tests
Six sets of repeatable tests have been included in the `input/vlct` sub-directory. These tests exist as standalone python scripts that print details about whether individual tests have passed and that exit with error codes signaling whether or not the full set of tests has passed. The scripts are:

- `run_MHD_linear_wave_test.py` - this checks the L1 error norms of several MHD waves after propagating at an oblique angle to the simulation domain. Note that one of these tests fails. In that test, the L1 error norms of an inclined left-propagating and right-propagating fast magnetosonic wave are compared - the norms of the L1 error vector should be identical, but there is a slight discrepancy (the values are 3.30e-8 & 3.28e-8). While this should be addressed (it probably points to a minor issue in the constrained transport), we don't think it should stop the acceptance of this pull request because all of the other tests pass. When this test currently fails, the user is notified of the failure and of the fact that the test has never passed; the return code of this script is not affected by the failure of this test. Once the pull request is accepted, I plan to open an issue about this topic.
- `run_MHD_shock_tube_test.py` - this checks the L1 error norms of the axis-aligned MHD shock-tube test problem taken from figure 2a of Ryu & Jones (1995). The test is repeated for the x, y, and z axes.
- `run_passive_advect_sound_test.py` - this checks the norm of the L1 error of a passively advected scalar after it has been advected by an axis-aligned sound wave for a full wavelength. The test is repeated for the x, y, and z axes.
- `run_HD_linear_wave_test.py` - this checks the L1 error norms of several HD waves after propagating at an oblique angle to the simulation domain. Magnetic fields are uniformly set to zero while this runs.
- `run_dual_energy_shock_test.py` - runs several variants of the Sod shock-tube test in a supersonic reference frame to test the implementation of the dual energy formalism.
- `run_dual_energy_cloud_test.py` - runs a cloud-crushing problem (a cloud embedded in a hot wind) for ~30 cycles (technically it runs for a sixteenth of a "cloud crushing time") and checks the asymmetries that develop. This turns out to be a useful test of the dual energy formalism.

These tests don't lend themselves to being implemented in the test suite in its current state (the test suite currently just uses the built-in functionality of enzo-e to check the final times/cycles of a simulation). Including these tests in the existing test suite may require some rewriting of how tests are counted. To help make the tests easier to adapt to the final testing framework, the majority of them employ a command line program, `tools/l1_error_norm.py`, to compute norms of the L1 error. All of these python scripts are written with python3 because (at least at one point) bugs were present in `yt`'s enzo-p frontend when using python2 that were not present when using python3. Unfortunately, this introduces some additional incompatibility with the existing test suite since the employed version of SCons is only compatible with python2.

A self-contained temporary solution has been provided to make sure that these tests are run even though they are not yet formally part of the testing framework. This solution consists of:
- The addition of `test/run_vlct_test.sh` to run each of the above tests. For each of the tests, the script pipes the stdout and stderr into a temporary file, named `test/<test-name>.test_log`, and prints whether the test has passed or failed. After running all of the tests, the script prints whether any have failed and reflects this in its exit code.
- Modification of `build.sh` to call this script when `./build.sh test` is executed and `"$CELLO_PREC" = "double"`. It is called after the existing test suite has finished running (and after the results for those tests are printed). If any of the vlct tests failed, the exit code of `build.sh` is set to 1.
- Modification of `build.sh` to delete any temporary files that hold the test outputs when `./build.sh clean` is executed.
- Modification of `.circleci/config.yml` to set up the necessary python environment for these tests. Because I encountered some difficulties with getting parallel simulations to run, an environment variable is also set to skip `run_dual_energy_shock_test.py` (it was necessary to run this test in parallel so that it completed in a reasonable amount of time). I don't think we are losing much coverage by skipping this test.

The file names used to store the test outputs and the way that these test results are counted intentionally differ from the existing framework so that I don't break the existing machinery. Note: if the way these tests have been added to the continuous integration is going to block the approval of this PR, my preference would be to move this part of the request to a separate PR.
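To illustrate the kind of comparison these scripts rely on, here is a minimal pure-python sketch of computing the norm of an L1 error vector between two snapshots. The snapshot representation and field names are hypothetical; the actual `tools/l1_error_norm.py` operates on real simulation outputs and does considerably more:

```python
import math

def l1_error_norm(fields_a, fields_b):
    """Return the norm of the L1 error vector between two snapshots.

    Each argument maps a field name to a flat list of cell-centered
    values. For every field we compute the mean absolute difference
    (the L1 error), then take the Euclidean norm of the resulting
    vector of per-field errors.
    """
    assert fields_a.keys() == fields_b.keys()
    per_field = []
    for name in sorted(fields_a):
        a, b = fields_a[name], fields_b[name]
        per_field.append(sum(abs(x - y) for x, y in zip(a, b)) / len(a))
    return math.sqrt(sum(e * e for e in per_field))

# toy usage: identical snapshots give a norm of exactly 0.0
snap = {"density": [1.0, 1.0, 1.0, 1.0],
        "pressure": [0.6, 0.6, 0.6, 0.6]}
print(l1_error_norm(snap, snap))  # -> 0.0
```

A test like the fast magnetosonic comparison above would compute this norm once for the left-propagating wave and once for the right-propagating wave, and assert the two values agree.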
Additionally, an input file is provided, in `input/vlct/orszag-tang`, to run the Orszag-Tang vortex test problem. There is no automated check that the results are correct (that's probably something to be added once an analogue to Enzo's answer-testing framework is provided). For convenience, a script is provided that produces two plots from the simulation results that can be compared with two figures in Stone et al (2008).

## Future Work
In a separate branch, I have begun making additional changes related to this integrator that are beyond the scope of this pull request (the changes will be submitted in a future pull request). These changes include moving from `Grouping`s of field names to organizing them in maps of `EnzoArray`s.

Other potential future work:
- `Refresh::add_all_fields`, but I fixed this bug when I merged the solver-dd branch. I'll check on this and edit the PR if it's presently supported).
- Moving the Riemann solver declarations out of the `_enzo.hpp` header file, which would therefore allow us to avoid recompiling the entire enzo layer every time a Riemann solver is tweaked. It also wouldn't carry any runtime penalty since they are all subclasses of the `EnzoRiemann` class, anyway.

## Edit: To Do List for Future Testing Overhaul PR
I wanted to keep track of things recommended to be done in a future testing-overhaul PR. Once this PR is accepted, I'll make a new issue so that we don't lose these ideas.
- `std::pair`s or a mapping, and then passed directly to the initializer.
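Since the note above is fragmentary, here is a purely hypothetical sketch of the kind of interface it seems to describe, written in Python to mirror the existing test scripts (every name below is invented; the note's mention of `std::pair` suggests the real change may instead land on the C++ side):

```python
# Hypothetical sketch: values are listed once as (name, value) pairs
# (or equivalently as a mapping) and handed straight to an initializer,
# instead of being hard-coded separately in several places.
class InitialValues:
    def __init__(self, values):
        # accepts either an iterable of (name, value) pairs or a mapping
        self.values = dict(values)

    def get(self, name):
        return self.values[name]

pairs = [("density", 1.0), ("pressure", 0.6), ("velocity_x", 0.0)]
from_pairs = InitialValues(pairs)            # built from pairs
from_map = InitialValues(dict(pairs))        # built from a mapping
assert from_pairs.get("pressure") == from_map.get("pressure") == 0.6
```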