
Welcome to the AMR Working Group

Here are some of the topics that I think we can discuss in this group. Some possible answers have been edited to reflect the latest status.

  • How AMR-ready is PyClaw? I have seen in previous e-mails that there are at least variable names like "number of grids", "levels", etc., which suggests the developers are thinking ahead to AMR.

    • The current PETSc-based parallelization assumes that each processor has one grid, which means that the number of processors has to be compatible with the domain, and that the number of grids may not change dynamically.

    • For dynamic, tree-based AMR that frequently changes the number and partition of grids, we need to allow multiple grids per process (including zero). This affects the PyClaw infrastructure in many places, since the flexibility to handle multiple grids per process has to be added throughout.

  • Computational efficiency vs. ease of development: patch-based approaches vs. an octree approach. Patch-based approaches have the obvious advantage that large uniform Cartesian patches can be handed off to existing single-grid solvers. The disadvantage is that gridding may be less efficient (i.e., too many cells refined) than with an octree approach, which can more easily refine exactly those areas that need refinement.

    • Using a parallel tree structure and interpreting each leaf as a compute grid combines the efficient ijk numerics provided by PyClaw with efficient dynamic partitioning ("regridding") implemented by the tree. In this model the tree determines the number of grids local to each process, their indexing sequence, and the connectivity of grids at their boundaries.
  • To subcycle in time or not to subcycle in time? This is going to be important if large refinement ratios are expected.

    • To implement subcycling, the iteration over grids needs to be grouped by level. This information can be extracted from the grids once and reused as needed. The computational cost per grid is proportional to the number of time steps it takes to reach the final time, and can be used to inform the load-balancing weights (see the sketch after this list). The causality between grids of different levels may force some processes to idle; that effect is much harder to model and quantify, and may be a risk to parallel efficiency.
  • What is hard to get right are the elliptic and parabolic solvers. This may be where we want to spend some time, noting the strengths and weaknesses of the various approaches, especially with regard to getting good parallel performance.

  • How flexible should the solver be? Should a general code handle both cell-centered and node-based schemes?

  • What about mapped grids?

  • What about GPU computing? How much of a game-changer is the advent of GPU arrays going to be for AMR?

    • In the leaf=grid concept the grid operations can benefit from GPU acceleration independent of the AMR organization. Intergrid/boundary projections on the GPU are less obvious to realize but seem realistic.
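
To make the level grouping and the time-step-based load-balancing weights mentioned in the subcycling item above a little more concrete, here is a minimal Python sketch. The Grid class, its level attribute, and the fixed refinement ratio of 2 are assumptions made purely for the example; they are not part of the existing PyClaw or p4est APIs.

```python
from collections import defaultdict

class Grid(object):
    """Illustrative stand-in for a PyClaw grid/patch; `level` is assumed."""
    def __init__(self, level):
        self.level = level   # refinement level of the tree leaf backing this grid

def group_by_level(local_grids):
    """Group the process-local grids (possibly an empty list) by level,
    so that subcycled time stepping can sweep one level at a time."""
    by_level = defaultdict(list)
    for g in local_grids:
        by_level[g.level].append(g)
    return dict(by_level)

def load_balance_weight(grid, refinement_ratio=2):
    """A level-l grid needs refinement_ratio**l substeps to reach the
    coarse-grid time, so weight its cost accordingly."""
    return refinement_ratio ** grid.level

# Example: one coarse grid and two level-1 grids on this process.
local_grids = [Grid(0), Grid(1), Grid(1)]
levels = group_by_level(local_grids)
print(sorted(levels))                                   # [0, 1]
print([load_balance_weight(g) for g in local_grids])    # [1, 2, 2]
```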

If you'd like to see what existing AMR codes look like, click here. See also the download location and paper on p4est.

Advertisement for joining our group: AMR Working Group

Sunday 2/5 Progress

Our working group session on Sunday was very productive. Here is what we discussed and concluded:

  • The hybrid patch/octree approach is very appealing, and is the model we are adopting. The idea is that we use an octree to organize a "master grid", which is used to delineate the leaves of the refinement tree. But instead of each leaf being a single mesh cell, the leaves are grids of a fixed size that can be passed off to a clawpack/pyclaw solver. One key advantage of this approach is that we can benefit from the more structured layout of the patches, and from an ordering scheme that makes it easy to locate patches in the larger domain. This may avoid the need for the irregular data structures required by the original patch scheme.

  • A question that came up: do we need to worry about covered cells? Since we no longer have the irregular grid structure that would result from placing grids arbitrarily in a larger domain, we may have no further need for covered cells. Fine-grid boundary ghost cell data may be interpolated from a combination of valid fine- and coarse-grid data.

  • One concern was raised about always requiring a factor-of-two refinement (a larger grid is refined into a 2x2 arrangement of child grids, each containing, say, a clawpack grid). This means that, effectively, only refinement ratios of two are allowed; larger overall ratios can only be obtained by stacking levels (4 = 2·2, 8 = 2·2·2, and so on).

  • It was decided that subcycling in time is important, and so it will be implemented in the p4est version of PyAMR.

  • Kristof already has a version of Peano working with PyClaw.

  • Carsten will work on getting a single grid working with p4est.

  • Donna will look into the ghost cell issue, and will come up with basic pseudo-code for advancing a single time step on a coarse grid and then recursively updating the finer grids (a first sketch of this is included below).
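
A very rough first sketch of what that pseudo-code could look like, assuming the grids are already grouped by level (as in the earlier sketch). The helper callables fill_ghost_cells, step_patch, and average_down are placeholders for the real operations, not existing PyClaw or p4est functions.

```python
def advance_level(levels, level, t, dt,
                  fill_ghost_cells, step_patch, average_down,
                  refinement_ratio=2):
    """Advance every grid on `level` by dt, then recursively subcycle the
    next finer level with `refinement_ratio` smaller steps."""
    for grid in levels.get(level, []):
        fill_ghost_cells(grid, t)   # ghost data from same-level and coarse neighbors
        step_patch(grid, dt)        # single-grid clawpack/pyclaw update

    if level + 1 in levels:
        dt_fine = dt / refinement_ratio
        for k in range(refinement_ratio):
            advance_level(levels, level + 1, t + k * dt_fine, dt_fine,
                          fill_ghost_cells, step_patch, average_down,
                          refinement_ratio)
        # Project the subcycled fine-grid data back onto the coarse grids.
        average_down(levels, level + 1, level)
```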

Monday 2/6 Progress

The major discussion was about who gets to control the looping over patches. There are two main philosophies.

  • In one approach, the idea is that there should be as little modification to the PyClaw code as possible, and that solver.evolve_in_time() should simply be a front end to C++ code that then handles all the data management and makes callbacks to code that handles the boundary conditions and the updates on patches. This means that the C++ code also handles all the decisions about when to regrid and how to do subcycling in time.

  • In the second approach, PyClaw iterates over patches, with the help of Iterator classes that can loop over patches at a particular level. In this way the time step control remains with PyClaw, and the underlying C++ code only manages the patch framework. In this approach, individual patches can make requests to get their boundary condition information from the underlying data manager.

There is a leaning towards the second approach, because it gives more control to PyClaw in the places where the numerics issues are not always well understood. Users should be able to retain as much control as possible over the looping process, and to intervene when necessary. Boundary condition stencils will depend on the desired accuracy of the underlying solver, as will the updating scheme. A rough sketch of the iterator-driven loop is given below.
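
To make the iterator-driven version concrete, here is a minimal Python sketch. LevelIterator and all of the method names on patch_manager, patch, and solver are hypothetical; they only illustrate where control would reside under the second approach.

```python
class LevelIterator(object):
    """Thin wrapper handing out the patches owned by the C++/p4est layer
    on a single refinement level."""
    def __init__(self, patch_manager, level):
        # Ask the underlying data manager for the process-local patches
        # on the requested level (hypothetical call).
        self._patches = patch_manager.local_patches(level)

    def __iter__(self):
        return iter(self._patches)

def evolve_one_level_step(patch_manager, solver, level, t, dt):
    """Time-step control stays in Python; the manager only serves patches
    and answers boundary-data requests."""
    for patch in LevelIterator(patch_manager, level):
        patch.request_boundary_data(t)   # ghost data from the data manager
        solver.step(patch, dt)           # numerics remain in PyClaw
```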

Tuesday 2/7 Progress


  • Group members now have working copies of p4est, which turned out to be quite easy to install and compile. The only error encountered was with the architecture: the latest Mac OS seems to want to compile 64-bit code, whereas Python is happiest with 32-bit code. Compiling p4est with the -m32 flag seems to have fixed the architecture-related run-time errors with the dynamically loaded library.

  • The essential pieces of p4est are now exposed to PyClaw, so in theory the missing link is now PyClaw development for AMR. In fact, a good part of the day was spent discussing the distinction in PyClaw between "Grids" and "Patches" and the view that PyClaw takes on managing these objects. Because these details are not completely worked out in PyClaw, it was a bit difficult to make progress on the PyClaw side of things. Kyle, however, has taken on the challenge and is working to implement looping over grids in many of the PyClaw solvers.

  • We Skyped with Marsha Berger to answer the boundary condition question that we had earlier (see Sunday). She didn't think there would be any problem with not computing on a covered coarse grid. That means we can use coarse/fine values for stencils when computing the fine-grid ghost cell values (a toy example is sketched below).
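
As a toy illustration of that coarse/fine stencil, here is a 1D sketch of filling the two ghost cells of a fine grid whose left neighbor is a coarse grid. The refinement ratio of 2 and the piecewise-linear reconstruction are assumptions made for the example, not a scheme we have settled on.

```python
import numpy as np

def fill_fine_ghost_from_coarse(q_coarse):
    """Return the two fine ghost values to the left of a fine grid whose
    left neighbor is a coarse grid (refinement ratio 2), using a
    piecewise-linear reconstruction in the last coarse cell."""
    qc0, qc1 = q_coarse[-2], q_coarse[-1]   # last two coarse cell averages
    slope = qc1 - qc0                       # undivided coarse slope
    # The two fine ghost-cell centers sit at -1/4 and +1/4 of a coarse cell
    # width around the last coarse cell center.
    ghost_far = qc1 - 0.25 * slope          # ghost cell farther from the interface
    ghost_near = qc1 + 0.25 * slope         # ghost cell next to the interface
    return np.array([ghost_far, ghost_near])

print(fill_fine_ghost_from_coarse(np.array([1.0, 2.0])))   # [1.75, 2.25]
```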

Wednesday 2/8 Progress

There was a good amount of hacking on Monday night and Tuesday. The p4est interface is on github at carstenburstedde/p4estclaw. In the p4est approach, all numerical information resides in the solver (in this case PyClaw) and is organized following the p4est meta information.

  • Carsten has completed a Python/C interface between PyClaw and p4est which works on all four laptops tested. Kyle and others have worked on extending PyClaw to multiple grids. The intergrid boundaries are now informed by Python-wrapped p4est information on what each grid's neighbor indices are. Todos:

    • Complete PyClaw to handle a uniformly refined tree in serial.
    • Extend PyClaw to handle 2:1 size conditions on neighbor grids.
    • Extend p4est interface to refine/coarsen based on 1 boolean per grid determined by PyClaw.
    • Extend PyClaw to compute per-grid refine/coarsen flags and pass them to p4est (see the sketch after this list).
    • Extend PyClaw to interpolate/project fields between the old and new grids after refine/coarsen.
    • Extend p4est interface to expose communication pattern for off-processor neighbor grids.
    • Extend PyClaw to execute the MPI communication of numerical values.
    • Currently we're handling face neighbors, but not edge/corner neighbors.
    • Make use of existing dimension-independent code to support both 2D/3D.
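
As a concrete but hypothetical illustration of the refine/coarsen items above: the sketch below computes one boolean per local grid from an undivided-difference criterion and hands the list to a p4est wrapper. The criterion, the grid attribute q holding the solution array of shape (num_eqn, mx, my), and the refine_coarsen wrapper call are all assumptions, not part of the actual p4estclaw interface.

```python
import numpy as np

def refine_flag(q, threshold=0.05):
    """Flag a grid for refinement when the largest undivided difference of
    the first solution component exceeds a threshold."""
    field = q[0]                                   # first field, shape (mx, my)
    jump_x = np.abs(np.diff(field, axis=0)).max()  # differences between x-neighbors
    jump_y = np.abs(np.diff(field, axis=1)).max()  # differences between y-neighbors
    return bool(max(jump_x, jump_y) > threshold)

def regrid(local_grids, p4est_interface):
    """Compute one boolean per local grid and hand the list to p4est."""
    flags = [refine_flag(g.q) for g in local_grids]
    p4est_interface.refine_coarsen(flags)          # hypothetical wrapper call
```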