
Blocked linear system


Issue #198

First step: move to Thyra objects

A first step towards block linear algebra is to switch to Thyra linear algebra structures, without introducing blocked structures yet. This will change or add some interfaces, mostly in AbstractDiscretization (added) and Application/Workset (changed), plus some internal changes in the evaluators (in theory, only the gather/scatter ones). Delaying the introduction of block structures to a second step will allow a quicker merge into master, so that new features can be developed against the new interfaces.

Here's a tentative todo list for the first step. If you complete an item, mark it as done.

  • (a) Add getters in AbstractDiscretization for Thyra vector spaces. These can simply create the vector space on the fly, wrapping the corresponding Tpetra_Map (see the sketch after this list).
  • (b) Replace Tpetra/Epetra structures in Workset with Thyra ones. As a half-step, we can simply add the Thyra structures alongside the existing ones.
  • (c) Change Application/ModelEvaluatorT, to work directly on Thyra objects, rather than Tpetra/Epetra. In particular, ModelEvaluatorT can simply forward InArgs contents to the Application.
  • (d) Change gather/scatter evaluators, so that they can work with a Workset storing only Thyra objects.
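
For item (a), here's a minimal sketch of what such a getter could look like, assuming Thyra's Tpetra adapters and Albany's usual scalar typedef ST; the method name is illustrative:

```cpp
#include "Thyra_TpetraThyraWrappers.hpp"

// Hypothetical getter in a concrete discretization (method name illustrative);
// ST is Albany's usual scalar typedef (double).
Teuchos::RCP<const Thyra::VectorSpaceBase<ST>>
STKDiscretization::getVectorSpace() const
{
  // Wrap the existing Tpetra map on the fly: no data is copied, the
  // vector space just keeps an RCP to the map.
  return Thyra::createVectorSpace<ST>(this->getMapT());
}
```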

Steps (b)-(d) can be performed incrementally on the objects stored in Workset. For instance, we can start with the residual evaluation only, leaving all other structures as Tpetra/Epetra.

Note: ModelEvaluator (the Epetra version) does not inherit from Thyra's ME, which complicates things. There is a Thyra adapter for the EpetraExt ME, which would allow us to use Thyra for both Epetra and Tpetra builds, but it's not used in Albany. I see two options: (1) make ModelEvaluator inherit from the Thyra adapter to EpetraExt's ME; (2) remove Epetra capabilities altogether. The first option is the most backward compatible, but the fact that Epetra-only builds are already not possible makes me wonder why we shouldn't go for (2)...
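
For reference, a rough sketch of option (1) at the call-site level, using Thyra's adapter for EpetraExt MEs (the wrapInThyra helper and the Stratimikos setup are illustrative assumptions, not existing Albany code):

```cpp
#include "EpetraExt_ModelEvaluator.h"
#include "Thyra_EpetraModelEvaluator.hpp"
#include "Stratimikos_DefaultLinearSolverBuilder.hpp"
#include "Teuchos_ParameterList.hpp"

// Wrap an existing EpetraExt::ModelEvaluator (e.g. Albany's Epetra ME)
// so downstream code only sees Thyra::ModelEvaluator.
Teuchos::RCP<Thyra::ModelEvaluator<double>>
wrapInThyra(const Teuchos::RCP<const EpetraExt::ModelEvaluator>& epetraModel,
            const Teuchos::RCP<Teuchos::ParameterList>& stratimikosParams)
{
  Stratimikos::DefaultLinearSolverBuilder builder;
  builder.setParameterList(stratimikosParams);
  // The W factory tells the adapter how to build Thyra solves
  // out of the Epetra_Operator created by the wrapped ME.
  Teuchos::RCP<Thyra::LinearOpWithSolveFactoryBase<double>> lowsFactory =
      builder.createLinearSolveStrategy("");
  return Thyra::epetraModelEvaluator(epetraModel, lowsFactory);
}
```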

Update 8/6/18

The thyra refactor is ongoing. So far, all inputs in the Workset (x,xdot,xdotdot,Vx,Vxdot,Vxdotdot,Vp) have been replaced with Thyra structures. The refactor for outputs (f,jac,g,gx,gxdot,gxdotdot,gp,fp,...) is ongoing. When this is completed (i.e., all structures in the workset that were Tpetra/Epetra are converted to thyra), there will be a cleaning phase, where all conversions thyra<->*petra are pushed down to the stack, to the very place where the knowledge of the backend linear algebra package is absolutely necessary (read/write/load/save/import/export).
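
To illustrate what "pushed down the stack" could look like: the app passes Thyra vectors around everywhere, and only a low-level routine unwraps them right where the backend operation (here, an import) happens. A sketch, with typedefs mirroring Albany's usual ones:

```cpp
#include "Thyra_TpetraThyraWrappers.hpp"
#include "Tpetra_Vector.hpp"
#include "Tpetra_Import.hpp"

// Assumed typedefs, mirroring Albany's usual ST/LO/GO/node ones.
using ST   = double;
using LO   = int;
using GO   = long long;
using Node = Tpetra::Map<>::node_type;
using TpetraExtract = Thyra::TpetraOperatorVectorExtraction<ST, LO, GO, Node>;

// Hypothetical low-level helper: the only place that knows the backend is Tpetra.
void combineOverlapped(const Teuchos::RCP<const Thyra::VectorBase<ST>>& x,
                       const Teuchos::RCP<Thyra::VectorBase<ST>>& x_overlapped,
                       const Tpetra::Import<LO, GO, Node>& importer)
{
  auto xT  = TpetraExtract::getConstTpetraVector(x);   // unwrap, no copy
  auto xoT = TpetraExtract::getTpetraVector(x_overlapped);
  xoT->doImport(*xT, importer, Tpetra::INSERT);        // backend-specific work
}
```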

At that point, the Thyra refactor will be pretty much over, and we can start thinking about blocks. This will require some serious thinking about how to perform assembly: by block (one sweep per block), by dimension of mesh objects (one sweep per dimension), or a hybrid approach?

Summary of block discretization meeting (8/21/18)

Participants:

  • Luca Bertagna
  • Mauro Perego
  • Irina Tezaur
  • Jerry Watkins
  • Dan Ibanez
  • Eric Cyr

Discussion:

  • DofManager: talk to Roger, and find out if we can make panzer's adapter-stk independent of disc-fe. If yes, aim at pulling DofManager into Albany (and offer to help).
  • assembly: perform assembly by geometric entity. All n-dimensional assembly goes in the same field manager. This means different blocks on the same geometry are gathered, assembled, and scattered together. Also, for a 3d-2d coupling, have one FM for the 3d stuff, and one FM for both the BCs of the 3d problem AND the 'volume' 2d assembly.
  • even when using blocks, support the possibility of assembling monolithic linear algebra structures (see the blocked-operator sketch after this list). Note: it's tempting to say 'if you want monolithic, place all dofs in the same block', but that may not be possible if the dofs have different discretizations.
  • equation sets: can we implement 'block problems'? Perhaps have a 'BlockProblem', or 'AggregateProblem', which delegates the construction of each block's evaluators to its sub-problems...
  • observer: how do we save a block solution? Does each block go to a different discretization, or do the blocks share a mesh? Ideally, support both.
  • evaluators: must support test basis functions different from trial basis functions. Question: should we have a specialized implementation for the test=trial case? It would be faster.
  • A similar problem arises for interpolation evaluators: we could fuse all cell/side evaluators into a single evaluator. However, for HGrad, the cell interpolation does not require any metric (while the side interpolation does), so we could have a special case for HGrad (which is 100% of Albany right now) and make it faster.
  • We should probably make every problem a block problem, possibly with one block. However, we should support an input file that has no block information, for backward compatibility (and for simple problems). If no block info is passed, assume one block and handle it.
  • In the case of a single block, we should allow the user to not have Teko enabled. This helps keep Albany's dependencies under control. As a matter of fact, as of today, Teko has a required dependency on the whole Epetra stack... (see Trilinos#3328)
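
As a reference point for the monolithic-vs-blocked discussion above, here's a sketch of how a 2x2 blocked Jacobian could be assembled at the Thyra level (the buildBlockJacobian helper is hypothetical; the block operators would come from the discretizations):

```cpp
#include "Thyra_DefaultBlockedLinearOp.hpp"

// Hypothetical helper: assemble a 2x2 blocked Jacobian at the Thyra level.
// The block operators A00..A11 would come from the (block) discretization.
Teuchos::RCP<Thyra::LinearOpBase<double>>
buildBlockJacobian(const Teuchos::RCP<Thyra::LinearOpBase<double>>& A00,
                   const Teuchos::RCP<Thyra::LinearOpBase<double>>& A01,
                   const Teuchos::RCP<Thyra::LinearOpBase<double>>& A10,
                   const Teuchos::RCP<Thyra::LinearOpBase<double>>& A11)
{
  auto J = Thyra::defaultBlockedLinearOp<double>();
  J->beginBlockFill(2, 2);
  J->setNonconstBlock(0, 0, A00);
  J->setNonconstBlock(0, 1, A01);  // off-diagonal coupling blocks
  J->setNonconstBlock(1, 0, A10);
  J->setNonconstBlock(1, 1, A11);
  J->endBlockFill();
  // Teko preconditioners can act on the blocks, while a monolithic
  // solver just sees a single LinearOpBase.
  return J;
}
```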

Updates (11/14/18)

Luca and Irina sat down last week to discuss the next steps for this task. Here's a tentative list of steps that need to be taken in the next few weeks:

  • merge master into thyra_refactor_phase_2, test the branch, then merge into master
  • merge the new master into thyra_refactor_dist_param_library, test the branch, then merge to master. This may require some work, since the branch was not finished yet, IIRC. However, this item can be set aside, or carried out in parallel with other items.
  • add missing interfaces for Thyra-version objects in the discretizations (e.g., getMapT <=> getVectorSpace). The Thyra interfaces can simply wrap the Tpetra/Epetra object. NOTE: the discretization has a getJacobianGraph method, which cannot be Thyra-fied (there's no concept of a graph in Thyra, which acts at the vector-space level, not the mat-vec level). We could replace it with getJacobianOp, and return an operator rather than a graph (after all, whoever calls getJacobianGraph is probably using it to create an operator).
  • where possible, remove all occurrences of *petra objects (maps/vectors/matrices) in the app, disc/slvr factories, observers, and other high-level Albany classes. This will allow us to operate at the Thyra level in all main Albany classes.
  • figure out where the block options have to be handled. In particular, figure out where we interface with Teko (probably in the model evaluator, when create_W and its variants are called).
  • modify the gather/scatter evaluators to be 'block-aware'. We're going to start by allowing only a single block, but the input Thyra vectors/operators are now going to be blocked, so the evaluators do need to extract the single block, along the lines of "get the number of blocks; for each block i, get block i and gather/scatter it" (see the sketch after this list). NOTE: the evaluator also needs to know how blocks are divided in the PHX field. E.g., with 2 blocks, for the residual scatter, do we have one PHX field per block, or is the PHX field a vector field, with component 0 corresponding to block 0 and component 1 to block 1? Perhaps we can let the user decide this, and we may start off with a separate class for the block gather/scatter.
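
A rough sketch of the block-extraction loop from the last item, using Thyra's product vector interface (the gatherBlock helper is hypothetical):

```cpp
#include "Thyra_ProductVectorBase.hpp"
#include "Teuchos_RCP.hpp"

// Hypothetical per-block gather; in Albany this would fill the PHX fields.
void gatherBlock(const Teuchos::RCP<const Thyra::VectorBase<double>>& x_block,
                 int blockIdx);

void gatherSolution(const Teuchos::RCP<const Thyra::VectorBase<double>>& x)
{
  // If x is blocked, loop over its blocks; a non-blocked x is the 1-block case.
  auto x_prod =
      Teuchos::rcp_dynamic_cast<const Thyra::ProductVectorBase<double>>(x);
  if (x_prod.is_null()) {
    gatherBlock(x, 0);
    return;
  }
  const int numBlocks = x_prod->productSpace()->numBlocks();
  for (int i = 0; i < numBlocks; ++i) {
    gatherBlock(x_prod->getVectorBlock(i), i);
  }
}
```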

Further updates (11/14/18)

Luca and Irina sat down to discuss strategies for introducing blocks after the 'unpetrify_branch' branch gets merged into master. Here are some points:

  • The concept of a 'BlockDiscretization' seems to be relatively low-hanging fruit. It should inherit from AbstractDiscretization, store N discretizations, and delegate most of the work to them. The vector spaces it stores should be Thyra_ProductVectorSpace objects, where each block is provided by one of the stored discretizations (see the sketch after this list). The only tricky part is the creation of a Jacobian op: diagonal blocks can easily be created by the stored discretizations, but the off-diagonal ones require knowledge of both discretizations, and should be created by the BlockDiscretization structure. A caveat: we should start by requiring that all the blocks are on the same mesh, so that we can figure out which dof of block i is connected to a dof of block j. This can later be extended to 3d-2d coupling, where the 2d mesh is a subset (a side) of the 3d one.
  • The gather and scatter should be done with ad-hoc evaluators, rather than by generalizing the existing ones. This will make the refactoring process easier. Later, if we see that the block versions work well and also handle the 1-block case, we can get rid of the old implementation. But that's not really necessary.
  • We agreed that 'volume' integrals on entities of different dimensions should be in different field managers (FM). This means that a 2d equation is assembled in a different sweep than a 3d one. It seems best (or, at least, easier) to also keep the BCs of a 3d problem separate from the 'volume' assembly of a 2d problem.
  • Question: how do we handle multiple meshes of the same dimension? How do we handle two 3d meshes that have a common interface, or that overlap (e.g., Schwarz)? Should they be in the same FM or in two different ones? I argue it is easier (and probably best) to have two separate FMs. However, this means that a FM needs to access discretization information from different meshes. This matters during gather/scatter only. We could at first (and probably always) restrict our attention to the case of matching meshes.
  • At the problem level, we will restrict our attention to 'coupled problems'. For instance, for a Stokes problem coupled to a thermal problem, we would have a single 'ThermalStokes' problem that creates evaluators for all blocks, rather than a Stokes problem for the Stokes evaluators and a thermal problem for the thermal evaluators. This is not optimal, but we believe it can easily be extended later on. In particular, we can later create a 'CoupledProblem' class, which stores an array of problems. The problems need to have the ability to set the dof names from outside, so that the CoupledProblem class can 'inform', say, the Stokes problem that the field 'Temperature' is not a state to register in the StateManager, and need not be fetched with a LoadStateField evaluator. The 'CoupledProblem' class can then take care of gathering all the solution fields. The sub-problems will still be in charge of creating the scatter evaluators, though, and must not explicitly state the scalar type of fields (at least those that they foresee being computed by another block). This design seems fairly orthogonal to the discretization and evaluators design, and we will try to implement it in a second stage.
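
A minimal sketch of how a BlockDiscretization could build its blocked vector space from the stored sub-discretizations, per the first bullet (class layout and member names are assumptions):

```cpp
#include "Thyra_DefaultProductVectorSpace.hpp"
#include "Teuchos_Array.hpp"

// Hypothetical method on the proposed BlockDiscretization class.
Teuchos::RCP<const Thyra::VectorSpaceBase<double>>
BlockDiscretization::getVectorSpace() const
{
  // One vector space per stored sub-discretization; the product space is
  // the blocked space that the solver and gather/scatter will see.
  Teuchos::Array<Teuchos::RCP<const Thyra::VectorSpaceBase<double>>> spaces;
  for (const auto& disc : m_discs) {  // m_discs: stored sub-discretizations (assumed member)
    spaces.push_back(disc->getVectorSpace());
  }
  return Thyra::productVectorSpace<double>(spaces());
}
```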