Hypre AMS Preconditioner #1643

Closed
florian98765 opened this issue Apr 4, 2020 · 20 comments

@florian98765
Contributor

Dear Firedrake Developers,

I am currently using the Hypre AMS preconditioner within FEniCS. I am wondering what the missing pieces are to make AMS work within Firedrake. As far as I can see, the only missing piece is the build_gradient function, which computes the discrete gradient matrix, right?

Can the FEniCS code be adopted, or are there further complications which I don't see?

best wishes
Florian

@rckirby
Contributor

rckirby commented Apr 5, 2020 via email

@rckirby
Contributor

rckirby commented Apr 5, 2020 via email

@florian98765
Contributor Author

Sounds perfect. I have never used matrix-free "matrices", but I will give it a try.

What about the beta Poisson matrix, which could be passed to hypre?
If I understand it correctly, this should simply be sigma*inner(grad(u), grad(v))*dx, where u and v are nodal trial and test functions and sigma can be zero in some regions.
Right now I do not provide the beta Poisson matrix. The solver converges, but it seems to be less stable than the sigma = 0 case, where I could use setHYPRESetBetaPoissonMatrix(None). Did you ever use this?
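
For reference, this is roughly the assembly I have in mind (only a sketch with placeholder names; mesh and sigma would of course come from the actual problem):

from firedrake import *

mesh = UnitCubeMesh(4, 4, 4)
P1 = FunctionSpace(mesh, "Lagrange", 1)
p = TrialFunction(P1)
q = TestFunction(P1)
sigma = Constant(1.0)  # placeholder; a piecewise conductivity in practice

# nodal "beta" Poisson matrix as a PETSc Mat
A_beta = assemble(sigma*inner(grad(p), grad(q))*dx).M.handle
# which would then be handed to the hypre AMS preconditioner:
# pc.setHYPRESetBetaPoissonMatrix(A_beta)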

One last question: Is it possible to use this code with complex number support?
How can I get this code? Could you try to commit it?

best wishes
Florian

@pefarrell
Contributor

I'd like to comment that firedrake also supports excellent geometric multigrid methods for problems in H(div) and H(curl). Using PCPATCH we can implement the Arnold-Falk-Winther scheme, which works for all H(div)/H(curl)-conforming elements of all degrees (in its vertex-patch version). Here's a demo code that solves the H(div) or H(curl) Riesz map. Try running it with python hdivcurl.py --op div:

from firedrake import *

import argparse
from datetime import datetime

parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("--baseN", type=int, default=5)
parser.add_argument("--nref",  type=int, default=1)
parser.add_argument("--k", type=int, default=1)
parser.add_argument("--op", choices=["div", "curl"], required=True)
parser.add_argument("--alpha", type=float, default=100)
parser.add_argument("--construct-dim", type=int, choices=[0, 1], default=0)
args, _ = parser.parse_known_args()

# Set up some basic parameters
N = args.baseN                # base mesh size N x N
nref = args.nref              # number of refinements to make
if args.op == "div":
    op = div
    family = "RT"
elif args.op == "curl":
    op = curl
    family = "N1curl"
else:
    raise ValueError
degree = args.k            # degree to use
alpha  = args.alpha        # weighting

construct_dim = args.construct_dim
if construct_dim == 0:
    if args.op == "div":
        partition_of_unity = False
        richardson_scaling = 1/3
    elif args.op == "curl":
        partition_of_unity = False
        richardson_scaling = 1/2
elif construct_dim == 1:
    if args.op == "div":
        partition_of_unity = False
        richardson_scaling = 1/4
    elif args.op == "curl":
        raise ValueError("This is just Jacobi, and doesn't work")

print("Using family %s of degree %s" % (family, degree))

# When doing a star iteration in parallel, we need to extend
# the halo by one more vertex than we otherwise would
distribution_parameters = {"partition": True, "overlap_type": (DistributedMeshOverlapType.VERTEX, 1)}

# Create the base mesh
base = BoxMesh(N, N, N, 2, 2, 2, distribution_parameters=distribution_parameters)

# Make the hierarchy of meshes
mh = MeshHierarchy(base, nref, distribution_parameters=distribution_parameters)
mesh = mh[-1]

V = FunctionSpace(mesh, family, degree)
print("V.dim(): %.2e" % V.dim())

u = Function(V)
v = TestFunction(V)

# Set up the PDE
(x, y, z) = SpatialCoordinate(mesh)
f = as_vector([2*y*z*(1-x**2),
              +2*x*z*(1-y**2),
              +2*x*y*(1-x**2)])
alpha = Constant(alpha)
F = inner(u, v)*dx + alpha*inner(op(u), op(v))*dx - inner(f, v)*dx
bcs = None

print("Solving for alpha = %s" % float(alpha))

afw = {
    "mat_type": "matfree",
    "snes_type": "ksponly",
    "ksp_type": "cg",
    "ksp_max_it": 100,
    "ksp_rtol": 1.0e-10,
    "ksp_atol": 0.0,
    "ksp_norm_type": "unpreconditioned",
    "ksp_monitor_true_residual": None,
    "pc_type": "mg",
    "pc_mg_type": "full",
    "mg_levels_ksp_type": "richardson",
    "mg_levels_ksp_norm_type": "unpreconditioned",
    "mg_levels_ksp_monitor_true_residual": None,
    "mg_levels_ksp_richardson_scale": richardson_scaling,
    "mg_levels_ksp_max_it": 1,
    "mg_levels_ksp_convergence_test": "skip",
    "mg_levels_pc_type": "python",
    "mg_levels_pc_python_type": "firedrake.PatchPC",
    "mg_levels_patch_pc_patch_save_operators": True,
    "mg_levels_patch_pc_patch_partition_of_unity": partition_of_unity,
    "mg_levels_patch_pc_patch_construct_type": "star",
    "mg_levels_patch_pc_patch_construct_dim": construct_dim,
    "mg_levels_patch_pc_patch_sub_mat_type": "seqdense",
    "mg_levels_patch_pc_patch_dense_inverse": True,
    "mg_levels_patch_sub_ksp_type": "preonly",
    "mg_levels_patch_sub_pc_type": "lu",
    "mg_coarse_pc_type": "python",
    "mg_coarse_pc_python_type": "firedrake.AssembledPC",
    "mg_coarse_assembled_pc_type": "lu",
    "mg_coarse_assembled_pc_factor_mat_solver_type": "mumps",
}

params = afw

# Create the solver; equivalent to solve(F == 0, u, bcs=bcs, solver_parameters=params)
problem = NonlinearVariationalProblem(F, u, bcs=bcs)
solver = NonlinearVariationalSolver(
    problem, solver_parameters=params, options_prefix="")

start = datetime.now()
solver.solve()
end = datetime.now()

linear_its = solver.snes.getLinearSolveIterations()
nonlinear_its = solver.snes.getIterationNumber()
time = (end-start).total_seconds() / 60
print(GREEN % ("Time taken: %.2f min in %d iterations (%.2f Krylov iters per Newton step)" % (time, linear_its, linear_its/float(nonlinear_its))))

File("output/solution.pvd").write(u)

@florian98765
Contributor Author

Great, I also talked with Matt Knepley about this (https://arxiv.org/pdf/1912.08516.pdf) and I would like to try it for our eddy current application.

I am wondering if the code also works for the singular system without the mass term.
While you solve (u, v) + alpha (curl u, curl v) = (f, v), the eddy current system looks like (curl u, curl v) + j omega sigma (u, v) = (f, v). The conductivity sigma is zero in the non-conducting region, which leads to a singular system matrix. Does the method still work if the mass term is missing?
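
Concretely, the weak form I have in mind looks something like this (only a sketch; the imaginary unit j is dropped in this real-valued illustration, and all names are placeholders):

from firedrake import *
import math

mesh = UnitCubeMesh(8, 8, 8)
Q = FunctionSpace(mesh, "N1curl", 1)
u = TrialFunction(Q)
v = TestFunction(Q)

DG0 = FunctionSpace(mesh, "DG", 0)
sigma = Function(DG0)           # zero in the non-conducting region
omega = Constant(2*math.pi*50)  # arbitrary example frequency
f = Constant((0.0, 0.0, 1.0))

# curl-curl term plus the (j) omega sigma mass term, with j omitted here
a = inner(curl(u), curl(v))*dx + omega*sigma*inner(u, v)*dx
L = inner(f, v)*dx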

best wishes
Florian

@rckirby
Contributor

rckirby commented Apr 6, 2020 via email

@florian98765
Contributor Author

florian98765 commented Apr 9, 2020 via email

@rckirby
Contributor

rckirby commented Apr 9, 2020 via email

@rckirby
Contributor

rckirby commented Apr 9, 2020 via email

@djanekovic
Contributor

Hi @florian98765,

A while ago I implemented a function that builds the discrete gradient matrix; the code is listed below:

from petsc4py import PETSc

def build_gradient(V, P1):
    assert V.mesh() == P1.mesh()

    dm = V.mesh()._plex
    estart, eend = dm.getHeightStratum(1)  # edges
    vstart, vend = dm.getHeightStratum(2)  # vertices

    rset = V.dof_dset
    cset = P1.dof_dset

    nrows = rset.layout_vec.getSizes()
    ncols = cset.layout_vec.getSizes()

    G = PETSc.Mat().create()
    G.setType(PETSc.Mat.Type.AIJ)
    G.setLGMap(rmap=rset.lgmap, cmap=cset.lgmap)
    G.setSizes(size=(nrows, ncols), bsize=1)
    G.setPreallocationNNZ(2)
    G.setUp()

    Vsec = V.dm.getDefaultSection()
    Psec = P1.dm.getDefaultSection()
    for e in range(estart, eend):
        vlist = dm.getCone(e)                    # the two endpoint vertices of edge e
        row = Vsec.getOffset(e)                  # edge dof (row index)
        cols = list(map(Psec.getOffset, vlist))  # vertex dofs (column indices)
        G.setValuesLocal(row, cols, [-1, 1])
    G.assemble()

    return G

This will probably be slow, but it works. To upstream this we should generate a kernel that assembles the matrix.
More importantly, and to answer your question: the complex branch is usable, but hypre does not implement complex support for AMS; see issue hypre-space/hypre#152.

@ScottMacLachlan
Contributor

@djanekovic : If I'm reading this correctly (and my apologies if I'm not), this works just for the lowest-order case. We should be able to do this more generally, since the discrete gradient is defined by interpolations of gradients of the Lagrange basis functions in the Nedelec space. That was difficult to do at some point in the past when I looked at it, but I think we have all the tools in place now to do it with generality. (This is on my personal to-do list, but buried deeply enough that I won't get to it in the next few weeks...)

Also: the ERF preconditioner in [1] works quite well for the complex case, if one happens to have a problem of that form.

[1] https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/103394/geo2015-0013.1.pdf

@wence-
Contributor

wence- commented Sep 3, 2020

Oh yeah, it's trivial:

V = FunctionSpace(mesh, "P", 2)
Q = FunctionSpace(mesh, "N1curl", 2)
G = Interpolator(grad(TestFunction(V)), Q).callable().handle

This gives you a PETSc matrix G that is the discrete gradient, I think (only tested for degree 1).
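
A hedged sketch of how this might be wired into a hypre AMS PC via petsc4py (the PC calls below are my assumption of the intended use; AMS additionally wants vertex coordinates or edge constant vectors, as discussed later in this thread):

from firedrake import *
from firedrake.petsc import PETSc

mesh = UnitCubeMesh(4, 4, 4)
V = FunctionSpace(mesh, "P", 1)
Q = FunctionSpace(mesh, "N1curl", 1)
G = Interpolator(grad(TestFunction(V)), Q).callable().handle

u, v = TrialFunction(Q), TestFunction(Q)
A = assemble(inner(curl(u), curl(v))*dx + inner(u, v)*dx).M.handle

pc = PETSc.PC().create()
pc.setOperators(A, A)
pc.setType('hypre')
pc.setHYPREType('ams')
pc.setHYPREDiscreteGradient(G)
# one way to supply the extra geometric data AMS needs:
pc.setCoordinates(mesh.coordinates.dat.data_ro.copy())
pc.setUp()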

@ScottMacLachlan
Contributor

Thanks, @wence- ! I thought this was going to be easy now...

@wence-
Contributor

wence- commented Sep 3, 2020

And if you want ADS, you do

...
P = FunctionSpace(mesh, "RT", 2)
C = Interpolator(curl(TestFunction(Q)), P).callable().handle

Note that this one doesn't just have +1/-1 entries in the lowest-order case, due to the scaling of the dofs for RT.
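
Presumably (my assumption, untested) the two operators then get attached to a hypre ADS PC along these lines:

pc.setType('hypre')
pc.setHYPREType('ads')
pc.setHYPREDiscreteGradient(G)  # grad: P1 -> N1curl, from the previous comment
pc.setHYPREDiscreteCurl(C)      # curl: N1curl -> RT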

@florian98765
Contributor Author

Sounds great. Thanks for the update.

Regarding the preconditioner: hypre AMS can also be called without the gradient matrix. Instead one can provide either a coordinate array (pc.setCoordinates) or constant edge vectors (pc.setHYPRESetEdgeConstantVectors). As far as I understand, hypre uses this input to build the gradient matrix by itself.

If I remember correctly, the code using pc.setCoordinates was much less stable than the setHYPRESetEdgeConstantVectors version. Another limitation is that it would work only for the order = 1 case. But I am also not sure whether hypre can handle higher-order gradient matrices at all.

One open problem that I am facing is that the method becomes unstable if sigma = 0 only on some parts of the domain. For sigma = 0 everywhere, pc.setHYPRESetBetaPoissonMatrix(None) leads to stable results. When sigma != 0 the beta Poisson matrix cannot be set to zero. If one uses sigma << 1 as regularization, the code also works, but for sigma = 0 the method does not converge any more. It is also possible to manually provide a beta Poisson matrix, but this made things even worse (maybe I was not able to build it correctly).

Setting a small sigma does not hurt, but I am not sure whether this is expected behaviour.
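
For illustration, the regularization I mean looks like this (a sketch reusing the names sigma, omega, u, v from my earlier form; sigma_min = 1e-6 is just a value I picked, not a recommendation):

from firedrake import *

# floor sigma away from zero in the non-conducting region
sigma_min = Constant(1e-6)
sigma_reg = conditional(gt(sigma, 0), sigma, sigma_min)
a = inner(curl(u), curl(v))*dx + omega*sigma_reg*inner(u, v)*dx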

Nevertheless the code seems to work quite well, so I think this thread could be closed.

Thanks for all comments, best wishes
Florian

@xiaodizhang29
Contributor

Hello,
I am going to employ the AMS/ADS preconditioners within FEniCS/Firedrake. Could you tell me how to get the examples?

best wishes
Xiaodi

@florian98765
Contributor Author

Hello, this is what I currently use. It is a slightly adapted version of the code that Rob Kirby provided.
The preconditioning code is included in a separate Python file, MyHypreAMS_preconditioner.py.

import firedrake
from firedrake.petsc import PETSc
import numpy as np
from pyop2.datatypes import ScalarType
import loopy
from pyop2 import op2, petsc_base
from firedrake import FunctionSpace, Function, Constant, project
from firedrake.assemble import allocate_matrix, create_assembly_callable

def kernel_gen(key):
    kernel_dict = {
        'triangle' : loopy.make_function("[] -> {[]}",
                                         """
A[0, 0] = 0.0
A[0, 1] = -1.0
A[0, 2] = 1.0
A[1, 0] = -1.0
A[1, 1] = 0.0
A[1, 2] = 1.0
A[2, 0] = -1.0
A[2, 1] = 1.0
A[2, 2] = 0.0
""",
                                         [loopy.GlobalArg("A", dtype=ScalarType,
                                                          shape=(3, 3))],
                                         name="my_kernel",
                                         seq_dependencies=True,
                                         target=loopy.CTarget()),
        'quadrilateral' : loopy.make_function("[] -> {[]}",
                                         """
A[0, 0] = -1.0
A[0, 1] = 1.0
A[0, 2] = 0.0
A[0, 3] = 0.0
A[1, 0] = 0.0
A[1, 1] = 0.0
A[1, 2] = -1.0
A[1, 3] = 1.0
A[2, 0] = -1.0
A[2, 1] = 0.0
A[2, 2] = 1.0
A[2, 3] = 0.0
A[3, 0] = 0.0
A[3, 1] = -1.0
A[3, 2] = 0.0
A[3, 3] = 1.0

""",
                                         [loopy.GlobalArg("A", dtype=ScalarType,
                                                          shape=(4, 4))],
                                         name="my_kernel",
                                         seq_dependencies=True,
                                         target=loopy.CTarget()),
        'tetrahedron' :
        loopy.make_function("[] -> {[]}",
                            """
A[0, 0] = 0.0
A[0, 1] = 0.0
A[0, 2] = -1.0
A[0, 3] = 1.0

A[1, 0] = 0.0
A[1, 1] = -1.0
A[1, 2] = 0.0
A[1, 3] = 1.0

A[2, 0] = 0.0
A[2, 1] = -1.0
A[2, 2] = 1.0
A[2, 3] = 0.0

A[3, 0] = -1.0
A[3, 1] = 0.0
A[3, 2] = 0.0
A[3, 3] = 1.0

A[4, 0] = -1.0
A[4, 1] = 0.0
A[4, 2] = 1.0
A[4, 3] = 0.0

A[5, 0] = -1.0
A[5, 1] = 1.0
A[5, 2] = 0.0
A[5, 3] = 0.0
""",
                            [loopy.GlobalArg("A", dtype=ScalarType,
                                             shape=(6, 4))],
                            name="my_kernel",
                            seq_dependencies=True,
                            target=loopy.CTarget()),
    }
    return kernel_dict[key]


def build_gradient(V, P1):
    mesh = V.mesh()

    V_map = V.cell_node_map()
    P1_map = P1.cell_node_map()

    edges = op2.Set(mesh.num_edges())
    vertices = op2.Set(mesh.num_vertices())

    dedges = op2.DataSet(edges, dim=1)
    dvertices = op2.DataSet(vertices, dim=1)

    # hack: point the DataSets at the map target sets so the sparsity dimensions line up
    dedges.set = V_map.toset
    dvertices.set = P1_map.toset

    sparsity = op2.Sparsity((dedges, dvertices), [(V_map,P1_map)])
    my_matrix = petsc_base.Mat(sparsity, float)

    mesh_key = str(mesh.ufl_cell())
    my_kernel = op2.Kernel(kernel_gen(mesh_key), "my_kernel")

    op2.par_loop(my_kernel, mesh.cell_set,
                 my_matrix(op2.WRITE, (V.cell_node_map(), P1.cell_node_map())))

    my_matrix.assemble()
    my_matrix = my_matrix.handle
    return my_matrix

class HypreAMS(firedrake.PCSNESBase):
    def initialize(self, obj):
        _, P = obj.getOperators()
        prefix = obj.getOptionsPrefix()
        appctx = self.get_appctx(obj)

        f = appctx['state']
        element = str(f.function_space().ufl_element().family()) #not sure this is right
        mesh = f.function_space().mesh()

        V = FunctionSpace(mesh, element, 1)
        P1 = FunctionSpace(mesh, "Lagrange", 1)
        G = build_gradient(V, P1)

        pc = PETSc.PC().create()
        pc.setOptionsPrefix(prefix + "HypreAMS_")
        pc.setOperators(P, P)

        pc.setType('hypre')
        pc.setHYPREType('ams')
        pc.setHYPREDiscreteGradient(G)
        pc.setHYPRESetBetaPoissonMatrix(None)
#        pc.setCoordinates(np.reshape(
#            mesh._plex.getCoordinates().getArray(),
#            (-1,mesh.geometric_dimension())))

        # Build the three constant vector fields (edge constant vectors) for the Nedelec space
        cvecs = []
        for i in range(3):
            direction = [1.0 if i == j else 0.0 for j in range(3)]
            c = project(Constant(direction), V)
            with c.vector().dat.vec_ro as cvec:
                cvecs.append(cvec)
        pc.setHYPRESetEdgeConstantVectors(cvecs[0], cvecs[1], cvecs[2])
        pc.setUp()

        self.pc = pc

    def apply(self, pc, x, y):
        self.pc.apply(x, y)

    def applyTranspose(self, pc, x, y):
        self.pc.applyTranspose(x, y)

    def view(self, pc, viewer=None):
        super(HypreAMS, self).view(pc, viewer)
        viewer.pushASCIITab()
        if hasattr(self, "pc"):
            viewer.printfASCII("PC to apply inverse\n")
            self.pc.view(viewer)
        viewer.popASCIITab()

    def update(self, pc):
        pass

The preconditioner can then be directly selected by passing the proper solver_parameters, e.g.:

params = {'snes_type': 'ksponly',
          'ksp_type': 'cg',
          'pc_type': 'python',
          'pc_python_type': 'MyHypreAMS_preconditioner.HypreAMS',
         }
solve(a == L, A, bcs, solver_parameters=params)

@wence-
Contributor

wence- commented Sep 22, 2020

@florian98765 since this comes up a lot, would you mind preparing your AMS preconditioner as one we can merge in Firedrake? It's mostly a case of creating a new file firedrake/preconditioners/ams.py which contains the definition of the HypreAMS class.

@florian98765
Contributor Author

florian98765 commented Sep 22, 2020 via email

@xiaodizhang29
Contributor

Thank you for your kindness, I appreciate it.
