This repository has been archived by the owner on Sep 19, 2019. It is now read-only.

fenics.AdaptiveNonlinearVariationalSolver doesn't work in parallel #48

Open
agzimmerman opened this issue Aug 11, 2017 · 11 comments

@agzimmerman
Member

Well this was unexpected!


phaseflow/core.py:226: in run
solve_time_step=solve_time_step)
phaseflow/time.py:30: in adaptive_time_step
converged = solve_time_step(dt=time_step_size.value, w=w, w_n=w_n, bcs=bcs)
phaseflow/solver.py:129: in solve_time_step
converged = solve(problem=problem, M=M)
phaseflow/solver.py:103: in solve
solver.solve(adaptive_space_error_tolerance)


self = <dolfin.fem.adaptivesolving.AdaptiveNonlinearVariationalSolver; proxy of <Swig...hared_ptr< dolfin::AdaptiveNonlinearVariationalSolver > *' at 0x7fc239b70f90> >
tol = 0.0001

def solve(self, tol):
    """
        Solve such that the estimated error in the functional 'goal'
        is less than the given tolerance 'tol'.

        *Arguments*

            tol (float)

                The error tolerance
        """

    # Call cpp.AdaptiveNonlinearVariationlSolver.solve with ec
  cpp.AdaptiveNonlinearVariationalSolver.solve(self, tol)

E RuntimeError:
E
E *** -------------------------------------------------------------------------
E *** DOLFIN encountered an error. If you are not able to resolve this issue
E *** using the information listed below, you can ask for help at
E ***
E *** fenics-support@googlegroups.com
E ***
E *** Remember to include the error message listed below and, if possible,
E *** include a minimal running example to reproduce the error.
E ***
E *** -------------------------------------------------------------------------
E *** Error: Unable to perform operation in parallel.
E *** Reason: Extrapolation of functions is not yet working in parallel.
E *** Consider filing a bug report at https://bitbucket.org/fenics-project/dolfin/issues.
E *** Where: This error was encountered inside log.cpp.
E *** Process: 0
E ***
E *** DOLFIN version: 2017.1.0
E *** Git changeset: 3d1f687ec9ee39afc0fe6e01800431995b42ad04
E *** -------------------------------------------------------------------------

/usr/local/lib/python2.7/dist-packages/dolfin/fem/adaptivesolving.py:124: RuntimeError
entering PDB

/usr/local/lib/python2.7/dist-packages/dolfin/fem/adaptivesolving.py(124)solve()
-> cpp.AdaptiveNonlinearVariationalSolver.solve(self, tol)
(Pdb) /usr/local/lib/python2.7/dist-packages/dolfin/fem/adaptivesolving.py(124)solve()
-> cpp.AdaptiveNonlinearVariationalSolver.solve(self, tol)

@agzimmerman agzimmerman added this to the HPC milestone Aug 11, 2017
@agzimmerman
Member Author

As of now we no longer use the adaptive solvers of fenics, but rather we mark and refine cells in the more conventional manner.
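
For reference, a minimal sketch of that conventional mark-and-refine approach with the dolfin Python API; the marking criterion below is only a placeholder, not Phaseflow's actual error indicator:

from dolfin import UnitSquareMesh, MeshFunction, cells, refine

mesh = UnitSquareMesh(16, 16)

# Mark cells for refinement. The geometric criterion here is a placeholder;
# in practice the marking would come from an error indicator.
cell_markers = MeshFunction("bool", mesh, mesh.topology().dim())
cell_markers.set_all(False)

for cell in cells(mesh):
    if abs(cell.midpoint().x() - 0.5) < 0.1:
        cell_markers[cell] = True

# Locally refine only the marked cells.
refined_mesh = refine(mesh, cell_markers)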

@agzimmerman
Copy link
Member Author

Regarding my comment from August 29, for a while now we have returned to using the built-in fenics.AdaptiveNonlinearVariationalSolver, so this issue is critical to our HPC goals.

@agzimmerman
Member Author

agzimmerman commented Mar 10, 2018

The BitBucket issue status for this error is "won't fix". Per Chris Richardson: "This is known not to work in parallel. Any suggestions for a suitable fix would be welcome."

https://bitbucket.org/fenics-project/dolfin/issues/985/adaptive-solver-demo-auto-adaptive-poisson

@agzimmerman
Member Author

agzimmerman commented Mar 12, 2018

The error message comes from log.cpp; see https://bitbucket.org/fenics-project/dolfin/src/3d1f687ec9ee39afc0fe6e01800431995b42ad04/dolfin/log/log.cpp

void dolfin::not_working_in_parallel(std::string what)
{
  if (MPI::size(MPI_COMM_WORLD) > 1)
  {
    dolfin_error("log.cpp",
                 "perform operation in parallel",
                 "%s is not yet working in parallel.\n"
                 "***          Consider filing a bug report at %s",
                 what.c_str(), "https://bitbucket.org/fenics-project/dolfin/issues");
  }
}
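
The same MPI-size check is available from the Python layer, so a guard like the following sketch could at least fail early (or select a non-adaptive fallback) before the C++ error is raised:

from dolfin import MPI, mpi_comm_world

# Mirror the guard in log.cpp: the adaptive solver's extrapolation step
# raises the RuntimeError above whenever more than one process is used.
if MPI.size(mpi_comm_world()) > 1:
    raise NotImplementedError(
        "fenics.AdaptiveNonlinearVariationalSolver does not work in parallel; "
        "use a fixed mesh or manual refinement instead.")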

@agzimmerman
Member Author

agzimmerman commented Mar 12, 2018

The not_working_in_parallel function is called by the Extrapolation class's extrapolate method; see https://bitbucket.org/fenics-project/dolfin/src/3d1f687ec9ee39afc0fe6e01800431995b42ad04/dolfin/adaptivity/Extrapolation.cpp

Here's an excerpt from the extrapolate method, which calls not_working_in_parallel after including <dolfin/log/log.h>:

void Extrapolation::extrapolate(Function& w, const Function& v)
{
  // Using set_local for simplicity here
  not_working_in_parallel("Extrapolation of functions");

  // Check that the meshes are the same

@agzimmerman
Member Author

It's interesting that the programmer commented "Using set_local for simplicity here". Are they referring to the call to not_working_in_parallel?

@agzimmerman
Member Author

I'm digging through Extrapolation.cpp line by line, and I'll document my questions and answers here.

@agzimmerman
Member Author

agzimmerman commented Mar 12, 2018

Extrapolation::extrapolate calls Mesh::init (e.g. from https://bitbucket.org/fenics-project/dolfin/src/3d1f687ec9ee39afc0fe6e01800431995b42ad04/dolfin/mesh/Mesh.h)

  // Initialize cell-cell connectivity
  const std::size_t D = mesh.topology().dim();
  mesh.init(D, D);

The documentation for Mesh explains what this does:

  /// A _Mesh_ consists of a set of connected and numbered mesh entities.
  ///
  /// Both the representation and the interface are
  /// dimension-independent, but a concrete interface is also provided
  /// for standard named mesh entities:
  ///
  /// | Entity | Dimension | Codimension  |
  /// | ------ | --------- | ------------ |
  /// | Vertex |  0        |              |
  /// | Edge   |  1        |              |
  /// | Face   |  2        |              |
  /// | Facet  |           |      1       |
  /// | Cell   |           |      0       |
  ///
  /// When working with mesh iterators, all entities and connectivity
  /// are precomputed automatically the first time an iterator is
  /// created over any given topological dimension or connectivity.
  ///
  /// Note that for efficiency, only entities of dimension zero
  /// (vertices) and entities of the maximal dimension (cells) exist
  /// when creating a _Mesh_. Other entities must be explicitly created
  /// by calling init(). For example, all edges in a mesh may be
  /// created by a call to mesh.init(1). Similarly, connectivities
  /// such as all edges connected to a given vertex must also be
  /// explicitly created (in this case by a call to mesh.init(0, 1)).

So mesh.init(D, D) computes the cell-to-cell connectivity, which is not otherwise computed when a mesh is instantiated.
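
A small serial sketch of what that call provides, using the dolfin Python API (the mesh and cell index here are just illustrative):

from dolfin import UnitSquareMesh, Cell, cells

mesh = UnitSquareMesh(4, 4)
D = mesh.topology().dim()

# Build the cell-cell connectivity, as Extrapolation::extrapolate does.
mesh.init(D, D)

# With that connectivity available, iterating cells over a single cell
# yields the cells connected to it.
cell = Cell(mesh, 0)
print([neighbor.index() for neighbor in cells(cell)])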

@agzimmerman
Member Author

I noticed that Extrapolation::extrapolate isn't actually documented in the source code.

From some old docs:

class dolfin.cpp.fem.Extrapolation(*args)
Bases: object

This class implements an algorithm for extrapolating a function on a given function space from an approximation of that function on a possibly lower-order function space.

This can be used to obtain a higher-order approximation of a computed dual solution, which is necessary when the computed dual approximation is in the test space of the primal problem, thereby being orthogonal to the residual.

It is assumed that the extrapolation is computed on the same mesh as the original function.

static extrapolate(*args)
Compute extrapolation w from v
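
A minimal serial sketch of exercising this, assuming the class is still exposed as dolfin.cpp.fem.Extrapolation as in those old docs; when run under mpirun with more than one process, this is exactly the call that triggers the RuntimeError above:

from dolfin import (UnitSquareMesh, FunctionSpace, Function, Expression,
                    interpolate)
from dolfin.cpp.fem import Extrapolation

mesh = UnitSquareMesh(8, 8)
V1 = FunctionSpace(mesh, "P", 1)  # lower-order approximation space
V2 = FunctionSpace(mesh, "P", 2)  # higher-order space on the same mesh

v = interpolate(Expression("sin(x[0])*x[1]", degree=2), V1)
w = Function(V2)

# Compute the higher-order extrapolation w from v.
Extrapolation.extrapolate(w, v)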

@agzimmerman
Member Author

Why does the dual-weighted residual method for goal-oriented AMR have to extrapolate? I don't remember this being part of the algorithm.
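
My current understanding, following the standard dual-weighted residual argument (and consistent with the class docs quoted above): the goal-error estimate has roughly the form

M(u) - M(u_h) \approx r(u_h)(z - z_h), \qquad r(u_h)(v_h) = 0 \quad \forall v_h \in V_h,

where z is the dual solution and the second identity is Galerkin orthogonality. If the dual is approximated in the primal test space V_h itself, the estimate collapses to zero, so the computed dual has to be lifted into a richer (higher-order) space first, which is what Extrapolation provides.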

@agzimmerman
Member Author

We can try extending dolfin in the Docker container. http://fenics-containers.readthedocs.io/en/latest/developing.html
