
Problem with VectorTools::interpolate_based_on_material_id #7522

Closed
bangerth opened this issue Dec 11, 2018 · 4 comments · Fixed by #7527
@bangerth (Member)

From the mailing list in a post by Hamed Babaei:

I would like to prescribe the solution in some regions of the domain in which I solve an initial value problem. I therefore chose the VectorTools::interpolate_based_on_material_id function to impose the desired constant value on the solution vector within particular regions, to which I assign a different material_id. However, the problem is that despite the function's documentation saying:

"If a material id of some group of cells is missed in function_map, then dst will not be updated in the respective degrees of freedom of the output vector. For example, if dst was successfully initialized to capture the degrees of freedom of the dof_handler of the problem with all zeros in it, then those zeros which correspond to the missed material ids will still remain in dst even after calling this function."

I receive -nan values for the cells whose material id is not included in function_map when using interpolate_based_on_material_id. However, if I include all material ids in the function_map, it works as expected for the entire domain. This is not what I need, because I would like to keep the solution constant only in a small part of the domain, not in the entire domain.

@bangerth (Member, Author)

And here's the (slightly modified) code that shows the problem:

#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_solver.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <fstream>
#include <iostream>


namespace PhaseField
{
  using namespace dealii;

  typedef TrilinosWrappers::MPI::Vector vectorType;
  typedef TrilinosWrappers::SparseMatrix matrixType;


  ///////////////////////////////////////////////////////////////

  template <int dim>
  class Solid
  {
  public:
    Solid();

    virtual
    ~Solid();

    void
    run();

  private:

    void    make_grid();

    MPI_Comm                         mpi_communicator;
    parallel::distributed::Triangulation<dim> triangulation;
    ConditionalOStream               pcout;
    FE_Q<dim>                        fe;
    DoFHandler<dim>                  dof_handler;
    IndexSet                         locally_owned_dofs;
    IndexSet                         locally_relevant_dofs;
  };


  // constructor
  template <int dim>
  Solid<dim>::Solid()
  :
  mpi_communicator (MPI_COMM_WORLD),
  triangulation (mpi_communicator,
                 typename Triangulation<dim>::MeshSmoothing
                 (Triangulation<dim>::smoothing_on_refinement |
                  Triangulation<dim>::smoothing_on_coarsening)),

  pcout (std::cout,
         (Utilities::MPI::this_mpi_process(mpi_communicator)
          == 0)),
  fe (1),
  dof_handler (triangulation)
  {}

  //destructor
  template <int dim>
  Solid<dim>::~Solid()
  {
    dof_handler.clear();
  }


  ///////////////////////////////////////////////////////
  template <int dim>
  void Solid<dim>::make_grid()
  {

    std::vector<unsigned int> repetitions(dim, 4);
    if (dim == 3)
      {
        repetitions[dim-2] = 4;
        repetitions[dim-3] = 4;
      }

    GridGenerator::subdivided_hyper_rectangle(triangulation,
                                              repetitions,
                                              Point<dim>(0.0, 0.0, 0.0),
                                              Point<dim>(1.0, 1.0, 1.0),
                                              true);


    for (typename Triangulation<dim>::active_cell_iterator
           cell = triangulation.begin_active();
         cell != triangulation.end(); ++cell)
      if (cell->center()[0] < 0.5)
        cell->set_material_id(1);
      else
        cell->set_material_id(0);
  }

  //////////////////////////////////////////////////
  template <int dim>
  void Solid<dim>::run()
  {
    make_grid();
    dof_handler.distribute_dofs (fe);

    locally_owned_dofs = dof_handler.locally_owned_dofs ();
    vectorType  dst(locally_owned_dofs, mpi_communicator);

    pcout << " Initial values: " << std::endl;
    dst.print(std::cout);

    std::map< types::material_id, const Function<dim>* > function_map;

    const Functions::ConstantFunction<dim> constant_function_1(1.0);
    function_map[1] = &constant_function_1;

    VectorTools::interpolate_based_on_material_id(MappingQGeneric<dim>(1),
                                                  dof_handler,
                                                  function_map,
                                                  dst);


    pcout << " Final values: " << std::endl;
    dst.print(std::cout);


  }


}
/////////////////////////////////////////////////////////
int main (int argc, char *argv[])
{
  try
    {
      using namespace dealii;
      using namespace PhaseField;

      //deallog.depth_console(0);

      Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

      Solid<3> solid_3d;
      solid_3d.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl << exc.what()
                << std::endl << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;

      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl << "Aborting!"
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}

I can confirm the problem. The output I get is this:

 Initial values: 
0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 
 Final values: 
1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 -nan -nan -nan -nan -nan -nan -nan -nan 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 -nan -nan -nan -nan 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 -nan -nan -nan -nan 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 -nan -nan -nan -nan 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 -nan -nan -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 -nan -nan -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 1.000e+00 -nan -nan -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan 1.000e+00 1.000e+00 1.000e+00 -nan -nan

@bangerth (Member, Author)

The problem comes from this block at the end of the internal interpolate function:

      for (const auto i : interpolation.locally_owned_elements())
        {
          const auto value =
            ::dealii::internal::ElementAccess<VectorType>::get(interpolation,
                                                               i);
          const auto weight =
            ::dealii::internal::ElementAccess<VectorType>::get(weights, i);
          ::dealii::internal::ElementAccess<VectorType>::set(value / weight,
                                                             i,
                                                             vec);
        }

Since some DoFs are never touched, their accumulated weight remains zero, and we divide by zero.

@bangerth (Member, Author)

I think I have a fix.

@hbabaei commented Dec 12, 2018

Sounds great! You are right; that seems to be the source of the issue that arises for distributed vectors.
So glad that this is going to be resolved.
