
Use a custom mesh partitioner not in libmesh or moose #5543

Closed
YaqiWang opened this issue Aug 12, 2015 · 31 comments
Labels
C: Framework, P: normal, T: task (an enhancement to the software)

Comments

@YaqiWang
Contributor

We have a MultiApp that uses a mesh embedded in the master app's mesh, i.e., every element of this mesh is contained in a single element of the master mesh. To facilitate the data transfer, we'd like all elements contained in a given master element to have the same processor ID.

One way of doing this, which may also be the best way, is to let the MultiApp construct a partitioner from the master mesh and have the MultiApp mesh use it. To do this we need the capability described in the title.

We're fine if you have a better way to achieve our goal without going down this route.

@smharper
Contributor

Could you elaborate on "facilitate the data transfer"? Do you want to use this partitioning scheme in order to make the transfer easier to code, or do you want it for performance optimization?

@friedmud
Contributor

I'm interested in the answer to @smharper's questions as well.

However, I'll also point out that this actually doesn't have anything to do with "Partitioners". This is a request for a different way to assign communicators inside MultiApp. I currently don't have any idea how difficult this would be to achieve...

@friedmud friedmud added the C: Framework, P: normal, and T: task labels Aug 12, 2015
@YaqiWang
Contributor Author

We loop through the finer mesh on the MultiApp. On each element we assemble the mass matrix and the right-hand side contributed by the solution on the coarser master mesh, and finish the projection by multiplying the right-hand side by the inverse of the mass matrix. We do prolongation similarly on the finer mesh. If the above condition is satisfied, we do not need any communication for the projection and prolongation, and our solution vector can be non-ghosted.
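
The element-local solve described here can be sketched as follows. This is a minimal, self-contained 1D illustration with linear shape functions, not MOOSE code; the function name and the hand-rolled 2x2 solve are purely for exposition. The point is that when the fine element is nested inside one coarse element, every ingredient of the local system is already on-processor:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// L2-project a linear coarse-mesh function f(x) = c0 + c1*x onto the two
// linear shape functions of a fine element [a, b]. Because the fine element
// is nested inside a single coarse element, f is smooth on [a, b] and all
// data needed for the local solve is local: no MPI communication.
std::array<double, 2> projectOntoFineElement(double a, double b,
                                             double c0, double c1)
{
  const double h = b - a;
  // Local mass matrix for linear shape functions: M = h/6 * [[2,1],[1,2]].
  const double M00 = 2.0 * h / 6.0, M01 = h / 6.0;
  // Right-hand side r_i = \int_a^b phi_i(x) f(x) dx, via 2-point Gauss
  // quadrature (exact for the quadratic integrand here).
  const double xq[2] = {a + h * 0.5 * (1.0 - 1.0 / std::sqrt(3.0)),
                        a + h * 0.5 * (1.0 + 1.0 / std::sqrt(3.0))};
  double r0 = 0.0, r1 = 0.0;
  for (double x : xq)
  {
    const double t = (x - a) / h; // reference coordinate in [0,1]
    const double w = h / 2.0;     // quadrature weight
    const double f = c0 + c1 * x;
    r0 += w * (1.0 - t) * f;
    r1 += w * t * f;
  }
  // Apply the inverse of the symmetric 2x2 mass matrix by hand.
  const double det = M00 * M00 - M01 * M01;
  return {(M00 * r0 - M01 * r1) / det, (M00 * r1 - M01 * r0) / det};
}
```

Since the L2 projection of a linear function onto a linear basis reproduces it exactly, projecting f(x) = 1 + 2x onto [0, 0.5] returns the nodal values f(0) = 1 and f(0.5) = 2.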

@smharper
Contributor

I think we should first try making the transfer with parallel communication, then check its performance empirically. If we can make the transfer work quickly without messing with the partitioning, the end result will probably be less buggy, easier to maintain, and more flexible.

There are a few tricks we can use to minimize the amount of communication like checking processor bounding boxes and caching a communication map. I can help you implement that stuff if you like. If it's too slow after all of that, then we could return to your idea for partitioning.
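
The bounding-box trick can be sketched roughly like this (illustrative stand-in types, not MOOSE's actual transfer code): before communicating a query point to another rank, test it against that rank's mesh bounding box, inflated by a tolerance, and skip ranks that cannot possibly contain it.

```cpp
#include <cassert>

// Axis-aligned bounding box of the portion of the mesh owned by one rank.
struct BoundingBox
{
  double min[3], max[3];

  // Inflate by tol to guard against round-off at processor boundaries.
  bool contains(const double p[3], double tol = 1e-10) const
  {
    for (int d = 0; d < 3; ++d)
      if (p[d] < min[d] - tol || p[d] > max[d] + tol)
        return false;
    return true;
  }
};
```

Each rank would gather every other rank's box once (e.g. with an allgather), after which a query point is only sent to ranks whose box contains it, typically cutting the communication volume dramatically.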

@YaqiWang
Contributor Author

@snschune can you give @smharper the c5g7 mesh and the underlying regular mesh and let him play with his transfer?

We actually have another reason to use this embedded mesh and the embedded partitioning. We need to project and prolong some quantities on the coarse element faces.

@YaqiWang
Contributor Author

Oh I forgot to mention, our solutions are elemental, both the source and the target solution.

@snschune
Contributor

@smharper do you have access to schuseba/rattlesnake? I'd be hesitant to have these files on github.

@smharper
Contributor

I do not have access

@snschune
Contributor

Actually, you probably prefer simpler tests which you can find in

yak/tests/transfers/embedded_mesh_transfers

Take a look at volume_rr_prolongation.

@YaqiWang
Contributor Author

Or directly copy the files to Sterling and ask him to not distribute them ;-)

@YaqiWang
Contributor Author

Can we create a partitioner warehouse?

@permcody
Member

I doubt we'll need one. I don't see why we would need more than one at a time, so a pointer should do.

@YaqiWang
Contributor Author

Then the question is how MooseMesh accepts a pointer. Through its valid parameters? That looks pretty ugly. Why do dampers have a warehouse?

I take that back. I think we may just need MooseMesh to provide a function that sets its partitioner to the one passed in. We can move some of the code in MooseMesh into a separate action, which creates the partitioner and sets it on the MooseMesh.

@permcody
Member

Yes, we can pass pointers (or even shared pointers) through input parameters. We do this quite a bit when various objects need pointers to parts of the system; we fill those in within the parser or factory. The other option is that we just set it later and make sure we check that it's valid before use.

Dampers have a warehouse because we can theoretically have several of them in a simulation.

@smharper
Contributor

Update on this: Your volume transfer is very similar to MultiAppProjectionTransfer so I've been trying to make timing comparisons between the two to see how expensive MPI communication really is. Unfortunately, I haven't been able to get rigorous results yet because I've discovered an MPI bug. (I also discovered an error with high order shape functions when testing against your transfer, so thanks for putting me on this path.) But at first glance, it looks like your transfer with the nested partitioning scales noticeably better.

Unless you have other ideas for custom partitioners, I think we should just implement a nested partitioner like this directly in MOOSE. We could add an input parameter like use_nested_partitioner = true to the MultiApp object.

@friedmud
Contributor

Again: The issue with nested MultiApp solves is NOT a Partitioner issue. Getting nested MultiApp solves requires work in MultiApp that has nothing to do with Partitioners.

Is there another reason why you want to have a Partitioner warehouse?

@snschune
Contributor

In this case the mesh is nested, not really the MultiApp. The MultiApp structure is really simple:
master: diffusion solve
sub: Sn transport solve
We want to be able to do transfers without explicit communication, so all fine-mesh elements that belong to a coarse-mesh element coarse_elem must have processor_id = coarse_elem->processor_id().
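
That rule can be sketched in a few lines, using stand-in element types rather than libMesh's `Elem`, and assuming each fine element already knows its containing coarse element:

```cpp
#include <cassert>
#include <vector>

// Stand-ins for mesh elements; in libMesh these would be Elem objects.
struct CoarseElem
{
  unsigned int processor_id;
};

struct FineElem
{
  const CoarseElem * parent; // containing coarse element (nested mesh)
  unsigned int processor_id;
};

// Nested partitioning: every fine element inherits the owner of the coarse
// element that contains it, so projection/prolongation needs no MPI.
void nestedPartition(std::vector<FineElem> & fine)
{
  for (FineElem & e : fine)
    e.processor_id = e.parent->processor_id;
}
```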

@smharper
Contributor

Why is it not a Partitioner issue? The whole point of this is so that we can do a transfer without MPI communication. In order for that to happen, the memory for a particular master app element needs to be on the same processor as the memory for the nested sub app elements. I.e., we need nested partitioning, right?

@friedmud
Contributor

I was under the impression that one of them was only a subset of the domain of the other, and that you used MultiApps to "tile" the subsets inside the master app.

Are you saying that you have only 1 master and 1 sub, and they both cover the same domain with different meshes?

@snschune
Contributor

Yes, correct, but there is a further restriction: the fine mesh must be nested in the coarse mesh.

@friedmud
Contributor

Ok - then I was wrong. You do need a custom Partitioner. ;-)

We don't have a pluggable System for that yet, do we? Probably should make one. It would be pretty similar to Markers.

@snschune
Contributor

Great, thanks for the input!

snschune pushed a commit to snschune/moose that referenced this issue Sep 23, 2015
@snschune
Contributor

@YaqiWang @permcody My plan is to use a Partitioner action to set _custom_partitioner in MooseMesh, which is used if the partitioner type is set to custom.

The problem is the order in which I need to set up things:

  1. Construct mesh object
  2. Set _custom_partitioner using PartitionerAction
  3. Set partitioner in MooseMesh.

To do that I will need to move the setting of the partitioner in MooseMesh out of the constructor, then add another SetupMeshIntermediate action that calls this before MooseMesh::init is called.

Does that make sense to do it this way?
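
The ordering constraint amounts to something like the following sketch (illustrative class and method names, not the actual MOOSE signatures): the mesh no longer picks its partitioner at construction, so an action can inject a custom one before init() runs.

```cpp
#include <cassert>
#include <memory>

struct Partitioner
{
  virtual ~Partitioner() = default;
};

// Minimal stand-in for MooseMesh: the partitioner is no longer chosen in
// the constructor, so an action can inject a custom one before init().
class Mesh
{
public:
  // Step 2: called by the partitioner action, after construction (step 1).
  void setCustomPartitioner(std::unique_ptr<Partitioner> p)
  {
    _custom_partitioner = std::move(p);
  }

  // Step 3: finalizes the partitioner choice before the mesh is used.
  void init()
  {
    if (!_custom_partitioner)
      _custom_partitioner = std::make_unique<Partitioner>(); // default
    _initialized = true;
  }

  bool initialized() const { return _initialized; }

private:
  std::unique_ptr<Partitioner> _custom_partitioner;
  bool _initialized = false;
};
```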

@permcody
Member

Yes, that should work fine. We'll want to create a new task, "set_mesh_partitioner". We can either satisfy that task with a new Action or one of the existing ones if that makes sense.


snschune pushed a commit to snschune/moose that referenced this issue Sep 23, 2015
@snschune
Contributor

snschune commented Oct 1, 2015

I would like to create an easy way of adding new partitioners now. For that purpose I plan to create a class MooseNativePartitioner that inherits from libMesh::Partitioner and MoosePartitioner. Then I override the getPartitioner function and return this. If you want to add a partitioner, say in yak, you can now inherit from MooseNativePartitioner. Does that make sense?

@permcody
Member

permcody commented Oct 1, 2015

Almost... why not make the user inheritance easier, though? Users should only need to inherit from one class, not two. If you design your base class right this should be possible.

@snschune
Contributor

snschune commented Oct 1, 2015

I intend MooseNativePartitioner to be this base class, i.e., the (sole) base for all partitioners that are added to the herd (as opposed to being implemented in libMesh).
Another option is for MoosePartitioner itself to inherit from Partitioner, making MooseNativePartitioner redundant, but the problem would be how to make LibmeshPartitioner work. LibmeshPartitioner basically "wraps" partitioners implemented in libMesh that themselves inherit from Partitioner.

@permcody
Member

permcody commented Oct 1, 2015

Well, basically you are designing a system around perceived limitations in the way libMesh expects to use Partitioners, which ends up creating two somewhat related but not quite compatible trees. I'd rather not think about LibmeshPartitioner vs. MoosePartitioner vs. MooseNativePartitioner. That's already very confusing, especially since the libMesh partitioner actually inherits from MOOSE!

It's very easy to make a derived class behave like the base class; that's the normal OO case. So why all the extra classes? Not having looked at it myself, is there something that can be changed/added to libMesh to make this whole inheritance hierarchy less cluttered? Maybe, maybe not. I still don't think we quite succeeded with the ParsedFunction base classes: we have a whole pile of those and they are confusing, but at least new derivations always just inherit from a single class.

Just think about your design, and if it requires changes to libMesh, let's entertain that as a possibility as well.


@friedmud
Contributor

friedmud commented Oct 1, 2015

Might as well throw a couple of pennies in here...

Firstly... this is going to take some time to get right. It might take a couple of iterations too...

@snschune was just trying to copy the way we do Preconditioners here. MoosePreconditioner is just a base class that doesn't inherit from anything. PhysicsBasedPreconditioner inherits from both MoosePreconditioner and the libMesh Preconditioner. That's not terrible... but maybe we can do better for Partitioners.

Unlike Preconditioners (some of which just set up matrices and AREN'T actually "libmesh preconditioners") all Partitioners will need to satisfy the libMesh interface.

The libMesh interface mostly comes down to _do_partition() ( https://github.com/libMesh/libmesh/blob/master/include/partitioning/partitioner.h#L145 ). It's not very Mooseish... but it'll do.

I propose this:

MoosePartitioner inherits from libMesh Partitioner. People making their own partitioners can inherit from MoosePartitioner and override _do_partition() (if you're really opposed to that, we can just forward the call through MoosePartitioner::doPartition() ;-) ).

Then we make LibmeshPartitioner, which inherits from MoosePartitioner (yeah... I know... but I don't see a way around it). It creates and holds a real libMesh Partitioner (like CentroidPartitioner, LinearPartitioner, etc.) and simply forwards _do_partition() through to the Partitioner * it's holding. It presents an enum InputParameter for picking what kind of libMesh Partitioner to use.
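
Sketched with stand-in classes (libMesh's real Partitioner interface has more to it, e.g. cloning and mesh arguments, so treat the names and signatures below as illustrative), the proposal looks roughly like this:

```cpp
#include <cassert>
#include <memory>

// Stand-in for libMesh::Partitioner: the interface boils down to a
// protected virtual _do_partition() invoked from a public entry point.
class Partitioner
{
public:
  virtual ~Partitioner() = default;
  void partition(unsigned int n) { _do_partition(n); }

protected:
  virtual void _do_partition(unsigned int n) = 0;
};

// MoosePartitioner inherits from the libMesh base; user partitioners
// derive from this and override _do_partition().
class MoosePartitioner : public Partitioner
{
};

// Example of a concrete "real libMesh" partitioner to be wrapped.
class LinearPartitioner : public Partitioner
{
public:
  unsigned int calls = 0;

protected:
  void _do_partition(unsigned int) override { ++calls; }
};

// LibmeshPartitioner wraps a concrete libMesh partitioner (Centroid,
// Linear, ...), in MOOSE chosen by an enum parameter, and forwards the call.
class LibmeshPartitioner : public MoosePartitioner
{
public:
  explicit LibmeshPartitioner(std::unique_ptr<Partitioner> wrapped)
    : _wrapped(std::move(wrapped))
  {
  }

protected:
  void _do_partition(unsigned int n) override { _wrapped->partition(n); }

private:
  std::unique_ptr<Partitioner> _wrapped;
};
```

With this shape, user code only ever derives from MoosePartitioner, while the wrapper keeps the stock libMesh partitioners available through the same interface.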

snschune pushed a commit to snschune/moose that referenced this issue Oct 5, 2015
snschune pushed a commit to snschune/moose that referenced this issue Oct 12, 2015
snschune pushed a commit to snschune/moose that referenced this issue Oct 21, 2015
snschune pushed a commit to snschune/moose that referenced this issue Oct 21, 2015
permcody added a commit that referenced this issue Oct 27, 2015
permcody added a commit that referenced this issue Oct 28, 2015
waxmanr pushed a commit to waxmanr/moose that referenced this issue Oct 28, 2015
@YaqiWang
Contributor Author

@snschune You possibly can close this issue now?

@YaqiWang
Contributor Author

Yes, confirmed with Sebastian. Close it now.


5 participants