Use a custom mesh partitioner not in libmesh or moose #5543
Comments
Could you elaborate on "facilitate the data transfer"? Do you want to use this partitioning scheme in order to make the transfer easier to code, or do you want it for performance optimization?
I'm interested in the answer to @smharper's questions as well. However, I'll also point out that this actually doesn't have anything to do with "Partitioners". This is a request for a different way to assign communicators inside MultiApp.
We loop through the finer mesh on the multiapp. On each element we assemble the mass matrix and the right-hand side contributed from the solution on the coarser master mesh, and finish the projection by multiplying the right-hand side by the inverse of the mass matrix. We do prolongation similarly on the finer mesh. If the above condition is satisfied, we do not need any communication for the projection and prolongation. Our solution vector can be non-ghosted.
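For concreteness, the element-local projection described above works out to a standard L2 projection; this is just a reference sketch of that math, not code from the actual transfer. Here u_H is the coarse master solution and the phi_i are basis functions on a fine element K:

```latex
% Element mass matrix and right-hand side assembled on a fine element K,
% using fine-mesh basis functions \phi_i and the coarse master solution u_H:
M_{ij} = \int_K \phi_i \, \phi_j \, dx ,
\qquad
b_i = \int_K \phi_i \, u_H \, dx
% Projection of the master solution onto the fine mesh, restricted to K:
\left. u_h \right|_K = \sum_j \left( M^{-1} b \right)_j \phi_j
```

Since each fine element sits inside a single master element, the only off-process data that could be needed for b is that master element's solution; with matching processor IDs even that is local, so no communication (and no ghosting) is required.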
I think we should first try making the transfer with parallel communication, then check its performance empirically. If we can make a transfer work quickly without messing with the partitioning, then the end result will probably be less buggy, easier to maintain, and more flexible. There are a few tricks we can use to minimize the amount of communication, like checking processor bounding boxes and caching a communication map. I can help you implement that stuff if you like. If it's too slow after all of that, then we could return to your idea for partitioning.
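As a rough illustration of the bounding-box trick mentioned above (a minimal sketch with made-up types, not MOOSE or libMesh API): before asking another rank for data at a point, first check whether that rank's local mesh bounding box can even contain it, and only communicate with the ranks that pass the test.

```cpp
#include <array>
#include <vector>

// Hypothetical axis-aligned bounding box; real code would use the per-processor
// bounding boxes that libMesh can compute for a partitioned mesh.
struct BBox
{
  std::array<double, 3> min, max;

  bool contains(const std::array<double, 3> & p, double tol = 1e-10) const
  {
    for (unsigned int d = 0; d < 3; ++d)
      if (p[d] < min[d] - tol || p[d] > max[d] + tol)
        return false;
    return true;
  }
};

// Given one point we need data for and the bounding boxes of every rank's
// local mesh, return only the ranks that could possibly own that point.
// Restricting communication to these candidates is the point of the trick:
// most ranks are skipped without exchanging any messages.
std::vector<unsigned int>
candidateRanks(const std::array<double, 3> & point, const std::vector<BBox> & rank_boxes)
{
  std::vector<unsigned int> candidates;
  for (unsigned int pid = 0; pid < rank_boxes.size(); ++pid)
    if (rank_boxes[pid].contains(point))
      candidates.push_back(pid);
  return candidates;
}
```

Caching the resulting rank-to-rank map on the first transfer is the second half of the idea: later transfers reuse it instead of re-searching.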
Oh, I forgot to mention: our solutions are elemental, both the source and the target solution.
@smharper do you have access to schuseba/rattlesnake? I'd be hesitant to have these files on GitHub.
I do not have access.
Actually, you probably prefer simpler tests, which you can find in yak/tests/transfers/embedded_mesh_transfers. Take a look at volume_rr_prolongation.
Or directly copy the files to Sterling and ask him to not distribute them ;-)
Can we create a partitioner warehouse?
I doubt we'll need one. I don't see why we would need more than one at a time, so a pointer should do.
Then the question is how the partitioner gets created and handed to the mesh. Actually, I take my word back. I think we may just need MooseMesh to provide a function that sets its partitioner to the one passed in. We can move some of the code in MooseMesh into a separate action, which creates the partitioner and sets it on MooseMesh.
Yes, we can pass pointers (or even shared pointers) through input parameters. We do this quite a bit when various objects need pointers to parts of the system. We fill those in in the parser or factory. The other option is that we just set it later and make sure we check that it's valid before use. Dampers have a warehouse because we can theoretically have several of them in a simulation.
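A minimal sketch of the pointer-through-input-parameters pattern described above; the parameter name and the two helper functions are made up for illustration, but MOOSE's InputParameters does support private parameters that hold pointers.

```cpp
#include "InputParameters.h"
#include "MooseError.h"

#include "libmesh/partitioner.h"

// Set by the code that builds the consuming object (an action or the factory).
// "_custom_partitioner" is a made-up private parameter name for this sketch.
void
attachPartitioner(InputParameters & params, libMesh::Partitioner * partitioner)
{
  // Private parameters are set programmatically, never from the input file.
  params.addPrivateParam<libMesh::Partitioner *>("_custom_partitioner", partitioner);
}

// Retrieved later by the consuming object, with a validity check before use
// (the "check that it's valid" option mentioned above).
libMesh::Partitioner *
retrievePartitioner(const InputParameters & params)
{
  auto partitioner = params.getParam<libMesh::Partitioner *>("_custom_partitioner");
  if (!partitioner)
    mooseError("A custom partitioner was expected but never set");
  return partitioner;
}
```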
Update on this: Your volume transfer is very similar to Unless you have other ideas for custom partitioners, I think we should just implement a nested partitioner like this directly in MOOSE. We could add an input parameter like
Again: The issue with nested MultiApp solves is NOT a Partitioner issue. Getting nested MultiApp solves requires work in MultiApp that has nothing to do with Partitioners. Is there another reason why you want to have a Partitioner warehouse?
In this case the mesh is nested, not really the MultiApp. The MultiApp structure is really simple.
Why is it not a
I was under the impression that one of them was only a subset of the domain of the other. And then you used MultiApps to "tile" the subsets inside the master App. Are you saying that you only have 1 master and 1 sub and they both cover the same domain with different meshes?
Yes, correct, though there are even more restrictions: the fine mesh must be nested in the coarse mesh.
Ok - then I was wrong. You do need a custom Partitioner. We don't have a pluggable System for that yet, do we? Probably should make one. It would be pretty similar to
Great, thanks for the input!
@YaqiWang @permcody My plan is to use a Partitioner action to set _custom_partitioner in MooseMesh, which is then used as the mesh's partitioner when the partitioner type is set to custom. The problem is the order in which I need to set things up. To do that I will need to move setting the partitioner in MooseMesh out of the constructor. Does that make sense to do it this way?
Yes, that should work fine. We'll want to create a new task for this.
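A very rough sketch of the kind of action being discussed (the class name, the MooseMesh setter, and buildRequestedPartitioner() are assumptions for illustration, not the final MOOSE API). The important part is ordering: the action has to run after the mesh object is constructed but before the mesh is prepared, so the custom partitioner is in place when partitioning actually happens.

```cpp
#include <memory>

#include "Action.h"
#include "MooseError.h"
#include "MooseMesh.h"

#include "libmesh/partitioner.h"

// Hypothetical action: builds the user's partitioner and hands it to MooseMesh.
class AddCustomPartitionerAction : public Action
{
public:
  AddCustomPartitionerAction(const InputParameters & params) : Action(params) {}

  virtual void act() override
  {
    // Build the partitioner requested in the input file; a real implementation
    // would go through the MOOSE Factory here.
    std::unique_ptr<libMesh::Partitioner> partitioner = buildRequestedPartitioner();
    if (!partitioner)
      mooseError("No custom partitioner was constructed");

    // Hand it to the mesh; the mesh would only apply it when its partitioner
    // type is set to 'custom'. The setter is an assumed interface.
    _mesh->setCustomPartitioner(std::move(partitioner));
  }

private:
  // Placeholder: the real version would ask the Factory to build the
  // user-requested partitioner type from its InputParameters.
  std::unique_ptr<libMesh::Partitioner> buildRequestedPartitioner() { return nullptr; }
};
```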
I would like to create an easy way of adding new partitioners now. For that purpose I plan to create a MooseNativePartitioner class.
Almost... Why not make the user inheritance easier though? Users should only need to inherit from one class, not two. If you design your base class right this should be possible.
I intend MooseNativePartitioner to be this base class, i.e. the (sole) base for all partitioners that are added to the herd (as opposed to being implemented in libMesh).
Well, basically you are designing a system around perceived limitations. It's very easy to make a derived class behave like the base class. Just think about your design and if it requires changes to libMesh, let's discuss it.
Might as well throw a couple of pennies in here... Firstly, this is going to take some time to get right. It might take a couple of iterations too. @snschune was just trying to copy the way we do other pluggable systems. The libMesh interface mostly comes down to a couple of virtual methods. I propose this:
Then we make
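The text of the proposal above is cut off, but based on the surrounding discussion a single-inheritance base class along these lines is presumably what was meant. This is a hedged sketch against the modern libMesh Partitioner interface, not the design that was actually merged; a complete version would also tie into MooseObject and validParams so the Factory can build it from the input file.

```cpp
#include <memory>

#include "libmesh/mesh_base.h"
#include "libmesh/partitioner.h"

// Sketch of a single MOOSE-side base class for native partitioners. Users
// derive from this one class only and implement the two hooks below.
class MooseNativePartitioner : public libMesh::Partitioner
{
public:
  MooseNativePartitioner() = default;

  // libMesh requires partitioners to be clonable.
  virtual std::unique_ptr<libMesh::Partitioner> clone() const override = 0;

protected:
  // The main hook derived classes implement: assign every active element's
  // processor_id() for a partition into n parts.
  virtual void _do_partition(libMesh::MeshBase & mesh, const unsigned int n) override = 0;
};
```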
@snschune You can probably close this issue now?
Yes, confirmed with Sebastian. Close it now. |
We have a MultiApp that uses an embedded mesh of the master app, i.e. every element on this mesh is contained in one single element of the master mesh. To facilitate the data transfer, we'd like to have all elements contained in a master element have the same processor ID.
One way of doing this, which could also be the best way, is to let the MultiApp construct a partitioner with the master mesh and let the MultiApp mesh use it. To do this we need the capability described in the title.
We're fine if you have a better way to achieve our goal without going down this route.
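To make the request concrete, here is a rough sketch (not existing MOOSE or libMesh code) of what such a partitioning rule could look like. It assumes the master mesh is already partitioned and is available on every rank (e.g. replicated), that every sub-app element is nested inside exactly one master element, and it uses the element vertex average as an interior point (recent libMesh API).

```cpp
#include <memory>

#include "libmesh/elem.h"
#include "libmesh/mesh_base.h"
#include "libmesh/point.h"
#include "libmesh/point_locator_base.h"

using namespace libMesh;

// Give every sub-app element the processor id of the master element that
// contains it, so projection/prolongation between the two meshes needs no
// parallel communication.
void
matchMasterPartitioning(const MeshBase & master_mesh, MeshBase & sub_mesh)
{
  // Point locator for finding which master element contains a given point.
  std::unique_ptr<PointLocatorBase> locator = master_mesh.sub_point_locator();

  for (Elem * elem : sub_mesh.active_element_ptr_range())
  {
    // Any interior point of the nested element works; use the vertex average.
    const Point p = elem->vertex_average();

    const Elem * master_elem = (*locator)(p);
    if (!master_elem)
      libmesh_error_msg("Sub-app element is not contained in the master mesh");

    // Copy the owning processor so the transfer stays purely local.
    elem->processor_id(master_elem->processor_id());
  }
}
```

In a real MOOSE integration this logic would presumably live inside a Partitioner subclass's _do_partition() rather than a free function, which is what the discussion above converges on.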