[Discussion] Activate tracers #622

Closed
gassmoeller opened this issue Sep 28, 2015 · 8 comments

Comments

@gassmoeller
Member

I am opening this issue to provide a discussion page for the process of implementing more active particles in ASPECT. Several people have told me over the last months that they would appreciate a more active particle system, if only to check for differences between the compositional field advection and particle advection mechanisms, or to reproduce benchmarks that rely on particles.

Most of the necessary pieces for this should already be in place with #411, but of course the most important part would be a system to interpolate tracer properties back onto the finite-element mesh. Before @egpuckett or I go ahead with anything (maybe something very simple as a start), I would like to discuss the design of this interpolation. In essence, I would imagine (pseudo) function calls like the following to be available for plugins that would like to use the particle system:

std::vector<double> properties_on_point = particle_postprocessor->get_particle_world().map_properties_onto_point(cell_reference,position)

or

std::vector<std::vector<double> > properties_at_quadrature_points = get_particles->get_particle_world().map_properties_onto_quadrature_points(cell_reference)

But how do we best implement this inside the Particle code? Since there are different preferred implementations of this interpolation, a flexible system of one interface class and several derived classes seems reasonable. But should this be designed along the lines of the usual ASPECT plugin structure, to give people the largest flexibility to add mappings with their own input parameters? Or would it be more reasonable to keep it a bit more hidden? I feel this interpolation is more like the different solver schemes, which also do not offer a plugin architecture. The benefit of a purely internal base_class/derived_class system would be that it would be easier to eventually move the code into deal.II, if we ever decide to go in that direction.
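
To make the question more concrete, here is a minimal, self-contained sketch of what such an internal base_class/derived_class design could look like; all names are hypothetical, and the actual deal.II/ASPECT types are replaced by plain standard-library stand-ins:

#include <cstddef>
#include <vector>

// Stand-in for a particle: a position and the vector of property values it carries.
struct Particle
{
  std::vector<double> position;
  std::vector<double> properties;
};

// Abstract interface: map the properties of the particles in one cell onto a
// single evaluation point inside that cell.
class Interpolator
{
  public:
    virtual ~Interpolator () {}

    virtual std::vector<double>
    properties_at_point (const std::vector<Particle> &particles_in_cell,
                         const std::vector<double>   &point) const = 0;
};

// Simplest possible derived class: the arithmetic average over the cell,
// ignoring the evaluation point entirely.
class CellAverage : public Interpolator
{
  public:
    virtual std::vector<double>
    properties_at_point (const std::vector<Particle> &particles_in_cell,
                         const std::vector<double>   &/*point*/) const
    {
      if (particles_in_cell.empty())
        return std::vector<double>();

      const std::size_t n_properties = particles_in_cell[0].properties.size();
      std::vector<double> average (n_properties, 0.0);

      for (std::size_t p = 0; p < particles_in_cell.size(); ++p)
        for (std::size_t i = 0; i < n_properties; ++i)
          average[i] += particles_in_cell[p].properties[i] / particles_in_cell.size();

      return average;
    }
};

More sophisticated schemes (distance weighting, least-squares fits, etc.) would simply be further derived classes, regardless of whether they are exposed as plugins or kept internal.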

I would appreciate comments and opinions.

@cedrict
Contributor

cedrict commented Oct 22, 2015

Hi Rene,
I am afraid I cannot comment much on the design/architecture of the code. What I thought we might want to have a look at is an excerpt of my ELEFANT paper, submitted some time ago (and still in limbo). It concerns the averaging algorithm. There is also a bit on this topic in the pTatin paper (May et al., CMAME 290, 496-523, 2015 - section 3), which may prove useful in the context of second-order elements.
Cheers,
C.

[Three screenshots: excerpt from the ELEFANT manuscript describing the averaging algorithm]

@bangerth
Contributor

@gassmoeller -- an alternative to your original proposal would be that we add a bunch of additional compositional fields that are not advected, but are computed at the end of every time step from the particles by one of the methods Cedric pointed out. This would then allow us to output them quite easily, and the material model and all other plugins would have immediate access to them.
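
A rough sketch of the update this would amount to, again with hypothetical stand-in types rather than the actual ASPECT data structures: after each time step the extra fields are simply overwritten, cell by cell, from the particles, so they never need to be advected themselves.

#include <cstddef>
#include <vector>

struct Particle { std::vector<double> properties; };
struct Cell     { std::vector<Particle> particles; };

// Fill one (piecewise-constant) value per cell and per particle property;
// 'fields[c][i]' plays the role of the i-th extra compositional field on cell c.
void update_particle_fields (const std::vector<Cell> &cells,
                             const std::size_t        n_properties,
                             std::vector<std::vector<double> > &fields)
{
  fields.assign (cells.size(), std::vector<double> (n_properties, 0.0));

  for (std::size_t c = 0; c < cells.size(); ++c)
    {
      if (cells[c].particles.empty())
        continue;

      for (std::size_t p = 0; p < cells[c].particles.size(); ++p)
        for (std::size_t i = 0; i < n_properties; ++i)
          fields[c][i] += cells[c].particles[p].properties[i] / cells[c].particles.size();
    }
}

The material model and the other plugins would then read these stored field values like any other compositional field, instead of querying the particle world directly.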

@gassmoeller
Member Author

@cedrict : Thanks for the tip. For now I am mostly looking into how best to structure the code, but when we start using it for benchmarks etc. this might be an interesting option to look into.
@bangerth : I like your suggestion. It would reduce the computational load (we never need to map the tracer properties to the elements more than once), but increase the memory footprint (we store the properties twice, once on the tracers and once on the mesh). In particular, though, it makes using the tracer properties much easier. I will look into it when I have some time.

@bangerth
Contributor

I think in practice the memory cost would be less than twice the original since we store the data in a much more efficient format, and because we may (likely will) have fewer DoFs than particles per cell.

We'd have to decide whether we want to project onto a continuous or discontinuous FE space.

@gassmoeller
Member Author

Interesting question. My first impulse would be to say continuous, to stay as similar to the other compositional fields as possible (also for benchmarking purposes). However, the projection might be much faster (but less accurate?) if we only needed to consider the tracers of a particular cell, which discontinuous elements would allow. Maybe @egpuckett or @cedrict have more experience with this? I know that finding the right approach for which tracers to consider in the projection can be a problem both performance- and accuracy-wise.

@egpuckett
Contributor

Hi All,

I think I have a slightly different perspective and background on mapping values at particles to grid points, or FEM quadrature points, although admittedly all of the work I have been involved in and am familiar with is with Finite Difference Methods (FDM). I would like to suggest we take a look at several different lines of research. The first is vortex methods, which is the area in which I wrote my PhD thesis. In this field there is a highly developed method for accurately mapping a property (e.g., vorticity) onto any point x within a certain radius, delta, of x_i, the position of the ith particle. In short, the idea is to think of the particle as a point source of the property; e.g., a vortex is represented as a delta function with a given 'strength' of vorticity, namely the circulation about the vortex. One obtains a localized function (usually the velocity) from this point source by convolving it with a kernel (sort of a localized shape function). There is a very extensive literature on high-order accurate kernels, even eighth order and beyond.
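
In code, the basic construction is just a kernel-weighted sum over the nearby particles. Here is a minimal two-dimensional sketch with an illustrative low-order kernel (the specific high-order kernels from the vortex-method literature would replace it; all names are made up for the example):

#include <cmath>
#include <cstddef>
#include <vector>

struct Particle
{
  double x, y;       // particle position
  double strength;   // the property carried by the particle (e.g., circulation)
};

// A simple, radially symmetric kernel with support radius delta, normalized so
// that it integrates to one over the disk of radius delta.
double kernel (const double r, const double delta)
{
  if (r >= delta)
    return 0.0;

  const double pi = 3.141592653589793;
  const double s  = r / delta;
  return 6.0 / (pi * delta * delta) * (1.0 - s) * (1.0 - s);
}

// Smoothed value of the property at the point (px, py): the sum of the
// kernel-weighted particle strengths, i.e., the point sources convolved
// with the kernel.
double property_at_point (const std::vector<Particle> &particles,
                          const double px, const double py,
                          const double delta)
{
  double value = 0.0;
  for (std::size_t p = 0; p < particles.size(); ++p)
    {
      const double dx = px - particles[p].x;
      const double dy = py - particles[p].y;
      value += particles[p].strength * kernel (std::sqrt (dx * dx + dy * dy), delta);
    }
  return value;
}

Higher-order accuracy then comes from replacing kernel() by one that satisfies additional moment conditions.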

Charlie Peskin's work on modeling the heart is based on this idea, although in his work the particle property is a mechanism for imparting tension between the particles, which are following the heart membrane. (Don't hold me to details here.) My colleague Bob Guy has just written a paper with one of his students in which they develop and use high order kernels of this type. I was impressed by this work and the accuracy of kernels they developed.

Another related line I am familiar with is various methods, such as the Method of Local Corrections and the Fast Multipole Method, which solve n-body problems by passing (interpolating) the potential for, say, the gravitational force from many particles in a cell onto the grid points in a clever manner that turns an O(N^2) algorithm into an O(N log N) or an O(N) algorithm. I'm not suggesting we want to model the long-range interactions that appear in the n-body problem, but we might find the method these mathematicians use to interpolate from the particles to the grid points and back of interest.

This brings up another thought. Are any of the properties we're interested in modeling a force, or another vector field, that arises as the gradient (or the perpendicular gradient, as with a stream function) of a potential? Interpolating potential functions from particles to nodal points has a rich literature.

The third line of research concerns so-called marker-in-cell methods. I am not all that familiar with this field, except perhaps the original MAC paper, and that only because it gave rise to what is now called the MAC stencil. I wonder what the most accurate and up-to-date techniques currently used in marker-in-cell methods for interpolating between particles and grid points are.

Finally, I'm concerned that most of the ideas I've heard so far in this discussion only involve some sort of averaging (no offence intended, Cedric!), which I doubt will be as accurate as the underlying FEM solution of the flow field.

Sorry for the lengthy post. I'll investigate these ideas further and send questions this week to some of the individuals I know who have worked on them.

@bangerth
Contributor

My suggestion wasn't actually an averaging operation, but a projection operation.
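
For concreteness, one common way to write such a projection (not necessarily the exact form we would end up with) is as a least-squares fit of the finite element field to the particle values: find coefficients $c_j$ minimizing

\[
  \sum_p \Big( \sum_j c_j \varphi_j(x_p) - w_p \Big)^2 ,
\]

which leads to the normal equations

\[
  \sum_j \Big( \sum_p \varphi_i(x_p)\,\varphi_j(x_p) \Big)\, c_j
  \;=\; \sum_p \varphi_i(x_p)\, w_p
  \qquad \text{for all } i,
\]

where the $\varphi_j$ are the shape functions of the chosen (continuous or discontinuous) field, the $x_p$ are the particle positions, and the $w_p$ are the particle property values. Unlike a plain average, this reproduces, for example, linear variations of the property within the support of the shape functions.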

There are of course other ways to get data from particles to quadrature points. Radial Basis Function (RBF) methods also come to mind.
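
For reference, an RBF interpolant built directly from the particle data would take the form

\[
  s(x) = \sum_p c_p\, \phi\!\left( \lVert x - x_p \rVert \right),
\]

with a chosen radial function $\phi$ (Gaussian, thin-plate spline, ...) and coefficients $c_p$ determined by the interpolation conditions $s(x_q) = w_q$ at the particle positions, i.e., by solving a linear system with matrix entries $\phi(\lVert x_q - x_p \rVert)$. Evaluation at the quadrature points is then just a sum over the (nearby) particles.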

@gassmoeller
Member Author

I will close this issue for now, because we have more or less settled on an approach for the code structure, and this issue seems too generic to provide further improvements. Thanks for all the input! Specific improvements to the projection/averaging operations are of course always welcome.
