
ArrayKernel and ArrayBoundaryCondition #6881

Closed
YaqiWang opened this issue May 1, 2016 · 17 comments · Fixed by #13528
Labels: C: Framework, P: normal, T: task

Comments

@YaqiWang
Contributor

YaqiWang commented May 1, 2016

Description of the enhancement or error report

Currently, a Kernel or BoundaryCondition is designed for an individual variable, indicated by its variable parameter. In radiation transport, the extra independent variables of energy and streaming direction can make the number of variables quite large (potentially above 100K), which results in far too many kernels and boundary conditions being added. These kernels and BCs can contribute a huge memory overhead. If we can make a kernel and a BC operate on a vector of variables simultaneously, we can reduce their number substantially.

Rationale for the enhancement or information for reproducing the error

This is needed for simulations with a huge number of variables.

Identified impact

(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
Radiation transport can benefit from this capability. A few questions I have in mind at the moment:

  1. Should this VectorKernel be an independent system or derived from Kernel? Similarly for VectorBoundaryCondition.
  2. What other implications do we need to consider for simulations with a huge number of variables? For instance, should the variable itself be vectorized? Will Jacobian assembly be an issue?
@permcody permcody added the C: Framework, T: task, and P: normal labels May 2, 2016
@lw4992

lw4992 commented May 5, 2016

An Action can provide a solution for this issue, though native MOOSE support would be better. See #3719.

@permcody
Member

permcody commented May 5, 2016

I'm not sure I fully understood what you were after in #3719. You mentioned duplicate code in your original description, but that was never really the issue. It is true that without the Action system you could have a lot of repetition in your input file.

This particular issue is about creating a single MooseObject that works on a whole array of different variables. This situation shows up a lot in neutronics and in chemical reaction networks.

@YaqiWang
Contributor Author

YaqiWang commented May 6, 2016

I am talking here about more than 100k kernels. If your number of kernels is smaller than that, but large enough that your input becomes too tedious and error-prone with MOOSE's simple input syntax, an Action is the way to go.

@friedmud
Contributor

friedmud commented May 7, 2016

I implemented something like this for my recent MOC work. My MOCKernel objects apply to all energy groups simultaneously. I capitalize on the "variable groups" capability in libMesh to only do the global-to-local mapping for the first variable in a variable group... then I index directly into the PETSc vector using that local dof plus a group offset (going straight to the C array from VecGetArray()).

It's all insanely fast... and I would love to get something like it into MOOSE for normal Kernels (and enable things like VectorKernels). Getting the interface correct is the hard part.

This is the perfect thing to work on during the tiger team. I can show you what I've done and we can hammer out an API and get this implemented.
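
As a rough sketch of the direct-indexing idea above (not the actual MOCKernel code; `residual`, `group_offset`, `n_group_vars`, and `contributions` are placeholder names), the core of it with raw PETSc would look something like this:

```cpp
#include <petscvec.h>

// Contribute to every variable in a libMesh variable group by indexing
// directly into the raw PETSc array: the local dof of the first variable in
// the group ("group_offset") plus the variable's position within the group.
void accumulate_group_residual(Vec residual,
                               PetscInt group_offset,
                               PetscInt n_group_vars,
                               const PetscScalar * contributions)
{
  PetscScalar * r;
  VecGetArray(residual, &r); // direct access to the local C array

  for (PetscInt g = 0; g < n_group_vars; ++g)
    r[group_offset + g] += contributions[g];

  VecRestoreArray(residual, &r);
}
```

The point is that only one global-to-local dof lookup is needed per variable group; every other variable in the group is reached by a constant offset.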

@YaqiWang
Contributor Author

YaqiWang commented May 8, 2016

When will our next tiger team be? I can't wait to have this done. Once this is ready, I will need some time to convert my SN kernels and try running problems that would otherwise have millions of kernels.

@friedmud
Contributor

friedmud commented May 9, 2016

@YaqiWang I still don't understand "millions" of Kernels. You should only have like 100 groups times 128 angles times maybe 5 or so... Which is like 60,000.

How many angles / groups are you trying to run?

@YaqiWang
Contributor Author

YaqiWang commented May 9, 2016

It could be 300 groups, each with 300 angles, with 10 kernels on average for each variable, so the total is close to 1M. The numbers of groups and angles here may even be a little conservative.

@YaqiWang
Contributor Author

@friedmud I'd like to work on this because we want to demonstrate the capability of solving problems with a large number of groups. Can you point me to where I should look to get started on this?

@permcody
Member

@jwpeterson is planning on working on this. Let's chat about it tomorrow.

@YaqiWang
Contributor Author

That is fantastic! Feel free to grab me for the chat.

@friedmud
Contributor

I want to table this work until the tiger team. This needs to be
designed... and then redesigned.

I would like to show you guys what I'm currently doing and have some
discussion.

Maybe we need a label for issues we want to work on during the tiger team?

@friedmud friedmud added this to the Tiger Team 2016 milestone Jun 28, 2016
@friedmud
Contributor

Let me note something here: this capability is quite distinct from supporting "vector-valued" finite elements. You would think they're similar... but they're really not. Vector-valued finite elements are quite a lot more complicated.

The capability I'm envisioning here is for applying one object to many variables of the same type at the same time.

@friedmud
Contributor

Here's the way it works in my MOCKernels (more or less, with a bit of paraphrasing and leaving out details that aren't relevant here):

  1. A full residual vector is allocated for each thread.
  2. Before the residual is computed, the "local form" of the PetscVector is cached for each thread.
  3. On each element, Problem::reinit(Elem *) computes the first index of each variable group (I call it an offset).
  4. MOCKernels loop over the number of variables and contribute directly to the residual by doing _residual_cache[group_offset + group_var_num] += stuff, where group_var_num is the position of the variable within the variable group.
  5. After the element loop, each thread's copy of the residual vector is summed into the true residual vector.

I think that with some tweaking this model could work well for VectorKernels as well.
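
To make the per-thread caching and offset-indexing pattern above concrete, here is a minimal sketch of steps 1-5 (all names are placeholders, not MOOSE API):

```cpp
#include <cstddef>
#include <vector>

struct ThreadResidual
{
  std::vector<double> cache; // steps 1-2: a full, thread-local copy of the residual
};

// Step 4: a kernel-like contribution on one element, indexing the cached
// residual directly with that element's group offset.
void contribute_element(ThreadResidual & tr,
                        std::size_t group_offset,                  // step 3: first dof of the variable group on this element
                        const std::vector<double> & per_var_stuff) // one contribution per variable in the group
{
  for (std::size_t group_var_num = 0; group_var_num < per_var_stuff.size(); ++group_var_num)
    tr.cache[group_offset + group_var_num] += per_var_stuff[group_var_num];
}

// Step 5: after the element loop, sum each thread's copy into the true residual.
void sum_threads(std::vector<double> & true_residual,
                 const std::vector<ThreadResidual> & threads)
{
  for (const auto & tr : threads)
    for (std::size_t i = 0; i < true_residual.size(); ++i)
      true_residual[i] += tr.cache[i];
}
```

The real version would, of course, get the group offset from Problem::reinit and the libMesh variable-group machinery rather than taking it as an argument.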

@permcody permcody changed the title VectorKernel and VectorBoundaryCondition ArrayKernel and ArrayBoundaryCondition Aug 1, 2016
friedmud added a commit to friedmud/moose that referenced this issue Aug 7, 2016
friedmud added a commit to friedmud/moose that referenced this issue Aug 16, 2016
friedmud added a commit to friedmud/moose that referenced this issue Aug 18, 2016
friedmud added a commit to friedmud/moose that referenced this issue Aug 18, 2016
YaqiWang added a commit to YaqiWang/moose that referenced this issue May 5, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue May 9, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue May 9, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue May 10, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue May 10, 2019
permcody pushed a commit to permcody/moose that referenced this issue May 21, 2019
permcody pushed a commit to permcody/moose that referenced this issue May 21, 2019
permcody pushed a commit to permcody/moose that referenced this issue May 21, 2019
permcody pushed a commit to permcody/moose that referenced this issue May 21, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue Jun 9, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue Jun 9, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue Jun 19, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue Jun 19, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue Jul 7, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue Jul 7, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue Jul 12, 2019
YaqiWang added a commit to YaqiWang/moose that referenced this issue Jul 12, 2019