
MFEM Memory Efficiency #3924

Closed
ShaniMW opened this issue Oct 11, 2023 · 2 comments


ShaniMW commented Oct 11, 2023

Hi all,

I am currently utilizing the MFEM ex2p code to solve a linear elasticity problem with a significant number of degrees of freedom (DOFs). I am keen to understand the processes that contribute to MFEM's memory efficiency, especially when dealing with a large number of unknowns.

From my analysis of the default ex2p code, the stiffness matrix of the linear elastic system is stored in a sparse format. The system is then solved with the iterative HYPRE PCG solver, preconditioned with AMG (BoomerAMG), which keeps the iteration count low.

I have a few specific inquiries:

  1. Is the primary source of MFEM's memory efficiency attributed to the storage of the stiffness matrix in sparse format?
  2. I noticed an option to enable a specialized version of AMG tailored for elasticity problems. While I understand this might influence the solution time, does it have any implications for memory consumption?
  3. I'm aware that static condensation can lead to a reduced system size. However, its efficiency seems to be optimized for problems with an order greater than one. Given that I am employing linear HEX elements, would this approach be beneficial?
  4. Additionally, I understand that MPI allows for domain decomposition. How does this feature contribute to MFEM's efficiency in handling large-scale problems?

I am particularly interested in understanding the factors that make MFEM memory efficient when solving large linear systems with around 1.5M DOFs.

Thanks,
Shani

jandrej (Member) commented Oct 11, 2023

Thank you for your interest in MFEM. All of your questions are fairly general and not specific to MFEM, so I would recommend looking into the literature that explains these topics.

Our friends at deal.II offer a comprehensive tutorial that explains the steps toward parallel scalability and memory efficiency.

A very compact overview of these techniques can also be found in Jed Brown's PETSc tutorial slides.

ShaniMW (Author) commented Oct 12, 2023

Thanks, I started with the deal.II tutorial lectures. I must say they are presented with great clarity, making complex concepts easy to grasp.
Thanks for the recommendation
Shani
