doc/html/src/externalsoftware.html (11 additions, 1 deletion)

@@ -1,7 +1,17 @@
<h1>External Packages</h1>
<code>libMesh</code> interfaces to a number of high-quality software packages to provide certain functionality. This page provides a list of packages and a description of their use in the library.

-<h2>MPI</h2> The <a href="http://www-unix.mcs.anl.gov/mpi">Message Passing Interface</a> is a standard for parallel programming using the message passing model. PETSc requires MPI for its functionality. <code>libMesh</code> makes use of MPI when running in parallel for certain operations, including its <code>ParallelMesh</code> distributed-memory, fully unstructured mesh implementation.
+<h2>MPI</h2> The <a href="http://www-unix.mcs.anl.gov/mpi">Message
+Passing Interface</a> is a standard for parallel programming using
+the message passing model. PETSc and Trilinos require MPI for
+distributed-memory parallel functionality.
+<code>libMesh</code> can make use of MPI for certain operations on
+distributed-memory (as well as shared-memory and hybrid) parallel
+computers, enabling its <code>DistributedMesh</code>
+distributed-memory, fully unstructured mesh implementation.
+<code>libMesh</code> currently only supports MPI implementations
+compatible with the MPI-2 and later MPI standards.
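
To make the message-passing model described above concrete, here is a minimal MPI program, independent of libMesh itself. This is a sketch assuming an MPI-2-compatible implementation; the file name and the `mpicxx`/`mpiexec` wrappers are illustrative and vary by MPI distribution.

```cpp
// hello_mpi.cpp -- compile: mpicxx hello_mpi.cpp -o hello_mpi
//                  run:     mpiexec -n 4 ./hello_mpi
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);                 // start the MPI runtime

  int rank = 0, size = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // id of this process
  MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes

  // Each rank contributes one value; rank 0 receives the sum -- a
  // typical message-passing reduction across distributed memory.
  int local = rank, sum = 0;
  MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

  if (rank == 0)
    std::printf("sum of ranks 0..%d = %d\n", size - 1, sum);

  MPI_Finalize();                         // shut down the MPI runtime
  return 0;
}
```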

<h2>TBB</h2> Since February 2008 <code>libMesh</code> can be configured to use the <a href="http://threadingbuildingblocks.org/">Threading Building Blocks</a> for thread-based parallelism on shared-memory machines. Several key algorithms in the library have been refactored to be multithreaded, and this effort will continue as profiling reveals additional serial bottlenecks. It is envisioned that for certain classes of problems multilevel parallelism (e.g. message passing between nodes and threading within nodes) will prove more scalable than message passing alone, especially with the introduction of commodity multi-core processors. The reality is that for implicit problems this can only be achieved with a parallel linear algebra library that also uses multilevel parallelism.
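
As an illustrative sketch of thread-based parallelism in this style (not one of libMesh's actual refactored algorithms), a TBB `parallel_for` splits a loop's index range across the cores of a shared-memory machine. The example uses current TBB lambda syntax rather than the C++03-era interface available in 2008, and the file name and compile flags are assumptions that vary by platform.

```cpp
// tbb_scale.cpp -- compile: g++ tbb_scale.cpp -ltbb -o tbb_scale
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstdio>

int main()
{
  std::vector<double> x(1000000, 1.0);

  // TBB partitions [0, x.size()) into chunks and hands each chunk
  // to a worker thread; the lambda processes one chunk serially.
  tbb::parallel_for(tbb::blocked_range<std::size_t>(0, x.size()),
                    [&x](const tbb::blocked_range<std::size_t> &r)
                    {
                      for (std::size_t i = r.begin(); i != r.end(); ++i)
                        x[i] *= 2.0;
                    });

  std::printf("x[0] = %g\n", x[0]);   // prints 2
  return 0;
}
```

In a hybrid multilevel setting of the kind envisioned above, a loop like this would run inside each MPI rank, with message passing used only between nodes.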