PAALS Tutorial 2016

Glen Hansen edited this page May 24, 2018 · 4 revisions

Presentation Slides and Video

ATPESC 2016 Unstructured Mesh Technologies Slides and Video.

Exercise 1 - Partitioning

Partition of a 100,000-element mesh of Greenland into 16 parts using ParMA with Zoltan's interface to ParMETIS (left) and recursive inertial bisection (right). In each image the initial Zoltan partition is shown on the left and the ParMA-improved partition on the right.
  • Demonstrates
  • Source code
    • /projects/FASTMath/ATPESC-2016/installs/pumi/src/test/
  • Executing the example
    • Execution time: ~5 mins
    • Number of cores: 16
    • Number of nodes: 1
    • Setup
    mkdir $HOME/paals
    cd $HOME/paals
    cp -r /projects/FASTMath/ATPESC-2016/examples/paals/ex1 .
    cd ex1
    • Run the partitioning tools and write mesh (*.smb) files.
      • The arguments are model inputMesh outputMesh partitionFactor partitionMethod partitionApproach doLocalPartitioning
    qsub -A ATPESC2016 -q training -O ptnParma-pmetis -n 1 --mode c16 --proccount 16 -t 10 ./ptnParma greenland-4km-tri-bdry.dmg greenland-4km-tri-bdry-2D.smb bz2:16pmetis/ 16 pmetis reptn 0
  • Examining results
    • Compare ptnParma-pmetis.output to out/ptnParma-pmetis.output.
    • Partition quality statistics are printed for the initial partition, the ParMETIS graph partition, and the final ParMA partition, marked 'initial', 'afterSplit', and 'final' respectively. After ParMETIS the vertex imbalance (the first value on the entity imbalance <v e f r> line) is 8% (1.08) and the face imbalance (the second-to-last value on that line) is 4% (1.04). After ParMA improvement the entity imbalances are at most 5% (1.05), and the total number of vertices in the mesh (the first value on the weighted vtx <tot max min avg> line) increases by less than one percent.
    • Bonus question (a cookie will be mailed to your home address if you can answer this... maybe). How can the total number of vertices change after partitioning? Hint: think about the common/shared boundary of the parts.
    • Optional - Run ptnParma with the partitionMethod set to 'rib' and examine the results: qsub -A ATPESC2016 -q training -O ptnParma-rib -n 1 --mode c16 --proccount 16 -t 10 ./ptnParma greenland-4km-tri-bdry.dmg greenland-4km-tri-bdry-2D.smb bz2:16rib/ 16 rib reptn 0. How does the partition quality compare to ParMETIS? How does the total number of vertices compare?
    • Optional - Split the 16 part mesh with ptnParma to 32 parts using different partitionApproach settings and examine the results. The available settings are described here. For example, ask ParMETIS to emphasize cut (communication) reduction via the refine approach: qsub -A ATPESC2016 -q training -O ptnParma-16-32 -n 2 --mode c16 --proccount 32 -t 10 ./ptnParma greenland-4km-tri-bdry.dmg bz2:16pmetis/ bz2:32pmetis-refine/ 2 pmetis refine 0
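The imbalance metric that ptnParma reports is simply the maximum per-part entity count divided by the average per-part count. The function below is an illustrative sketch of that computation (the name `entity_imbalance` is ours, not part of the ParMA API):

```python
def entity_imbalance(counts):
    """Imbalance = (max per-part entity count) / (average per-part count).

    1.0 means a perfectly balanced partition; the 1.08 vertex
    imbalance reported after ParMETIS means the heaviest part
    holds 8% more vertices than the average part.
    """
    avg = sum(counts) / len(counts)
    return max(counts) / avg

# Example: vertex counts on 4 parts of a small mesh;
# the heaviest part carries about 3% more than the average.
ratio = entity_imbalance([100, 104, 98, 102])
```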

Exercise 2 - Optional - Mesh Visualization

  • Download ParaView
    • Transfer the final-pmetis directory to your machine
    • Load the mesh
      • Select the folder icon in the top left then browse to the final-pmetis directory and select the final-pmetis.pvtu file.
      • Click 'Apply' to render.
      • Select 'Surface With Edges' as shown.
        Enabling edge rendering in ParaView.
      • Select 'apf_parts' to color the mesh elements by their part id.
        Enabling part id coloring in ParaView.
      • Click 'Apply' to render.

The above steps can be repeated for the final-rib partition.

Exercise 3 - Adaptive Ice Sheet Flow

Magnitude of ice velocity. Left: 3D rendering of Greenland ice sheet with vertical coordinate scaled by a factor of 70. Right: top surface of Greenland ice sheet showing the mesh.
Left: ice temperature at the top surface. Right: basal friction field at the ice sheet bed.
  • Background: The above figure shows the magnitude of the ice velocity of the Greenland ice sheet. Accuracy in the calculated ice velocity is essential to determine ice sheet dynamics. In particular, we need to accurately compute the velocity in ice streams because they significantly impact how much ice is discharged into the ocean.

  • Question: Is the mesh fine enough to accurately model high velocities? Where do you expect that the mesh should be refined to accurately simulate the velocity?

  • Problem Description

    • Ice sheet geometry
    • Boundary conditions:
      • Stress-free condition at top and lateral surface.
      • At the bed of the ice sheet, the tensorial shear stress equals the basal friction field times the ice velocity.
    • Non-Newtonian rheology
      • Ice viscosity depends non-linearly on the temperature.
      • Ice density: 910 kg/m³
      • Gravity: 9.81 m/s²
      • The temperature field is given. Temperature at the top surface is shown in the figure above. The temperature at the bottom of the ice is approximately 273 K.
    • Solver: Newton's method (NOX) is used to handle the nonlinearities, while the resulting linear systems are solved with GMRES preconditioned by additive Schwarz with an ILUT factorization on each processor.
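The solver described above can be sketched in miniature. The code below only illustrates the outer Newton loop; in the actual run the residual is the discretized Stokes system and the linear step is a GMRES solve (preconditioned with additive Schwarz and ILUT) performed by Trilinos, which a direct division stands in for here:

```python
# Minimal sketch of the outer Newton loop that NOX performs.
# Illustrative only: a scalar residual replaces the discretized
# Stokes equations, and a plain division replaces the
# preconditioned GMRES linear solve used in the real run.

def newton(residual, jacobian, u0, tol=1e-10, max_iters=20):
    """Solve residual(u) = 0 by Newton's method starting from u0."""
    u = u0
    for it in range(max_iters):
        r = residual(u)
        if abs(r) < tol:
            return u, it
        # Newton step: solve J * du = -r, then update u.
        du = -r / jacobian(u)
        u += du
    raise RuntimeError("Newton did not converge")

# Toy nonlinear problem u**3 - u - 2 = 0, root near u = 1.5214
root, iters = newton(lambda u: u**3 - u - 2,
                     lambda u: 3 * u**2 - 1,
                     u0=2.0)
```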
  • Demonstrates

  • Example data files

    • Executable: /projects/FASTMath/ATPESC-2016/installs/albany/bin/AlbanyT
    • Input files: /projects/FASTMath/ATPESC-2016/examples/paals/ex3/mesh.tar
    • Output files: /projects/FASTMath/ATPESC-2016/examples/paals/ex3/out/
  • Executing the example

    • Execution time: less than 5 mins.
    • Wall-clock time: about 128 sec.
    • Number of cores: 32
    • Number of nodes: 2
  • Setup:

mkdir $HOME/paals/ex3
cd $HOME/paals
cp /projects/FASTMath/ATPESC-2016/examples/paals/ex3/mesh.tar ex3
ln -s /projects/FASTMath/ATPESC-2016/installs/albany/bin/AlbanyT ex3
cd ex3
tar xf mesh.tar
  • Run PAALS
qsub -A ATPESC2016 -q training -O albany -n 2 --mode c16 -t 10 ./AlbanyT input_greenland.xml
  • Examining results
    • Note: sample output files created by a test run of Albany are available at /projects/FASTMath/ATPESC-2016/examples/paals/ex3/.
    • The files albany.error and albany.output contain the stderr and stdout screen output of the analysis run. The stdout contains information about the performance of the solver and preconditioner that can be used to fine tune Albany options to improve performance.
    • The file albany.cobaltlog contains the job informational output from the Vesta queueing system.
    • Copy the directories greenland_1, greenland_2, and greenland_3, as well as the file greenland.pvd, to your local machine to visualize the solution results. Then use ParaView to open greenland.pvd, visualize the results, and compare with the images below.
Detail of ice velocity magnitude in the Ryder glacier (North coast). Left: before mesh adaptation. Right: after mesh adaptation. The bottom images show the meshed ice sheet.
Detail of ice velocity magnitude in the Nioghalvfjerdsfjorden ice stream (East coast). Left: before mesh adaptation. Right: after mesh adaptation. The bottom images show the meshed ice sheet.
  • The above shows a comparison of the solution and the mesh before and after adaptation for two ice streams, the Ryder glacier and the Nioghalvfjerdsfjorden stream. Although the differences in the solution are minor for this simple steady problem, the ice streams are better defined on the refined mesh. It is also clear that the mesh is refined near the regions of high ice velocity, which correspond roughly to the regions with the highest velocity gradients.
  • Question: What happens if you try to modify the "Error Bound" value in the "Adaptation" sub-list of the input_greenland.xml file? Please try values between 0.01 and 0.5.
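For reference when trying this, input_greenland.xml is a Trilinos Teuchos ParameterList XML file, so the parameter being varied should sit in a fragment shaped roughly like the sketch below (the surrounding list names and the default value may differ in your copy of the file):

```xml
<!-- Sketch of the relevant fragment of input_greenland.xml;
     exact nesting and default value may differ in your copy. -->
<ParameterList name="Adaptation">
  <!-- Smaller values request a tighter error tolerance and hence
       more refinement; try values between 0.01 and 0.5. -->
  <Parameter name="Error Bound" type="double" value="0.1"/>
</ParameterList>
```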

Additional References

  • SCOREC tools

    • M. Zhou, O. Sahni, T. Xie, M.S. Shephard and K.E. Jansen, Unstructured Mesh Partition Improvement for Implicit Finite Element at Extreme Scale, Journal of Supercomputing, 59(3):1218-1228, 2012. DOI: 10.1007/s11227-010-0521-0
    • M. Zhou, T. Xie, S. Seol, M.S. Shephard, O. Sahni and K.E. Jansen, Tools to Support Mesh Adaptation on Massively Parallel Computers, Engineering with Computers, 28(3):287-301, 2012. DOI: 10.1007/s00366-011-0218-x
    • M. Zhou, O. Sahni, M.S. Shephard, K.D. Devine and K.E. Jansen, Controlling unstructured mesh partitions for massively parallel simulations, SIAM J. Sci. Comp., 32(6):3201-3227, 2010. DOI: 10.1137/090777323
    • M. Zhou, O. Sahni, H.J. Kim, C.A. Figueroa, C.A. Taylor, M.S. Shephard, and K.E. Jansen, Cardiovascular Flow Simulation at Extreme Scale, Computational Mechanics, 46:71-82, 2010. DOI: 10.1007/s00466-009-0450-z
  • Mesh data and geometry interactions

    • D. Ibanez, E. Seol, C. Smith, M.S. Shephard, PUMI: Parallel Unstructured Mesh Infrastructure, ACM Transactions on Mathematical Software, 42(3), Article 17 (May 2016), 28 pages.
    • Seol, E.S. and Shephard, M.S., Efficient distributed mesh data structure for parallel automated adaptive analysis, Engineering with Computers, 22(3-4):197-213, 2006. DOI: 10.1007/s00366-006-0048-4
    • Beall, M.W., Walsh, J. and Shephard, M.S., A comparison of techniques for geometry access related to mesh generation, Engineering with Computers, 20(3):210-221, 2004. DOI: 10.1007/s00366-004-0289-z
    • Beall, M.W. and Shephard, M.S., A general topology-based mesh data structure, Int. J. Numer. Meth. Engng., 40(9):1573-1596, 1997. DOI: 10.1002/(SICI)1097-0207(19970515)40:9<1573::AID-NME128>3.0.CO;2-9
  • Adaptivity

    • Aleksandr Ovcharenko, Parallel Anisotropic Mesh Adaptation with Boundary Layers, Ph.D. Dissertation, RPI, 2012.
    • Q. Lu, M.S. Shephard, S. Tendulkar and M.W. Beall, Parallel Curved Mesh Adaptation for Large Scale High-Order Finite Element Simulations, Proc. 21st International Meshing Roundtable, Springer, NY, pp. 419-436, 2012. DOI: 10.1007/978-3-642-33573-0
    • A. Ovcharenko, K. Chitale, O. Sahni, K.E. Jansen, M.S. Shephard, S. Tendulkar and M.W. Beall, Parallel Adaptive Boundary Layer Meshing for CFD Analysis, Proc. 21st International Meshing Roundtable, Springer, NY, pp. 437-455, 2012. DOI: 10.1007/978-3-642-33573-0
    • X.-J. Luo, M.S. Shephard, L.-Q. Lee and C. Ng, Moving Curved Mesh Adaption for Higher Order Finite Element Simulations, Engineering with Computers, 27(1):41-50, 2011. DOI: 10.1007/s00366-010-0179-5
    • O. Sahni, X.J. Luo, K.E. Jansen, M.S. Shephard, Curved Boundary Layer Meshing for Adaptive Viscous Flow Simulations, Finite Elements in Analysis and Design, 46:132-139, 2010. DOI: 10.1007/s00366-008-0095-0
    • Alauzet, F., Li, X., Seol, E.S. and Shephard, M.S., Parallel Anisotropic 3D Mesh Adaptation by Mesh Modification, Engineering with Computers, 21(3):247-258, 2006. DOI: 10.1007/s00366-005-0009-3
    • Li, X., Shephard, M.S. and Beall, M.W., 3-D Anisotropic Mesh Adaptation by Mesh Modifications, Comp. Meth. Appl. Mech. Engng., 194(48-49):4915-4950, 2005, doi:10.1016/j.cma.2004.11.019
    • Li, X., Shephard, M.S. and Beall, M.W., Accounting for curved domains in mesh adaptation, International Journal for Numerical Methods in Engineering, 58:246-276, 2003, DOI: 10.1002/nme.772
  • Albany

    • A. G. Salinger, R. A. Bartlett, A. M. Bradley, Q. Chen, I. P. Demeshko, X. Gao, G. A. Hansen, A. M. Mota, R. P. Muller, E. Nielson, J. T. Ostien, R. P. Pawlowski, M. Perego, E. T. Phipps, W. Sun, I. K. Tezaur. Albany: Using Agile Components to Develop a Flexible, Generic Multiphysics Analysis Code. Int. J. Multiscale Comput. Engng., 14(4):414-438, 2016.
    • I. K. Tezaur, M. Perego, A. G. Salinger, R. S. Tuminaro and S. Price. Albany/FELIX: A Parallel, Scalable and Robust Finite Element Higher-Order Stokes Ice Sheet Solver Built for Advanced Analysis. Geosci. Model Dev., 8:1-24, 2015.
    • M. Gee, C. Siefert, J. Hu, R. Tuminaro, and M. Sala. ML 5.0 Smoothed Aggregation Users Guide. Technical Report SAND2006-2649, Sandia National Laboratories, 2006.
    • Qiushi Chen, Jakob T. Ostien, Glen Hansen. Development of a Used Fuel Cladding Damage Model Incorporating Circumferential and Radial Hydride Responses. Journal of Nuclear Materials, 447(1-3):292-303, 2014.
    • Michael A. Heroux, Roscoe A. Bartlett, Vicki E. Howle, Robert J. Hoekstra, Jonathan J. Hu, Tamara G. Kolda, Richard B. Lehoucq, Kevin R. Long, Roger P. Pawlowski, Eric T. Phipps, Andrew G. Salinger, Heidi K. Thornquist, Ray S. Tuminaro, James M. Willenbring, Alan Williams, and Kendall S. Stanley. An Overview of the Trilinos Package. ACM Trans. Math. Softw., 31(3):397–423, 2005.
    • Roger P. Pawlowski, Eric T. Phipps, and Andrew G. Salinger. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-based Generic Programming. Scientific Programming, 20(2):197–219, 2012.
    • Roger P. Pawlowski, Eric T. Phipps, Andrew G. Salinger, Steven J. Owen, Christopher M. Siefert, and Matthew L. Staten. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part II: Application to Partial Differential Equations. Scientific Programming, 20(3):327–345, 2012.
    • Eric Phipps. A Path Forward to Embedded Sensitivity Analysis, Uncertainty Quantification and Optimization.
    • Eric Phipps and Roger Pawlowski. Efficient Expression Templates for Operator Overloading-based Automatic Differentiation. Preprint, 2012.
    • Eric Phipps, H. Carter Edwards, Jonathan Hu, and Jakob T. Ostien. Exploring Emerging Manycore Architectures for Uncertainty Quantification through Embedded Stochastic Galerkin Methods. International Journal of Computer Mathematics, 91(4):707-729, 2014.
    • A. Salinger et al. Albany website.