
PETSc examples: assemble matrix analytically instead of coloring #807

Merged: jeremylt merged 5 commits into main from jed/petsc-assemble-coo on Feb 1, 2022

Conversation

@jedbrown (Member) commented Sep 9, 2021:

This currently requires development PETSc to correctly handle negative indices through the LocalToGlobalMapping.
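
For context, the two-phase COO assembly being adopted here looks roughly like the sketch below. This is illustrative only, not the PR's code: the helper name is made up, and the exact MatSetPreallocationCOO signature (count type, const-ness) has varied across PETSc releases.

#include <petscmat.h>

/* Sketch: assemble a matrix from precomputed COO (row, col, value) triples.
   The symbolic phase runs once per mesh/order; the numeric phase is repeated
   for every Jacobian evaluation. In sufficiently new PETSc, entries with
   negative indices (e.g. constrained dofs coming out of the
   LocalToGlobalMapping) are ignored rather than rejected. */
static PetscErrorCode AssembleCOO(Mat A, PetscCount ncoo, PetscInt rows[], PetscInt cols[], const PetscScalar vals[])
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatSetPreallocationCOO(A, ncoo, rows, cols); CHKERRQ(ierr); /* symbolic: sparsity pattern only */
  ierr = MatSetValuesCOO(A, vals, ADD_VALUES); CHKERRQ(ierr);        /* numeric: duplicate entries are summed */
  PetscFunctionReturn(0);
}

Because the numeric phase can be repeated cheaply, this replaces the coloring-based finite-difference assembly the examples used before.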

Comment thread on examples/petsc/multigrid.c (outdated)
@jeremylt (Member):

I think my change to the solids example should be ok, but there is something strange in your PETSc branch:

# +[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
# +[0]PETSC ERROR:
# +[0]PETSC ERROR: Invalid row index -6! Must be in [0,24)
# +[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
# +[0]PETSC ERROR: Petsc Development GIT revision: v3.15.4-739-g1701aa0a1f  GIT Date: 2021-09-09 14:14:31 -0600
# +[0]PETSC ERROR: build/solids-elasticity on a arch-linux-c-debug named sinensis by jeremy Tue Sep 14 14:28:25 2021
# +[0]PETSC ERROR: Configure options --download-exodusii --download-hdf5 --download-hypre --download-metis --download-ml --download-netcdf --download-parmetis --download-pnetcdf --download-zlib --download-openmpi --download-fblaslapack
# +[0]PETSC ERROR: #1 MatSetPreallocationCOO() at /home/jeremy/Dev/petsc_dev/src/mat/utils/gcreate.c:562
# +[0]PETSC ERROR: #2 main() at /home/jeremy/Dev/libCEED/examples/solids/elasticity.c:533

I think it should be a quick fix, though.

@jedbrown (Member, author):

Hmm, I fixed your uninitialized variable (pushed) and it's running:

 build/solids-elasticity -E 10 -nu .3 -degree 3 -mesh ~/meshes/holes.msh -problem FSCurrent-NH2 -num_steps 3 -bc_clamp 1 -bc_traction 2 -bc_traction_2 0,1,0 -snes_monitor

-- Elasticity Example - libCEED + PETSc --
  MPI:
    Hostname                           : kichatna
    Total ranks                        : 1
  libCEED:
    libCEED Backend                    : /cpu/self/xsmm/blocked
    libCEED Backend MemType            : host
  PETSc:
    PETSc Vec Type                     : seq
  Problem:
    Problem Name                       : Hyperelasticity finite strain Current configuration Neo-Hookean w/ dXref_dxcurr, tau, constant storage
    Forcing Function                   : None
  Mesh:
    File                               : /home/jed/meshes/holes.msh
    Number of 1D Basis Nodes (p)       : 4
    Number of 1D Quadrature Points (q) : 4
    Global nodes                       : 85530
    Owned nodes                        : 85530
    DoF per node                       : 3
  Multigrid:
    Type                               : P-multigrid, logarithmic coarsening
    Number of Levels                   : 3
    Level 0 (coarse):
      Number of 1D Basis Nodes (p)     : 2
      Global Nodes                     : 3710
      Owned Nodes                      : 3710
    Level 2 (fine):
      Number of 1D Basis Nodes (p)     : 4
      Global Nodes                     : 85530
      Owned Nodes                      : 85530
0 Load Increment
  0 SNES Function norm 2.999573779527e-02
  1 SNES Function norm 3.001944084220e-02
  2 SNES Function norm 1.149946571373e-03
  3 SNES Function norm 4.848972179332e-05
[...]

@jeremylt (Member):

Still not working on my local machine. What do you get with make prove search=sol -j? I don't have the mesh you are using on my current machine.

@jedbrown (Member, author):

$ make prove search=sol
make: 'lib' with optional backends: /cpu/self/memcheck/serial /cpu/self/memcheck/blocked /cpu/self/avx/serial /cpu/self/avx/blocked /cpu/self/xsmm/serial /cpu/self/xsmm/blocked /gpu/cuda/ref /gpu/cuda/shared /gpu/cuda/gen
Testing backends: /cpu/self/ref/serial /cpu/self/ref/blocked /cpu/self/opt/serial /cpu/self/opt/blocked /cpu/self/memcheck/serial /cpu/self/memcheck/blocked /cpu/self/avx/serial /cpu/self/avx/blocked /cpu/self/xsmm/serial /cpu/self/xsmm/blocked /gpu/cuda/ref /gpu/cuda/shared /gpu/cuda/gen
prove -j 16 --exec 'tests/tap.sh' solids-elasticity
solids-elasticity .. ok
All tests successful.
Files=1, Tests=117, 118 wallclock secs ( 0.07 usr  0.00 sys + 106.64 cusr  8.75 csys = 115.46 CPU)
Result: PASS

@jeremylt (Member):

Strange. I'm on jed/plex-mat-coo for PETSc.

@jedbrown (Member, author):

The CPU profile looks very nice now, thanks.

$ mpiexec -n 32 build/solids-elasticity -E 10 -nu .3 -degree 3 -mesh ~/meshes/holes.msh -problem FSCurrent-NH2 -num_steps 3 -bc_clamp 1 -bc_traction 2 -bc_traction_2 0,1,0 -snes_monitor -log_view :petsc.flame:ascii_flamegraph

[flame graph screenshot attached to the comment]
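
For anyone reproducing the picture: -log_view :petsc.flame:ascii_flamegraph writes folded stacks to petsc.flame, which (as far as I know) can be rendered to an SVG with Brendan Gregg's FlameGraph script. These command lines are illustrative, not part of this PR:

git clone https://github.com/brendangregg/FlameGraph
FlameGraph/flamegraph.pl petsc.flame > petsc-flame.svg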

@jeremylt jeremylt added this to the v0.10 milestone Sep 22, 2021
@jedbrown jedbrown force-pushed the jed/petsc-assemble-coo branch from fb87e72 to 28c91eb on November 4, 2021
@jedbrown jedbrown force-pushed the jed/petsc-assemble-coo branch 2 times, most recently from f1a5fd6 to dd898e9 on January 21, 2022
@jeremylt (Member) left a review comment:

Passes locally when rebased with main.

Comment thread on examples/petsc/multigrid.c (outdated)
@jeremylt jeremylt force-pushed the jed/petsc-assemble-coo branch 3 times, most recently from d07a1cf to cffe6a5 on February 1, 2022
@jeremylt jeremylt changed the title from "Draft: PETSc examples: assemble matrix analytically instead of coloring" to "PETSc examples: assemble matrix analytically instead of coloring" on Feb 1, 2022
@jeremylt jeremylt merged commit 36e4def into main Feb 1, 2022
@jeremylt jeremylt deleted the jed/petsc-assemble-coo branch February 1, 2022 16:56