
2023 Frontier Hackathon

The Hackathon is described here.

Timeline

  • Thursday June 22, 11:00 EST: Brief meeting to define expectations, tasks; also to help everyone get started compiling and running, and understand the machine.

Zoom link here.

  • Friday June 23, 13:00 to 14:00 EST: Preparation session by OLCF. Not everybody needs to attend, but those not attending will need to catch up later.

Zoom link here.

  • Wednesday June 28 - Friday June 30, 11:00 EST to 17:00 EST: Workshop.

Zoom link here.

CarpetX Participants

  • Steve Brandt

  • Lorenzo Ennoggi

  • Roland Haas

  • Liwei Ji

  • Jay Kalinani

  • Lucas Timotheo Sanches

  • Erik Schnetter

  • John Holmen: OLCF POC

  • Weiqun Zhang: AMReX support

Communication channel:

Slack Workspace

Please join the Frontier Hackathon - June 2023 Slack workspace using the following link: https://join.slack.com/t/frontier-hack-2023-06/shared_invite/zt-1wwlh59sg-1XgbThZmmkTnxByejw0aoQ

After joining the workspace, please search for and join the #team-asterx channel.

Compiling and running on Crusher/Frontier

Erik's Simfactory settings for Crusher are here. (Erik is busy this week and doesn't have time to clean this up – these files should be moved into a proper repository, possibly Simfactory itself, after some testing by others.)

  • Create ET folder in the home directory:

    cd ~/
    mkdir ET
    cd ET
    
  • Download the code via the following commands:

    curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/master/GetComponents
    chmod a+x GetComponents
    ./GetComponents --root Cactus --parallel --no-shallow https://raw.githubusercontent.com/jaykalinani/AsterX/frontier/Docs/thornlist/asterx_frontier.th
    
    • Note that you need to use the eschnett/crusher branch of the flesh. The thornlist above already takes care of checking it out, but you still need to run git merge master locally so that the flesh works with the current version of CarpetX (see the sketch below).

    • For ExternalLibraries-ADIOS2, we have to use commit c1d6397 for the moment.
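
    A minimal sketch of these two manual steps, assuming GetComponents placed the flesh in ~/ET/Cactus and the ADIOS2 external library under Cactus/repos/ExternalLibraries-ADIOS2 (adjust paths if your checkout differs):

      cd ~/ET/Cactus
      git branch                        # confirm the eschnett/crusher flesh branch is checked out
      git merge master                  # merge current master so the flesh works with current CarpetX
      cd repos/ExternalLibraries-ADIOS2
      git checkout c1d6397              # pin ADIOS2 to the required commit
      cd ../..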

  • Simfactory files for Frontier are available in the folder Cactus/repos/AsterX/Docs/compile-notes/frontier (there is also a PR to Simfactory). Copy them into place as follows; the equivalent shell commands are sketched after this list.

    • Copy frontier.ini to Cactus/simfactory/mdb/machines/.

    • Copy frontier.cfg to Cactus/simfactory/mdb/optionlists/.

    • Copy frontier.run to Cactus/simfactory/mdb/runscripts/.

    • Copy frontier.sub to Cactus/simfactory/mdb/submitscripts/.

    • Copy defs.local.ini to Cactus/simfactory/etc/., and edit user account details, source and base directory paths accordingly.
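
    The copies above as plain shell commands, assuming you run them from the Cactus directory (paths as listed; edit defs.local.ini afterwards):

      cd ~/ET/Cactus
      src=repos/AsterX/Docs/compile-notes/frontier
      cp $src/frontier.ini    simfactory/mdb/machines/
      cp $src/frontier.cfg    simfactory/mdb/optionlists/
      cp $src/frontier.run    simfactory/mdb/runscripts/
      cp $src/frontier.sub    simfactory/mdb/submitscripts/
      cp $src/defs.local.ini  simfactory/etc/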

  • Return to Cactus directory and compile using the following command:

    ./simfactory/bin/sim build -j16 <config_name> --thornlist=./thornlists/asterx_frontier.th --machine=frontier
    
  • Example command to create and submit a job for a shocktube test via Simfactory:

    ./simfactory/bin/sim submit B1 --parfile=./arrangements/AsterX/AsterX/test/Balsara1_shocktube.par --config=<config_name> --machine=frontier --allocation=ast182 --procs=64 --num-threads=1 --ppn-used=8 --queue=batch --walltime 00:01:00
    
  • Example command to submit a magnetized TOV test with spacetime evolution via Simfactory:

    ./simfactory/bin/sim submit magTOV_unigrid --parfile=./arrangements/AsterX/AsterX/par/magTOV_unigrid_frontier.par --config=<config_name> --machine=frontier --allocation=AST182 --procs=64 --num-threads=1 --ppn-used=8 --queue=batch --walltime 00:03:00
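
  • Once submitted, the usual Simfactory commands can be used to keep track of the runs; for example (simulation names as in the commands above):

    ./simfactory/bin/sim list-simulations
    ./simfactory/bin/sim show-output magTOV_unigrid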
    

Pre-workshop meeting notes

Quick links

  • Tasks

  • Profiling

  • Jay

  • Debugging

To-do list

  • Compile newest version of AMReX (23.06)
  • Use -munsafe-fp-atomics
  • Test ROCm 5.2: did not see any performance difference.

HPCToolkit

  • Cheatsheet here
  • Getting started on Frontier here
  • To output measurements, in the runscript, prefix the call to the executable with:
    hpcrun                                      \
    -o hpctoolkit-measurements                  \
    -e CPUTIME                                  \
    -e gpu=amd                                  \
    -t                                          \
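
  • For example, the full line in the runscript might then look like this (the srun invocation and the @...@ Simfactory placeholders are illustrative; keep whatever frontier.run already uses):

    # illustrative only; adapt the launcher line to what the runscript actually contains
    hpcrun                                      \
        -o hpctoolkit-measurements              \
        -e CPUTIME                              \
        -e gpu=amd                              \
        -t                                      \
        srun -n @NUM_PROCS@ @EXECUTABLE@ @PARFILE@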

Enable GPU-aware MPI

Add to optionlist

MPI_LIB_DIRS = /opt/cray/pe/mpich/8.1.23/ofi/crayclang/10.0/lib /opt/cray/xpmem/2.5.2-2.4_3.45__gd0f7936.shasta/lib64 /opt/cray/pe/mpich/8.1.23/gtl/lib
MPI_LIBS = mpi xpmem mpi_gtl_hsa

Path to an AMReX library built with GPU-aware MPI enabled:

/lustre/orion/csc308/world-shared/amrex-rocm5.3-gpumpi
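
To use this prebuilt AMReX, point the optionlist at it. A minimal sketch, assuming the AMReX ExternalLibrary follows the usual <PKG>_DIR optionlist convention (verify the variable name against frontier.cfg):

# assumption: standard <PKG>_DIR variable; check frontier.cfg for the exact name
AMREX_DIR = /lustre/orion/csc308/world-shared/amrex-rocm5.3-gpumpi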