
WeeklyTelcon_20200331


Open MPI Weekly Telecon

  • Dialup Info: (Do not post to public mailing list or public wiki)

Attendees (on Web-ex)

  • Geoffrey Paulsen (IBM)
  • Jeff Squyres (Cisco)
  • Austen Lauria (IBM)
  • Akshay Venkatesh (NVIDIA)
  • Brian Barrett (AWS)
  • Brendan Cunningham (Intel)
  • Edgar Gabriel (UH)
  • Erik Zeiske
  • George Bosilca (UTK)
  • Howard Pritchard (LANL)
  • Joseph Schuchart
  • Josh Hursey (IBM)
  • Joshua Ladd (Mellanox)
  • Matthew Dosanjh (Sandia)
  • Thomas Naughton (ORNL)
  • Noah Evans (Sandia)
  • Ralph Castain (Intel)
  • Scott Breyer (Sandia?)
  • William Zhang (AWS)

Not there today (kept here for easy cut-n-paste for future notes)

  • Geoffroy Vallee (ARM)
  • Harumi Kuno (HPE)
  • Michael Heinz (Intel)
  • Shintaro Iwasaki
  • Todd Kordenbrock (Sandia)
  • David Bernhold (ORNL)
  • Artem Polyakov (Mellanox)
  • Nathan Hjelm (Google)
  • Charles Shereda (LLNL)
  • Brandon Yates (Intel)
  • Mark Allen (IBM)
  • Matias Cabral (Intel)
  • Xin Zhao (Mellanox)
  • mohan (AWS)

Old Business

  • MTT -

    • If you change your MTT setup to start PRRTE at the beginning of the session and just use prun, run times can be cut in half or more (see the sketch after this list).
    • This is good, but we also need to test the mpirun wrapper.
    • Cisco is converting half of its MPI installs to use prrte/prun.
  • OMPI master submodule pointers are set up to track PMIx and PRRTE master.

    • Jeff discussed an integration idea with PRRTE: putting a string in a PRRTE PR would automatically open an Open MPI PR to update the PRRTE submodule after that PRRTE PR is merged to PRRTE master.
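
A minimal sketch of the DVM-style workflow behind the MTT speedup noted above: start a persistent PRRTE DVM once per test session, submit each test with prun, and only tear the DVM down at the end, instead of paying a full mpirun startup/teardown per test. This is illustration only, not the actual MTT plugin code; the command names and flags shown (prte --daemonize, prun -n, pterm) are assumptions about a typical PRRTE install, so check prte --help on your build.

```python
#!/usr/bin/env python3
"""Hypothetical per-session test driver: one PRRTE DVM, many prun jobs."""
import subprocess
import time


def run_session(tests, np=4):
    # Start the DVM once for the whole session (assumed flag: --daemonize
    # backgrounds prte and leaves the daemons running).
    subprocess.run(["prte", "--daemonize"], check=True)
    time.sleep(2)  # crude wait for the DVM to come up
    results = {}
    try:
        for exe in tests:
            # Each test reuses the already-running daemons, so there is no
            # per-test launcher startup/teardown cost.
            proc = subprocess.run(["prun", "-n", str(np), exe])
            results[exe] = proc.returncode
    finally:
        # Shut the DVM down at the end of the session (assumed command name).
        subprocess.run(["pterm"], check=False)
    return results


if __name__ == "__main__":
    print(run_session(["./ring_c", "./hello_c"]))
```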

Release Branches

Review v3.0.x

Review v3.1.x

  • v3.0.6 and v3.1.6 are hopefully the last on those branches.
  • Removing from weekly meetings.
  • Advise users to move to v4.0.x

Review v4.0.x - Milestones v4.0.4

  • v4.0.4 is in the works.
  • Ralph opened RSH scaling PR 7581.
    • Thomas Naughton signed off on it.
  • PR 7579 - UCX PML
    • UCX OSHMEM - Josh Ladd signed up to have someone review.
  • PMIx v3.1.x - Update 3/23: Until we hear of a problem, we won't backport a PMIx PR or ship another PMIx v3.1.x release.

v5.0.0

  • Schedule:

    • Feature Freeze: April 30
    • Release: End of June
  • Austen took an initial stab at the issues and is starting a Google Sheet of v5.0 features.

  • Updated status in the Google Sheet above.

  • PMIx v4.0.0 - on track

  • PRRTE v2.0 - on track

  • Remove OSC pt2pt - needs some work from PSM2/OFI before we can remove it - at risk

  • Discussed Multithreaded Framework

    • Concerns about some non-POSIX implementations and MPI progress in general.
    • see https://github.com/open-mpi/ompi/pull/6578
    • Consensus that we want the framework / reorganization (using pthreads as the default)
      • Will address a few other PR specific issues before merging.
      • Greater progress issues in various components can be discussed in the future.
  • Issues not tracked on the spreadsheet:

    • Some of the PMIx / PRRTE integration isn't right in Open MPI.
    • libopal isn't slurped into Open MPI correctly (related to issue 7560)
      • Jeff and Brian will meet Friday.

master

  • Hierarchical collectives

    • If someone wants to do this, PMIx already has much of the needed information.
    • Not too hard to do, and they're much faster. They will be in the next version of a competitor MPI (see the sketch after this list).
    • Probably not for v5.0
  • PR 7566 - can't merge until Mellanox CI testing rev.

    • How do we handle this?
    • Link on the PR to the Mellanox HPC repo.
  • Static linking is failing on master right now.

    • Issue 7560
    • May be an issue in static build support in PMIx and PRTE as well as how we're pulling it in.
    • Affects everything, just masked at the moment because static linking is broken.
    • Jeff will investigate
    • No progress.
  • The SLURM PMIx plugin has been locked on PMIx v2 for some time.

      • There are some NEW PMIx calls that SHOULD be added to bring it up to date.
      • Ralph has started a PR, but needs help.
    • So for now, there's some optional info that won't be passed correctly.
      • No OMPI_INFO for now.
      • Ralph gets pinged occasionally.
    • Not sure of the priority of this.
  • MTT on master is looking pretty good.
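
As a rough illustration of the hierarchical-collectives idea above, here is a two-level reduction sketched with mpi4py: reduce within each node first, then combine one partial result per node. This is illustration only; an Open MPI implementation would live in a C coll component and would get the node layout from PMIx rather than from MPI_Comm_split_type, and mpi4py is used here purely for brevity.

```python
"""Two-level (node-then-leader) sum, sketched with mpi4py for illustration."""
from mpi4py import MPI


def hierarchical_sum(value):
    world = MPI.COMM_WORLD
    # Level 1: group the ranks that share a node and reduce locally.
    node = world.Split_type(MPI.COMM_TYPE_SHARED)
    node_sum = node.reduce(value, op=MPI.SUM, root=0)
    # Level 2: one leader per node combines the per-node partial sums.
    color = 0 if node.rank == 0 else MPI.UNDEFINED
    leaders = world.Split(color, key=world.rank)
    total = None
    if node.rank == 0:
        total = leaders.reduce(node_sum, op=MPI.SUM, root=0)
    # World rank 0 is the leader with leader-rank 0, so it now holds the
    # global sum; broadcast it so every rank gets the answer.
    return world.bcast(total, root=0)


if __name__ == "__main__":
    # Every rank prints the total number of ranks in the job.
    print(hierarchical_sum(1))
```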

Face to face

  • Deferred.

Infrastructure

  • Scale testing: PRs have to opt into it.

Review Master Pull Requests

CI status


Dependencies

PMIx Update

  • CI testing only checks that the build succeeds and that it ran, but doesn't test HOW it ran.
    • Environment setup can be a bit different.
    • For example, no permissions in /tmp: a test might pass on one machine and fail on another without /tmp permissions.

ORTE/PRRTE

MTT


Back to 2019 WeeklyTelcon-2019
