
Solver Penalty resets to 0.0 in MPI Parallel runs involving Velocity BCs (Moving Wall) #746

@ArcCambrian

Description


Hi there,

I am encountering a critical issue when running UWGeodynamics models on an HPC cluster in parallel (MPI) mode.

When I apply a specific velocity boundary condition (e.g., a "Moving Wall" / Push-from-rear setup) in a parallel environment, the solver's Penalty parameter appears to reset to 0.000000 during the run, regardless of what I set in the Python script using Model.solver.set_penalty().
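For context, here is a rough sketch of the kind of setup that triggers the problem. This is illustrative only, assuming a typical UWGeodynamics model script; the geometry, resolution, and velocity values are placeholders, and `set_velocityBCs` arguments may differ from my actual model. `Model.solver.set_penalty()` is the call that appears to be ignored in parallel:

```python
import UWGeodynamics as GEO

u = GEO.UnitRegistry

# Hypothetical minimal model (placeholder geometry/resolution)
Model = GEO.Model(elementRes=(64, 64),
                  minCoord=(0. * u.kilometer, 0. * u.kilometer),
                  maxCoord=(100. * u.kilometer, 100. * u.kilometer))

# "Moving wall" / push-from-rear style velocity BC (placeholder values)
Model.set_velocityBCs(left=[1.0 * u.centimeter / u.year, None],
                      right=[0.0 * u.centimeter / u.year, None],
                      bottom=[None, 0.0 * u.centimeter / u.year])

# This penalty is respected in serial, but shows up as 0.000000
# in the solver output when the script is launched under mpirun
Model.solver.set_penalty(1e6)

Model.run_for(nstep=1)
```

In serial the solver log reports the penalty I set; under `mpirun -np N` the same script logs `Penalty: 0.000000`.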

This causes the solver to fall back to a default BSSCR configuration with zero penalty, resulting in a singular matrix (or extremely poor conditioning). The pressure solve iterations skyrocket (e.g., >390 iterations), causing the model to stall or run extremely slowly.
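To make the conditioning argument concrete, here is a toy stand-alone illustration (not UWGeodynamics code, and a deliberate simplification of what the BSSCR penalty does): a penalty term `r * BᵀB` regularises a block that is otherwise singular in the constrained direction, so dropping the penalty to zero leaves a singular system.

```python
# Toy illustration of penalty regularisation: A is singular along one
# direction, and the penalty term r * B^T B restores invertibility.

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def penalized(A, B, r):
    """Return A + r * B^T B for a 2x2 matrix A and a length-2 row vector B."""
    return [
        [A[0][0] + r * B[0] * B[0], A[0][1] + r * B[0] * B[1]],
        [A[1][0] + r * B[1] * B[0], A[1][1] + r * B[1] * B[1]],
    ]

A = [[1.0, 0.0], [0.0, 0.0]]  # singular: no stiffness in the second direction
B = [0.0, 1.0]                # constraint acting on that direction

print(det2(penalized(A, B, 0.0)))  # 0.0 -> singular, the solve breaks down
print(det2(penalized(A, B, 1.0)))  # 1.0 -> regularised, invertible
```

This matches the symptom above: with the penalty silently reset to 0.0, the pressure solve degrades from a well-conditioned problem to a near-singular one, and the iteration count explodes.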

