LB boundary slice and node setters are broken #4859

Closed · jngrad opened this issue Feb 6, 2024 · 3 comments · Fixed by #4864
Labels: Bug, Core, waLBerla (Issues regarding waLBerla integration)

jngrad commented Feb 6, 2024

The lattice model sets up a blockforest::communication::UniformBufferedScheme<typename stencil::D3Q27> communicator to exchange information about the pdfs, forces, velocities, and the flag field:

m_pdf_streaming_communication->addPackInfo(
    std::make_shared<field::communication::PackInfo<FlagField>>(
        m_flag_field_id, n_ghost_layers));

However, the communication doesn't actually exchange the velocity bounce-back flags or normals with neighboring domains. LB boundaries set up via the Python slice setter only update the locally owned domain, so cells in the ghost layer are never marked as boundaries. Boundaries located at the interface with neighboring domains act as sinks: fluid can enter them from neighboring domains, but cannot escape due to the bounce-back condition.

MWE:

import numpy as np
import espressomd
import espressomd.lb
import espressomd.shapes

with_bug = True   # set False to use the shape-based wall instead of the slice setter
with_plot = False  # set True to plot the velocity profile and momentum
AGRID = .25
EXT_FORCE = .1
KINEMATIC_VISC = 2.7
DENS = 1.7

def poiseuille_flow(z, H, ext_force_density, dyn_visc):
    return ext_force_density * 1. / (2 * dyn_visc) * (H**2.0 / 4.0 - z**2.0)

system = espressomd.System(box_l=[9.0, 3.0, 3.0])
system.time_step = 0.07
system.cell_system.skin = 0.4 * AGRID
system.lb = lbf = espressomd.lb.LBFluidWalberla(
    agrid=AGRID, density=DENS, kinematic_viscosity=KINEMATIC_VISC,
    tau=system.time_step, ext_force_density=[0.0, 0.0, EXT_FORCE])

if with_bug:
    lbf[:2, :, :].boundary = espressomd.lb.VelocityBounceBack([0., 0., 0.])
else:
    wall_shape1 = espressomd.shapes.Wall(normal=[1, 0, 0], dist=2 * AGRID)
    lbf.add_boundary_from_shape(wall_shape1)

lb_momentum = [np.linalg.norm(system.analysis.linear_momentum())]
mid_indices = (system.box_l / AGRID / 2).astype(int)
diff = float("inf")
old_val = lbf[mid_indices].velocity[2]
while diff > 0.005:
    system.integrator.run(10)
    lb_momentum.append(np.linalg.norm(system.analysis.linear_momentum()))
    new_val = lbf[mid_indices].velocity[2]
    diff = abs(new_val - old_val)
    old_val = new_val

for _ in range(200):
    system.integrator.run(10)
    lb_momentum.append(np.linalg.norm(system.analysis.linear_momentum()))

v_measured = [np.mean(lbf[i, :, :].velocity[:, :, 2]) for i in range(lbf.shape[0])]
v_expected = poiseuille_flow(
    (np.arange(lbf.shape[0]) + 0.5) * AGRID - (0.5 * system.box_l[0] + AGRID),
    system.box_l[0] - 2.0 * AGRID, EXT_FORCE, KINEMATIC_VISC * DENS)

if with_plot:
    import matplotlib.pyplot as plt
    plt.plot(v_expected[2:], "-", label="Theory")
    plt.plot(v_measured[2:], "o", label="Simulation")
    plt.xlabel("Channel position")
    plt.ylabel("Fluid velocity")
    plt.legend()
    plt.show()
    plt.plot(lb_momentum)
    plt.xlabel("Simulation time")
    plt.ylabel("Fluid momentum")
    plt.show()

print(f"{system.cell_system.node_grid=}")
np.testing.assert_allclose(v_measured[2:], v_expected[2:], rtol=5E-5)

Execution:

mpiexec -n 2 ./pypresso mwe.py

Output: plot of the Poiseuille flow with the bug. The simulation data points do not match the analytical solution: the velocity magnitude is lower than expected everywhere, and non-zero at the right boundary (on the second MPI rank).

For longer simulation times, the fluid velocity keeps shrinking in magnitude, even though the total fluid momentum remains constant. This bug only affects simulations running on 2 or more MPI ranks.

LB boundaries set up from shapes are not affected by this bug: boundary conditions are evaluated from the shape for all grid points, ghost layer included. The slice setter behaves differently: the slice is subdivided into smaller chunks according to the Cartesian topology, in an effort to reduce the communication overhead for large slices, as illustrated by the sketch below.
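
To make the chunking concrete, here is a minimal standalone sketch (plain numpy, not ESPResSo code; the helper local_chunk and its signature are hypothetical). With 36 grid points along x split over 2 ranks, the slice lbf[:2, :, :] from the MWE is intersected with each rank's local domain, so only rank 0 writes any boundary flags:

# Standalone sketch of the slice chunking, assuming a 1D Cartesian
# decomposition along x. The helper is illustrative, not ESPResSo code.
import numpy as np

def local_chunk(x_range, rank, n_ranks, n_cells):
    """Intersect a global x-range with the cells owned by this rank."""
    n_local = n_cells // n_ranks
    lo, hi = rank * n_local, (rank + 1) * n_local
    return np.arange(max(x_range.start, lo), min(x_range.stop, hi))

# box_l[0] = 9.0, agrid = 0.25 -> 36 cells; lbf[:2, :, :] covers x = 0, 1
print(local_chunk(range(0, 2), rank=0, n_ranks=2, n_cells=36))  # [0 1]
print(local_chunk(range(0, 2), rank=1, n_ranks=2, n_cells=36))  # [] -> nothing set
# Rank 1 never flags its ghost copies of cells x = 0, 1, so across the
# periodic boundary the wall is invisible to rank 1 until the boundary
# information is also exchanged in a ghost communication.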

jngrad added the Bug, Core, and waLBerla labels on Feb 6, 2024

jngrad commented Feb 7, 2024

Update: the flag field is actually properly communicated, but we never wrote a communicator for the bounce-back velocity map, and the slice update function doesn't automatically recompute the IndexInfo vectors when only the ghost layer of the flag field is modified.

jngrad self-assigned this on Feb 7, 2024

jngrad commented Feb 8, 2024

The node setters have the exact same bug. This is an issue for circular Couette flow, which is set up via node setters, following the pattern sketched below.
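
For reference, a node-setter boundary follows the pattern below (a hedged sketch reusing lbf and AGRID from the MWE above; the cylinder geometry and angular velocity are illustrative, not taken from an actual Couette script):

# Sketch of a node-setter boundary setup, the code path affected by this
# bug: a rotating outer cylinder marked node by node with a tangential
# bounce-back velocity. Assumes `lbf` and `AGRID` from the MWE above.
import espressomd.lb

cx, cy = lbf.shape[0] // 2, lbf.shape[1] // 2
radius = min(cx, cy) - 2  # illustrative cylinder radius in grid units
omega = 0.01              # illustrative angular velocity

for i in range(lbf.shape[0]):
    for j in range(lbf.shape[1]):
        x, y = i - cx, j - cy
        if x**2 + y**2 >= radius**2:
            # tangential velocity u = omega x r of the rotating cylinder
            u = [-omega * y * AGRID, omega * x * AGRID, 0.]
            for k in range(lbf.shape[2]):
                lbf[i, j, k].boundary = espressomd.lb.VelocityBounceBack(u)
# Each assignment only updates the MPI rank that owns node (i, j, k);
# without a subsequent ghost communication, the neighboring ranks' ghost
# layers still see these nodes as fluid.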

jngrad changed the title from "LB boundary slice setter is broken" to "LB boundary slice and node setters are broken" on Feb 8, 2024

jngrad commented Feb 15, 2024

Here is my understanding so far: we must always run a full ghost communication after each call to a node setter or slice setter. This is already the case for the velocity, force, flag, and population fields, but not for the UBB map.

We cannot instead introduce a flag that tracks pending changes from setter functions and triggers a ghost communication whenever a getter is called: this would break the constness of the getters, and it would require a mechanism to synchronize the state of the flag between all MPI ranks, which is tedious since we don't include MPI header files. In addition, there are valid use cases for reading the state of the ghost layer before the ghost update, for example during particle coupling, where the particle force is interpolated on the fluid while the fluid force is interpolated on the particle.

For completeness, here is how the pending status flag would have worked:

class LBWalberlaImpl {
  /** Flags for pending ghost layer communication. */
  enum PendingGhostLayerCommunicationFlag : unsigned int {
    /** No pending field update. */
    PENDING_COMM_NONE = 0u,
    /** Pending changes to pdf/velocity/force fields. */
    PENDING_COMM_GHOST = 1u,
    /** Pending changes to boundary fields. */
    PENDING_COMM_BOUNDARY = 2u,
  };

  unsigned int m_pending_changes = PENDING_COMM_NONE;

public:
  void ghost_communication() override {
    if (m_pending_changes & PENDING_COMM_BOUNDARY) {
      reallocate_ubb_field();
      ghost_communication_boundary();
      ghost_communication_pdfs();
    } else if (m_pending_changes & PENDING_COMM_GHOST) {
      ghost_communication_pdfs();
    }
  }

  void ghost_communication_boundary() {
    m_boundary_communicator->communicate();
    m_pending_changes &= ~PENDING_COMM_BOUNDARY;
  }

  void ghost_communication_pdfs() {
    m_pdfs_communicator->communicate();
    m_pending_changes &= ~PENDING_COMM_GHOST;
  }

  bool set_node_velocity(Utils::Vector3i const &node,
                         Utils::Vector3d const &vel) {
    m_pending_changes |= PENDING_COMM_GHOST;
    auto bc = get_block_and_cell(get_lattice(), node, false);
    if (bc) {
      /* ... */
    }
    return bc.has_value();
  }
};

kodiakhq bot closed this as completed in #4864 on Feb 26, 2024
kodiakhq bot added a commit that referenced this issue on Feb 26, 2024:
Fixes #4859, fixes #4855

Description of changes:
- bugfix:
   - LB boundaries are now properly communicated in the ghost layer
   - see details in #4859
- performance:
   - the LB flag field is no longer communicated at every time step
   - the LB UBB field is no longer recalculated at every time step
   - LB boundary setters (node, slice, shape) now always trigger a full ghost communication
- maintainability:
   - the waLBerla header files are no longer visible in the ESPResSo core and script interface
   - the Boost dependency is now checked at the CMake level to prevent building broken ESPResSo shared libraries (see details in #4856)