Decrease in application performance over time; occasional spikes of major slowdown #1008
HPX applications running distributed at scale with the MPI parcelport experience a linear degradation in performance as application runtime increases. In addition, there are periodic spikes of massive slowdown.
This can be demonstrated by running the future_hang_on_get_629 regression test with the MPI parcelport like so:
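The exact invocation was not captured above; a sketch of how such a run might look (the binary name, process count, and MPI-enable setting are assumptions based on common HPX conventions, not the reporter's actual command):

```shell
mpirun -np 16 ./future_hang_on_get_629_test --hpx:ini=hpx.parcel.mpi.enable=1
```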
This regression test spawns a recursive tree (with a configurable number of children per node; here we use the default of 8) and calls the standard busy-work function null_function on each node of the tree. In this case the tree is 5 levels deep (analogous to 5 levels of refinement). We run this test for a fixed number of iterations (i.e. timesteps).
The behavior demonstrated by this test case is very similar to the performance issues that the Octopus 3D torus simulation encounters. Here are some graphs demonstrating the problem. They plot the timestep speed (timesteps/second) for each timestep, i.e. the instantaneous speed. Note that the first two graphs are log-scaled.
Since you explicitly mention the MPI parcelport, does this happen with the TCP parcelport as well?