Decrease in application performance over time; occasional spikes of major slowdown #1008

Closed
brycelelbach opened this issue Nov 10, 2013 · 9 comments

@brycelelbach
Member

HPX applications running distributed at scale with the MPI parcelport experience a linear degradation in performance as application runtime increases. In addition, there are periodic spikes of massive slowdown.

This can be demonstrated by running the future_hang_on_get_629 regression test with the MPI parcelport like so:

mpirun -x LD_LIBRARY_PATH="$LD_LIBRARY_PATH" -machinefile machinefile -bynode -np 16 /home/wash/install/octopus/intel-13.0.1-release/bin/future_hang_on_get_629_test --verbose --depth=5 --test-runs=0

This regression test spawns a recursive tree (with a configurable number of children per node; here we use the default of 8) and calls the busy-work function null_function on each node of the tree. In this case the tree is 5 levels deep (analogous to 5 levels of refinement). We run the test for a given number of iterations (i.e. timesteps).
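
For reference, a minimal C++ sketch of that spawning pattern (this is not the actual future_hang_on_get_629 test, just an illustration of a depth-limited tree of hpx::async calls with 8 children per node):

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/async.hpp>
    #include <hpx/include/lcos.hpp>

    #include <vector>

    // Stand-in for the busy-work action invoked at every node of the tree.
    void null_function() {}

    // Recursively spawn a tree of HPX threads, `children` wide and `depth`
    // levels deep, calling null_function at each node.
    void spawn_tree(int depth, int children)
    {
        null_function();
        if (depth == 0)
            return;

        std::vector<hpx::future<void>> futures;
        futures.reserve(children);
        for (int i = 0; i != children; ++i)
            futures.push_back(hpx::async(&spawn_tree, depth - 1, children));
        hpx::wait_all(futures);
    }

    int main()
    {
        // One "timestep": a tree 5 levels deep with the default of 8 children.
        spawn_tree(5, 8);
        return 0;
    }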

The behavior demonstrated by this test case is very similar to the performance issues that the Octopus 3D torus simulation encounters. Here are some graphs demonstrating the problem. They plot the timestep speed (timesteps/second) for each timestep, i.e. the instantaneous speed. Note that the first two graphs are log-scaled.

[Graphs: 629 regression test (log-scaled); Octopus (log-scaled); 629 regression test; Octopus]

@sithhell
Member

Since you explicitly mention the MPI parcelport, does this happen with the TCP parcelport as well?
Could you please try increasing the maximum number of requests for the MPI parcelport (hpx.parcel.mpi.max_requests)? The default is 256, which might be too low for your application.
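
As a sketch, the setting can be raised via HPX's usual configuration mechanisms; the value 2048 is just an arbitrary example, and the exact ini handling may differ between HPX versions:

    # on the command line, via --hpx:ini
    future_hang_on_get_629_test --verbose --depth=5 --test-runs=0 --hpx:ini=hpx.parcel.mpi.max_requests=2048

    # or in an HPX configuration (.ini) file
    [hpx.parcel.mpi]
    max_requests = 2048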

@brycelelbach
Member Author

The problem does not appear to show up with the TCP parcelport, but I'm not sure; the TCP parcelport is significantly slower, which makes it harder to gather enough data to draw any conclusions.

I'll try the max_requests thing.

@brycelelbach
Member Author

The max_requests suggestion fixes neither issue.

@hkaiser
Member

hkaiser commented Jan 29, 2014

Some preliminary analysis showed that this is probably caused by a growing number of allocated stack segments. It is still unclear why/when this is happening.
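
One way to check that hypothesis (a sketch, not something from this thread; counter names vary between HPX versions, so verify them with --hpx:list-counters first) is to sample HPX's thread-related performance counters while the test runs:

    # list the available counters and pick the thread/stack related ones
    future_hang_on_get_629_test --hpx:list-counters

    # e.g. print the cumulative HPX-thread count once per second during the run
    future_hang_on_get_629_test --verbose --depth=5 --test-runs=0 \
        --hpx:print-counter=/threads{locality#0/total}/count/cumulative \
        --hpx:print-counter-interval=1000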

@hkaiser hkaiser closed this as completed Mar 25, 2014
@hkaiser hkaiser reopened this Mar 25, 2014
@hkaiser hkaiser modified the milestones: 0.9.9, 0.9.8 Mar 25, 2014
@sithhell
Member

sithhell commented Jun 3, 2014

Is this still a problem?

@brycelelbach
Member Author

Yes

@hkaiser hkaiser modified the milestones: 1.0.0, 0.9.9 Sep 13, 2014
@sithhell
Member

Do we still have this problem?

@hkaiser
Member

hkaiser commented Feb 24, 2015

I'm pretty sure this is caused by memory fragmentation increasing over time. I don't have any other explanation, so I'm not sure we can do anything about this.

@hkaiser
Member

hkaiser commented Nov 7, 2015

This has been fixed. Please reopen if appropriate (see #1753)

@hkaiser hkaiser closed this as completed Nov 7, 2015