Benchmarks #54

Closed
pausz opened this issue May 10, 2016 · 1 comment
Comments

pausz (Contributor) commented May 10, 2016

  • Automate benchmark execution. A script will take a .conf file and save information about execution times, memory usage, hardware used, etc. (use this branch). A sketch of such a driver follows this list.

  • Write benchmark results to file.
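
A minimal sketch of such a driver, assuming the simulator is invoked as ./bin/nftsim -i <config> -o <output> (the binary name and flags are assumptions, not confirmed by this issue) and using only the Python standard library:

```python
#!/usr/bin/env python
"""Hypothetical benchmark driver: run one .conf file and append timing,
memory, and basic hardware information to a results file."""
import json
import os
import platform
import subprocess
import sys
import time

def run_benchmark(conf_file, binary="./bin/nftsim"):
    """Run one simulation and return a dict of measurements."""
    start = time.time()
    proc = subprocess.Popen([binary, "-i", conf_file, "-o", os.devnull])
    _, status, rusage = os.wait4(proc.pid, 0)   # child resource usage
    return {
        "conf": conf_file,
        "exit_status": os.WEXITSTATUS(status),
        "wall_time_s": round(time.time() - start, 3),
        "user_time_s": rusage.ru_utime,
        "sys_time_s": rusage.ru_stime,
        "max_rss_kb": rusage.ru_maxrss,   # kilobytes on Linux, bytes on macOS
        "hardware": {
            "machine": platform.machine(),
            "processor": platform.processor(),
            "system": platform.platform(),
        },
    }

if __name__ == "__main__":
    result = run_benchmark(sys.argv[1])
    with open("benchmark_results.json", "a") as fh:   # write results to file
        fh.write(json.dumps(result) + "\n")
```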

@pausz pausz self-assigned this May 10, 2016
@pausz pausz modified the milestone: Mark I Sep 19, 2016
@pausz pausz assigned jchrispang and unassigned jchrispang Sep 22, 2016
@pausz pausz modified the milestones: Mark II, Mark I - tech Feb 8, 2017
pausz (Contributor, Author) commented Aug 16, 2017

Performance experiments have to evaluate three aspects of the software:

  1. Speed
  2. Memory usage
  3. Accuracy

1. Speed

The most straightforward way to evaluate speed is to run sequences of simulations for different sizes of the input (i.e., the model). The simulation parameters that dominate the overall size are listed below, followed by a sketch of a parameter sweep:

  • number of nodes
  • time integration step
  • length of the simulation
  • number of wave propagators (i.e., use Stencil)
  • output size
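
As an illustration, a speed sweep could be generated from a template .conf file. The field names below (Nodes, Deltat, Time) and the placeholder-based templating are assumptions about the configuration format, not taken from the repository:

```python
import itertools

# Hypothetical template: assumed to contain Python format placeholders for
# the parameters that dominate run time. The rest of the model definition
# would be pasted in from one of the known .conf files.
TEMPLATE = "Time: {length} Deltat: {deltat} Nodes: {nodes}\n"

def speed_variants():
    """Yield (label, conf_text) pairs, one per combination of input sizes."""
    for nodes, deltat, length in itertools.product(
            [64, 256, 1024, 4096],   # number of nodes
            [1e-4, 1e-3],            # time integration step (s)
            [10, 60]):               # length of the simulation (s)
        label = "n{}_dt{}_t{}".format(nodes, deltat, length)
        yield label, TEMPLATE.format(nodes=nodes, deltat=deltat, length=length)

# Each variant would be written to its own .conf file and passed to the
# driver sketched above, e.g. run_benchmark("bench_<label>.conf").
```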

2. Memory usage

The parameters that will dominate memory usage are listed below; a back-of-the-envelope estimate follows the list.

  • total number of nodes (proportional to number of nodes per population × number of populations)
  • number of integration steps (proportional to simulation length / integration time step)
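
A rough estimate under assumed data sizes (8-byte doubles, one state variable per node per population, and a history buffer whose depth is set by the longest propagation delay) could look like this; all of the numbers are illustrative assumptions:

```python
def estimate_memory_mb(nodes, populations, history_steps, doubles_per_node=1):
    """Rough peak-memory estimate in MB. Assumes 8-byte doubles and that the
    dominant cost is the per-population activity history kept for delays."""
    total_bytes = nodes * populations * history_steps * doubles_per_node * 8
    return total_bytes / 2.0**20

# e.g. 4096 nodes, 5 populations, 10 ms maximum delay at dt = 1e-4 s
# gives a 100-step history and roughly 15.6 MB for the history buffers.
print(estimate_memory_mb(4096, 5, 100))
```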

3. Accuracy

TBD

Additional nf_benchmarks functionality

  • Save additional timing data such as user time, sys time, and %CPU usage.
  • Add a variable NUMBER_OF_PROPAGATORS to record the number of wave propagators (no strong feelings about being this specific, but the wave propagator has the most resource-consuming algorithm).
  • Dump info about the output size (INTERVAL). This would be easier if Output had a 'None' mode.
  • Read an input number of trials (e.g., the number of times to execute a particular config file, default value = 1) so that we can run nf_benchmarks --to-mem <config_filename> --num-trials 10. A sketch of the trials loop follows this list.
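
A minimal sketch of the repeated-trials loop, reusing the hypothetical run_benchmark() driver sketched earlier in this thread and computing %CPU the way time(1) does:

```python
def run_trials(conf_file, num_trials=1):
    """Run the same config num_trials times (the proposed --num-trials flag)
    and attach user/sys time and %CPU to each result."""
    results = []
    for _ in range(num_trials):
        r = run_benchmark(conf_file)   # hypothetical driver sketched above
        wall = r["wall_time_s"]
        cpu = r["user_time_s"] + r["sys_time_s"]
        r["cpu_percent"] = round(100.0 * cpu / wall, 1) if wall else 0.0
        results.append(r)
    return results
```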

Other related issues

  1. Tracking benchmark results.
    The decision was not to keep benchmark results under git. Results on different platforms will be added to the wiki page.
  2. Select a collection of files that will be the default set when invoking nf_benchmarks.
    • We will use two known models with different numbers of populations (ei, eirs).
    • Produce variations of them by changing: number of nodes, simulation length, number of wave propagators, and output size. That gives us a total of 10 default benchmark files. Execution of all the files should take under 1 h. (An enumeration sketch follows this list.)
  3. Remove current benchmark files and move them to the paper repo. They belong there. Run the benchmarks using the new nf_benchmarks utility.
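
For illustration, the ten default files could be enumerated as two base models times five single-parameter variations; the specific values below are placeholder assumptions chosen only to keep the full run well under an hour:

```python
import itertools

BASE_MODELS = ["ei", "eirs"]   # the two known models named in this issue
VARIATIONS = [                 # one parameter varied per file (assumed values)
    {"nodes": 256},
    {"nodes": 4096},
    {"length": 60},
    {"propagators": 4},
    {"interval": 1e-3},
]

def default_benchmarks():
    """Yield file names such as 'ei_nodes-4096.conf' for the default set."""
    for model, variation in itertools.product(BASE_MODELS, VARIATIONS):
        key, value = next(iter(variation.items()))
        yield "{}_{}-{}.conf".format(model, key, value)

print(len(list(default_benchmarks())))   # 10 default benchmark files
```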
