Performance experiments have to evaluate three aspects of the software:
Speed
Memory usage
Accuracy
1. Speed
The most straightforward way to evaluate speed is to run sequences of simulations for different sizes of the input (i.e., the model); a timing sketch follows the list below. The simulation parameters that dominate the overall size are:
number of nodes
time integration step
length of the simulation
number of wave propagators (i.e., use Stencil)
output size
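A minimal timing sketch of such a sweep is below. The binary path, command-line flags, and config file names are assumptions for illustration, not the actual nf_benchmarks implementation.

```python
# Hypothetical sweep: time runs of configs that differ only in the number of
# nodes. Binary location, flags, and config names are assumptions.
import subprocess
import time

NF_BIN = "./bin/nftsim"                      # assumed binary path
CONFIGS = ["eirs_4000nodes.conf",            # hypothetical config variations
           "eirs_8000nodes.conf",
           "eirs_16000nodes.conf"]

for conf in CONFIGS:
    start = time.perf_counter()
    subprocess.run([NF_BIN, "-i", conf, "-o", "/dev/null"], check=True)
    print(f"{conf}: {time.perf_counter() - start:.2f} s wall-clock")
```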
2. Memory usage
The parameters that will dominate memory usage are (a rough estimate sketch follows this list):
total number of nodes (proportional to number of nodes × number of populations)
number of integration steps (proportional to simulation length / integration time step)
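As a crude illustration of how these two parameters combine, the sketch below assumes one 8-byte double stored per node per integration step; the real footprint depends on how much history each propagator and output keeps, so this is only a back-of-envelope bound.

```python
# Back-of-envelope estimate only; assumes one 8-byte double stored per node
# per integration step, which simplifies the real storage pattern.
def estimate_memory_bytes(nodes, populations, sim_length, dt, doubles_per_node=1):
    steps = int(sim_length / dt)        # number of integration steps
    total_nodes = nodes * populations   # total nodes across all populations
    return total_nodes * steps * doubles_per_node * 8

# Example: 144 nodes x 4 populations, 60 s simulated at dt = 1e-4 s
print(estimate_memory_bytes(nodes=144, populations=4, sim_length=60, dt=1e-4) / 1e9,
      "GB (rough)")
```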
3. Accuracy
TBD
Additional nf_benchmarks functionality
Save additional timing information, such as user and sys times and %CPU usage (see the sketch after this list).
Add a variable NUMBER_OF_PROPAGATORS to record the number of wave propagators (not feeling strongly about being too specific, but the wave propagator has the most resource-consuming algorithm).
Dump info about output size (INTERVAL). This would be easier if Output had a 'None' mode.
Read an input number of trials (e.g., the number of times to execute a particular config file; default value = 1) so that we can run nf_benchmarks --to-mem <config_filename> --num-trials 10.
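A sketch of how the extra timing fields and the --num-trials option could be gathered is below. This Python version, along with its binary path and flags, is only illustrative of the idea, not of how nf_benchmarks is implemented.

```python
# Illustrative only: measures wall, user, and sys time plus %CPU for each of
# several trials of one config. Binary path and flags are assumptions.
import resource
import subprocess
import time

def run_trial(conf, nf_bin="./bin/nftsim"):
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    start = time.perf_counter()
    subprocess.run([nf_bin, "-i", conf, "-o", "/dev/null"], check=True)
    wall = time.perf_counter() - start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    user = after.ru_utime - before.ru_utime
    sys_ = after.ru_stime - before.ru_stime
    return {"wall": wall, "user": user, "sys": sys_,
            "pct_cpu": 100.0 * (user + sys_) / wall}

def benchmark(conf, num_trials=1):          # default of one trial, as proposed
    return [run_trial(conf) for _ in range(num_trials)]

# Roughly the equivalent of: nf_benchmarks --to-mem <config_filename> --num-trials 10
# results = benchmark("eirs.conf", num_trials=10)
```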
Other related issues
Tracking benchmark results.
The decision was not to keep benchmark results under git. Results on different platforms will be added to the wiki page.
Select a collection of files that will be the default set when invoking nf_benchmarks.
We will use two known models with different numbers of populations (ei, eirs).
Produce variations of them by changing: number of nodes, simulation length, number of wave propagators, and output size. That gives us a total of 10 default benchmark files. Execution of all the files should take under 1 h.
Remove the current benchmark files and move them to the paper repo; they belong there. Run the benchmarks using the new nf_benchmarks utility.
Automate benchmark execution. A script will take a .conf file and save information about execution times, memory, hardware used, etc. (use this branch); a sketch follows at the end of this list.
Write benchmark results to a file.
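A minimal sketch of that automation, under the same assumed binary path and flags as above: it runs one .conf file and appends one record per run with times, peak memory, and basic hardware info. The field names and output format are illustrative, not the actual nf_benchmarks behaviour.

```python
# Sketch of the proposed automation: run one .conf file, then append timing,
# peak-memory, and hardware info to a results file. All names are illustrative.
import json
import platform
import resource
import subprocess
import time

def benchmark_conf(conf, nf_bin="./bin/nftsim",
                   results_file="benchmark_results.jsonl"):
    start = time.perf_counter()
    subprocess.run([nf_bin, "-i", conf, "-o", "/dev/null"], check=True)
    wall = time.perf_counter() - start
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    record = {
        "config": conf,
        "wall_time_s": wall,
        "user_time_s": usage.ru_utime,
        "sys_time_s": usage.ru_stime,
        "peak_rss_kb": usage.ru_maxrss,     # kilobytes on Linux
        "machine": platform.machine(),
        "processor": platform.processor(),
        "platform": platform.platform(),
    }
    with open(results_file, "a") as f:      # one JSON record per line
        f.write(json.dumps(record) + "\n")
    return record
```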