
Create a benchmark utility #12

Closed
jmgq opened this issue Oct 19, 2014 · 1 comment · Fixed by #14

jmgq commented Oct 19, 2014

Inspired by the script provided by @pathway in #5 (examples/Terrain/scaletest01.php), I think it would be a good idea to have a benchmark utility that works in more general cases. The benchmark utility should be able to:

  1. Use any terrain defined in a text file, as opposed to a hard-coded terrain (see the terrain helpers sketched after this list).
  2. Customize the for loop (e.g. the terrain size increase per loop iteration).
  3. Adjust the terrain on each iteration of the for loop. The script in Scale #5 always uses the full terrain array, but I think that the main terrain should be cropped to the desired size on each iteration.
  4. Create a terrain generator. It should accept a size and a destination file path as parameters, as well as the minimum and maximum tile cost.
  5. As an alternative (or complement) to item 1, use a random terrain. The downside is that we wouldn't be able to use the same terrain in two different benchmarks, so in this case the for loop should be executed several times and averages displayed. Alternatively, it could accept the RNG's seed as an optional parameter; that way the same terrain can be generated in different benchmarks, and there is no need to run the benchmark several times or calculate averages.
  6. Use the symfony/Console component, as it is a very convenient library for creating console commands, including utilities like progress bars, tables, and so on.
  7. After the benchmark is run, display the used configuration and a table with the results (a command skeleton is also sketched below).
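
A minimal sketch of items 1, 3, 4, and 5. The file format (one terrain row per line, comma-separated integer tile costs) and the function names are assumptions for illustration, not existing code:

```php
<?php

// Assumed file format: one terrain row per line, comma-separated integer tile costs.

/**
 * Generates a $size x $size terrain of random tile costs and writes it to $destination.
 * Passing the same $seed reproduces the same terrain across benchmark runs (item 5).
 */
function generateTerrain($size, $destination, $minCost, $maxCost, $seed = null)
{
    if ($seed !== null) {
        mt_srand($seed); // Seed the RNG so the terrain is reproducible
    }

    $lines = array();
    for ($row = 0; $row < $size; $row++) {
        $costs = array();
        for ($column = 0; $column < $size; $column++) {
            $costs[] = mt_rand($minCost, $maxCost);
        }
        $lines[] = implode(',', $costs);
    }

    file_put_contents($destination, implode("\n", $lines));
}

/**
 * Reads a terrain file back into a two-dimensional array of costs (item 1).
 */
function loadTerrain($path)
{
    $terrain = array();
    foreach (file($path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $terrain[] = array_map('intval', explode(',', $line));
    }

    return $terrain;
}

/**
 * Crops the full terrain to a $size x $size sub-terrain for the current iteration (item 3).
 */
function cropTerrain(array $terrain, $size)
{
    $cropped = array_slice($terrain, 0, $size);

    return array_map(function (array $row) use ($size) {
        return array_slice($row, 0, $size);
    }, $cropped);
}
```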
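
And for items 6 and 7, a rough command skeleton using symfony/Console (the `ProgressBar` and `Table` helpers are available as of Symfony 2.5). The class name, argument name, and `runBenchmark()` helper are placeholders, not existing code:

```php
<?php

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Helper\ProgressBar;
use Symfony\Component\Console\Helper\Table;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class BenchmarkCommand extends Command
{
    protected function configure()
    {
        $this
            ->setName('benchmark')
            ->setDescription('Benchmarks the A* algorithm against a terrain file')
            ->addArgument('terrainFile', InputArgument::REQUIRED, 'Path to the terrain file');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // loadTerrain() and cropTerrain() are the helpers from the previous sketch;
        // runBenchmark() is a placeholder for however a single run is executed.
        $terrain = loadTerrain($input->getArgument('terrainFile'));
        $maxSize = count($terrain);

        $progress = new ProgressBar($output, $maxSize);
        $progress->start();

        $results = array();
        for ($size = 1; $size <= $maxSize; $size++) {
            $results[] = runBenchmark(cropTerrain($terrain, $size));
            $progress->advance();
        }

        $progress->finish();
        $output->writeln('');

        // Item 7: display the results as a table once the benchmark has run
        $table = new Table($output);
        $table->setHeaders(array('Size', 'Time (s)', 'Peak memory (MB)'));
        foreach ($results as $result) {
            $table->addRow($result);
        }
        $table->render();
    }
}
```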

pathway commented Oct 20, 2014

Exciting!

For each run, we can then report things like:

  • memory usage (I'm a bit concerned about that right now)
  • timing
  • number of nodes expanded

...and any other metrics.

This will give us excellent visibility into the effects of any proposed changes to the algorithm or data structures.
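
A minimal sketch of how the first two metrics could be captured with plain PHP. `$algorithm->run()` is a placeholder for however the A* implementation is actually invoked, and the node count would need a counter inside the algorithm itself:

```php
<?php

// Time a single run with microtime() and record peak memory afterwards.
$startTime = microtime(true);

$path = $algorithm->run($start, $goal); // hypothetical invocation

$elapsedSeconds = microtime(true) - $startTime;
$peakMemoryMegabytes = memory_get_peak_usage(true) / (1024 * 1024);

// Counting expanded nodes would require instrumenting the algorithm itself,
// e.g. incrementing a counter every time a node is taken off the open list.
```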
