
Add Benchmark results folder #1

Closed · larsiusprime opened this issue Aug 20, 2015 · 1 comment

@larsiusprime (Collaborator)

Each test should have a results folder

Frameworks_test\Bunnymark\results

and the same for any future benchmarks.

Each results folder should have a file for each framework:

openfl.md
kha.md
nme.md
luxe.md

Etc.

Then, people should run benchmarks and add their results to the relevant file in a standard format.

I'm thinking of something simple like this for BunnyMark (the assumption being the greatest number of bunnies you can sustain while maintaining 60 FPS), perhaps in JSON format?

{
  "user":"larsiusprime",
  "target":"windows",
  "mode":"release",
  "bunnies":50000,
  "systemdata":
  {
    "OS":"Windows 7 SP1",
    "RAM":"8387064 KB (8 GB)",
    "CPU":"Intel(R) Core(TM)2 Duo CPU     E7400  @ 2.80GHz",
    "GPU":"ATI Radeon HD 4800 Series, driver v. 8.970.100.1100"
  },
  "active_displays":1,
  "resolution":[1920,1080],
  "no_other_programs_running":true
}

My crashdumper library has system scanning utilities, btw:
https://github.com/larsiusprime/crashdumper
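To make the "standard format" idea concrete, here is a minimal sketch of a checker for result entries. The field names are taken from the example entry above; the schema itself is only a proposal at this point, so treat the key sets below as assumptions rather than a fixed spec.

```python
import json

# Field names taken from the proposed example entry; the schema is
# only a proposal, so these sets are an assumption.
REQUIRED_KEYS = {
    "user", "target", "mode", "bunnies", "systemdata",
    "active_displays", "resolution", "no_other_programs_running",
}
REQUIRED_SYSTEM_KEYS = {"OS", "RAM", "CPU", "GPU"}


def validate_result(entry):
    """Return a list of problems found in one result entry (empty = OK)."""
    problems = []
    missing = REQUIRED_KEYS - set(entry)
    if missing:
        problems.append("missing keys: " + ", ".join(sorted(missing)))
    missing_sys = REQUIRED_SYSTEM_KEYS - set(entry.get("systemdata", {}))
    if missing_sys:
        problems.append("missing systemdata keys: " + ", ".join(sorted(missing_sys)))
    bunnies = entry.get("bunnies")
    if not isinstance(bunnies, int) or bunnies <= 0:
        problems.append("bunnies should be a positive integer")
    if entry.get("resolution") != [1920, 1080]:
        problems.append("non-standard resolution; note the deviation in the report")
    return problems


def load_result(path):
    """Load one JSON result entry from a file."""
    with open(path) as f:
        return json.load(f)
```

A CI hook could run `validate_result` over every entry submitted to a results folder and reject malformed ones before merge.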

@larsiusprime (Collaborator · Author)

ALSO!

Inside the results folder should be a README.md that explains quite explicitly what results we are looking for and what circumstances.

In BunnyMark, this should require:

  1. The maximum number of bunnies you can sustain while maintaining 60 FPS.
  2. Close ALL other applications and windows while running the benchmark.
  3. Disable all but one monitor/display.
  4. Set your monitor's resolution to 1920x1080 if possible; otherwise note the deviation in your report.

Also, all the BunnyMarks should display in the same-sized window if possible, and details like this should be standardized across benchmarks if we're going to get meaningful results.
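Pulling the requirements above together, the results README might look something like this (a sketch only; the exact wording and filenames are up for discussion):

```markdown
# BunnyMark Results

Report the maximum number of bunnies you can sustain at 60 FPS.

Before running the benchmark:

1. Close ALL other applications and windows.
2. Disable all but one monitor/display.
3. Set your display resolution to 1920x1080 if possible;
   otherwise note the deviation in your report.

Add your result as a JSON entry to the file for the framework
you tested (openfl.md, kha.md, nme.md, luxe.md, etc.).
```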
