45216 continuousbenchmarking 20160926 asb 4 #2292
Conversation
knewmanTE and others added some commits Aug 1, 2016
ashawnbandy-te-tfb added some commits Sep 26, 2016
ashawnbandy-te-tfb added the Enhance: Toolset label Sep 26, 2016
ashawnbandy-te-tfb (Contributor) commented Sep 26, 2016

Also, I will defer the work related to @knewmanTE's PR #2283 until after it has been merged into master.
knewmanTE merged commit 8ce974a into TechEmpower:master Sep 27, 2016

1 check was pending: continuous-integration/travis-ci/pr (The Travis CI build is in progress)
Merge'd!
ashawnbandy-te-tfb commented Sep 26, 2016
The work in this PR introduces a basic facility for continuous benchmarking. run-continuously is essentially a while loop that makes 'life-cycle' calls in addition to initiating each benchmarking run. An example of an upstart process configuration is also included.
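For context, an upstart job for a script like this would follow the usual respawning-service pattern. The sketch below is purely illustrative: the job name, paths, and runlevels are assumptions, not the configuration file shipped in this PR.

```
# Hypothetical upstart job, e.g. /etc/init/tfb-benchmarks.conf.
# The install path of run-continuously.sh is an assumption.
description "continuous framework benchmarking"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /home/tfb/FrameworkBenchmarks/toolset/run-continuously.sh
```

With `respawn`, upstart restarts the loop if it ever exits, which suits a benchmarking run that is meant to repeat indefinitely.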
Below are some relevant notes from a recent commit:
toolset/run-continuously.sh is simply a while loop that removes and rebuilds the framework benchmark suite and then runs the benchmarks.

There are five environment variables to be set:

- the repository

There are four life cycle stages in run-continuously:

- environment
- before the benchmarks are run
- after the benchmarks are run

run-continuously.sh generally assumes that a clone of the appropriate repo and branch exists, with the listed scripts available and an appropriate copy of benchmark.cfg in place. Some effort is made to support starting states that differ, but those are not intended to be the general case.
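The shape of the loop described in these notes can be sketched as follows. This is a minimal illustration only: the stage names mirror the notes above, but the function names, variables, and commands are hypothetical, not the actual script's. `MAX_ITERATIONS` exists only so the sketch terminates; the real loop runs indefinitely.

```shell
#!/bin/sh
# Hypothetical sketch of a run-continuously style loop.
# Each echo stands in for real work (rebuilding the suite,
# running the benchmark toolset, post-run scripts).

run_cycle() {
  echo "stage: environment"   # refresh the clone, rebuild the suite
  echo "stage: before-run"    # pre-run life-cycle hook
  echo "running benchmarks"   # invoke the benchmark run itself
  echo "stage: after-run"     # post-run hook (zip/mail results, copy logs)
}

MAX_ITERATIONS=${MAX_ITERATIONS:-1}  # demo only; real script loops forever
i=0
while [ "$i" -lt "$MAX_ITERATIONS" ]; do
  run_cycle
  i=$((i + 1))
done
```

The life-cycle stages are kept as separate calls so that each can be replaced or extended without touching the loop itself.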
At the end of each run (post-run-tests) two Python scripts execute. One zips the results.json file and sends it to a specified email address. The other copies the logs for each framework (each is zipped independently while being copied).
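The archiving half of that step could look roughly like the sketch below. The actual toolset uses Python scripts, and `RESULTS_DIR`, `ARCHIVE_DIR`, and the function name here are illustrative assumptions; the emailing step is omitted entirely.

```shell
#!/bin/sh
# Hypothetical sketch of the post-run archiving step.
# Assumes results.json and per-framework log directories live
# under $RESULTS_DIR; both paths are illustrative.

archive_run() {
  RESULTS_DIR=${RESULTS_DIR:-results/latest}
  ARCHIVE_DIR=${ARCHIVE_DIR:-results/archive}
  mkdir -p "$ARCHIVE_DIR"

  # Compress results.json (mailing it is not shown here).
  if [ -f "$RESULTS_DIR/results.json" ]; then
    gzip -c "$RESULTS_DIR/results.json" > "$ARCHIVE_DIR/results.json.gz"
  fi

  # Copy each framework's logs, compressing each one independently.
  for dir in "$RESULTS_DIR"/logs/*/; do
    [ -d "$dir" ] || continue
    name=$(basename "$dir")
    tar -czf "$ARCHIVE_DIR/$name-logs.tar.gz" -C "$dir" .
  done
}
```

A post-run hook would then simply call `archive_run` once the benchmark results are in place.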
Changes made following the previous PR for this work: