This repository was archived by the owner on Mar 24, 2026. It is now read-only.
45216-continuousbenchmarking-20160926-asb-4 #2292

Merged

knewmanTE merged 35 commits into TechEmpower:master on Sep 27, 2016
Conversation
remove misultin framework benchmark
You are using a debug setup. The proposed changes should give some benefit
TechEmpower#2237) * Postgres/MySQL: Restrict database permissions to just what is required. Some frameworks were misbehaving and changing the schema. Also remove upper-case versions of MySQL tables as those are not needed. * Fix GRANT syntax error
toolset/run-continuously.sh is simply a while loop that
removes and rebuilds the framework benchmark suite and
then runs the benchmarks.
There are five environment variables to be set:
* TFB_REPOPARENT: the absolute path to the folder containing
  the repository
* TFB_REPONAME: the name of the repository folder
* TFB_REPOURI: the URI of the repository
* TFB_REPOBRANCH: the branch of the repository to use
* TFB_LOGSFOLDER: the folder in which to archive log files
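As a sketch, the five variables might be set like this before starting the loop; all of the values below are hypothetical and must be adjusted to your own machine:

```shell
#!/bin/bash
# Hypothetical values -- only the variable names come from the PR;
# the paths, URI, and branch here are examples, not project defaults.
export TFB_REPOPARENT="/home/benchmarker"           # folder containing the repo
export TFB_REPONAME="FrameworkBenchmarks"           # name of the repo folder
export TFB_REPOURI="https://github.com/TechEmpower/FrameworkBenchmarks.git"
export TFB_REPOBRANCH="master"                      # branch to benchmark
export TFB_LOGSFOLDER="/home/benchmarker/tfb-logs"  # where log archives land
```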
There are four life cycle stages in run-continuously:
* tear-down-environment.sh: called to remove the existing
  environment
* rebuild-environment.sh: called to rebuild the environment
* pre-run-tests: scripts in this folder are called
  before the benchmarks are run
* post-run-tests: scripts in this folder are called
  after the benchmarks are run
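One iteration of those four stages could be sketched as the function below; this is an illustration of the life cycle described above, not the actual toolset/run-continuously.sh, and the benchmark invocation path is an assumption. The real script wraps the equivalent of run_iteration in a `while true` loop:

```shell
#!/bin/bash
# Sketch of one pass through the run-continuously life cycle.
# Script names follow the four stages above; the run-tests.py path
# is an assumption for illustration.

run_hooks() {
  # Execute every executable file in the given hooks folder, if any.
  for script in "$1"/*; do
    [ -x "$script" ] && "$script"
  done
  return 0
}

run_iteration() {
  ./tear-down-environment.sh                           # stage 1: remove environment
  ./rebuild-environment.sh                             # stage 2: rebuild environment
  run_hooks pre-run-tests                              # stage 3: pre-run scripts
  "$TFB_REPOPARENT/$TFB_REPONAME/toolset/run-tests.py" # the benchmark run itself
  run_hooks post-run-tests                             # stage 4: post-run scripts
}
```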
- Notes -
run-continuously generally assumes that a clone of the
appropriate repo and branch exists, with the scripts listed
above available and an appropriate copy of benchmark.cfg
in place. Some effort is made to support starting states
that differ, but those are not intended to be the general
case.
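The assumed starting state could be sanity-checked with something like the following; this helper is hypothetical (not part of the PR) and simply verifies the clone and benchmark.cfg that the notes say are expected to exist:

```shell
#!/bin/bash
# Hypothetical precondition check for the starting state that
# run-continuously assumes: an existing clone plus benchmark.cfg.
check_starting_state() {
  local repo="$TFB_REPOPARENT/$TFB_REPONAME"
  [ -d "$repo/.git" ]          || { echo "missing clone at $repo" >&2; return 1; }
  [ -f "$repo/benchmark.cfg" ] || { echo "missing benchmark.cfg" >&2; return 1; }
}
```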
Contributor (Author):

Also, I will defer the work related to @knewmanTE's PR #2283 until after it has been merged into master.
Contributor:

Merge'd!
The work in this PR introduces a basic facility for continuous benchmarking. run-continuously is essentially a while-loop that makes 'life-cycle' calls in addition to initiating a benchmarking run. Additionally, an example of an upstart process configuration is included.
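An upstart job for this kind of loop might look like the fragment below; this is a hypothetical illustration in upstart's job-file syntax, not the configuration included in the PR, and the file name and paths are invented:

```
# Hypothetical /etc/init/tfb-benchmarks.conf -- names and paths invented.
description "TFB continuous benchmarking"
start on runlevel [2345]
stop on runlevel [!2345]
respawn

# Example values; align these with the environment variables described below.
env TFB_REPOPARENT=/home/benchmarker
env TFB_REPONAME=FrameworkBenchmarks

exec /home/benchmarker/FrameworkBenchmarks/toolset/run-continuously.sh
```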
Below are some relevant notes from a recent commit:
toolset/run-continuously.sh is simply a while loop that
removes and rebuilds the framework benchmark suite and
then runs the benchmarks.

There are five environment variables to be set:
* TFB_REPOPARENT: the absolute path to the folder containing
  the repository
* TFB_REPONAME: the name of the repository folder
* TFB_REPOURI: the URI of the repository
* TFB_REPOBRANCH: the branch of the repository to use
* TFB_LOGSFOLDER: the folder in which to archive log files

There are four life cycle stages in run-continuously:
* tear-down-environment.sh: called to remove the existing
  environment
* rebuild-environment.sh: called to rebuild the environment
* pre-run-tests: scripts in this folder are called
  before the benchmarks are run
* post-run-tests: scripts in this folder are called
  after the benchmarks are run

run-continuously.sh generally assumes that a clone of the
appropriate repo and branch exists, with the scripts listed
above available and an appropriate copy of benchmark.cfg
in place. Some effort is made to support starting states
that differ, but those are not intended to be the general
case.
At the end of each run (post-run-tests) there are two Python scripts. One zips the results.json file and sends it to a specified email address. The other makes copies of the logs for each framework (each copy is independently zipped as it is made).
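The archiving half of that step could be sketched in shell as follows; the actual scripts are in Python and their result/log layout is not shown in this PR page, so the directory structure below is an assumption:

```shell
#!/bin/bash
# Sketch of the post-run archiving step (the real versions are Python
# scripts in post-run-tests/); the results/logs layout is an assumption.
archive_results() {
  local results="$TFB_REPOPARENT/$TFB_REPONAME/results/latest"
  # Zip the run summary so it can be emailed or archived.
  gzip -c "$results/results.json" > "$TFB_LOGSFOLDER/results.json.gz"
  # Copy each framework's log, zipping each one independently.
  for log in "$results"/logs/*/out.txt; do
    local framework
    framework=$(basename "$(dirname "$log")")
    gzip -c "$log" > "$TFB_LOGSFOLDER/$framework.log.gz"
  done
}
```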
Changes made following the previous PR for this work: