45216 continuousbenchmarking 20160609 asb 2 #2274

Closed
wants to merge 63 commits into master

Conversation

@ashawnbandy-te-tfb
Contributor

ashawnbandy-te-tfb commented Sep 15, 2016

The work in this PR introduces a basic facility for continuous benchmarking. run-continuously.sh is essentially a while loop that makes 'life-cycle' calls in addition to initiating a benchmarking run. An example Upstart process configuration is also included.
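For readers unfamiliar with Upstart, a job along the lines described might look like the following. The file path, job name, and repository location here are hypothetical; the PR's actual example configuration may differ.

```
# /etc/init/tfb-continuous.conf  (hypothetical path and job name)
description "TFB continuous benchmarking"
start on runlevel [2345]
stop on runlevel [016]
# Restart the loop if it ever exits
respawn
exec /home/tfb/FrameworkBenchmarks/toolset/run-continuously.sh
```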

Below are some relevant notes from a recent commit:

toolset/run-continuously.sh is simply a while loop that
removes and rebuilds the framework benchmark suite and
then runs the benchmarks.

There are five environment variables to be set:

* TFB_REPOPARENT: the absolute path to the folder containing the repository
* TFB_REPONAME: the name of the repository folder
* TFB_REPOURI: the URI of the repository
* TFB_REPOBRANCH: the branch of the repository to use
* TFB_LOGSFOLDER: the folder in which to archive log files
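For example, the five variables might be set like this before starting the loop (all values below are illustrative only; adjust them to the deployment host):

```shell
# Illustrative values only -- substitute the real paths for your host.
export TFB_REPOPARENT=/home/tfb
export TFB_REPONAME=FrameworkBenchmarks
export TFB_REPOURI=https://github.com/TechEmpower/FrameworkBenchmarks.git
export TFB_REPOBRANCH=master
export TFB_LOGSFOLDER=/var/log/tfb
```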

There are four life-cycle stages in run-continuously:

* tear-down-environment.sh: called to remove the existing environment
* rebuild-environment.sh: called to rebuild the environment
* pre-run-tests: scripts in this folder are run before the benchmarks
* post-run-tests: scripts in this folder are run after the benchmarks
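The loop and stage sequence described above can be sketched as follows. The paths, the `run_stage_folder` helper, and the `run-tests.py` entry point are illustrative assumptions, not the PR's exact code.

```shell
#!/bin/sh
# Sketch of the run-continuously loop; paths and helper names are assumed.
TOOLSET="$TFB_REPOPARENT/$TFB_REPONAME/toolset"

# Run every executable script in a stage folder (pre-run-tests, post-run-tests).
run_stage_folder() {
  for script in "$1"/*; do
    [ -x "$script" ] && "$script"
  done
  return 0
}

# One benchmarking cycle; run-continuously.sh would wrap this in
# `while true; do run_once; done`.
run_once() {
  "$TOOLSET/tear-down-environment.sh"
  "$TOOLSET/rebuild-environment.sh"
  run_stage_folder "$TOOLSET/pre-run-tests"
  "$TOOLSET/run-tests.py"    # hypothetical benchmark entry point
  run_stage_folder "$TOOLSET/post-run-tests"
}
```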

- Notes -

run-continuously.sh generally assumes that a clone of the
appropriate repository and branch exists, with the scripts
listed above available and an appropriate copy of benchmark.cfg
in place. Some effort is made to support starting states
that differ, but those are not intended to be the general
use case.

At the end of each run (post-run-tests) there are two Python scripts. One zips the results.json file and sends it to a specified email address. The other copies the logs for each framework (each is independently zipped as it is copied).
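The per-framework log archiving could be sketched in shell as below, using tar/gzip in place of whatever zip tooling the PR's Python script uses; the directory layout and function name are assumptions.

```shell
#!/bin/sh
# Sketch: archive each framework's log folder into its own compressed file.
# Layout assumed: one subfolder per framework under the logs root.
archive_framework_logs() {
  logs_root="$1"   # e.g. results/latest/logs
  dest="$2"        # e.g. $TFB_LOGSFOLDER
  for dir in "$logs_root"/*/; do
    framework=$(basename "$dir")
    tar -czf "$dest/$framework.tar.gz" -C "$dir" .
  done
}
```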

TODO:

* Tagging a commit for milestones.
* Testing, particularly on the environment this will eventually run on.
* Generating test metadata and sending it out. (I need some sense of what is appropriate here.)

nbrady-techempower and others added some commits Jul 12, 2016

Update web2py db.py (#2217)
You are using a debug setup.
The proposed changes should give some benefit

@knewmanTE knewmanTE referenced this pull request Sep 21, 2016

Closed

Adding pippo benchmarks #2244

@knewmanTE
Contributor

knewmanTE commented Sep 21, 2016

1. FYI, I have a branch here that creates a file .commit, which contains the most recent commit of the checked out branch, adds it to the results directory, and attaches it to the email. I'm unable to open up a pull request against your fork, perhaps because you haven't been established as a FrameworkBenchmarks contributor yet?

2. I've done some testing on ServerCentral and was able to get continuous benchmarking working pretty easily, so that's cool. Good work, Shawn!

3. As for the test metadata, I have a larger pull request open at #2283 which automatically generates the test metadata in results/[timestamp]/test_metadata.json. If we want to get continuous benchmarking running before we merge in my pull request, then we might want a workaround. Another major change my pull request makes is that it removes the latest directory entirely, so it's probably worth thinking of a way to access the results/[timestamp]/ directory instead of results/latest/ when fetching the results.json file.

Since the [timestamp] subdirectory will be the only directory inside results, something like this should suffice if run between the suite run and the post-run scripts:

export TIMESTAMP_RESULTS_DIRECTORY=$(find $TFB_REPOPARENT/$TFB_REPONAME/results -mindepth 1 -maxdepth 1 -type d -name '2*')
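As a sanity check, the find invocation above can be exercised against a throwaway results tree; the timestamp folder name below is made up for the demo.

```shell
# Demo: the '2*' pattern matches the single timestamp-named subdirectory.
results=$(mktemp -d)
mkdir "$results/20160921120000"
TIMESTAMP_RESULTS_DIRECTORY=$(find "$results" -mindepth 1 -maxdepth 1 -type d -name '2*')
basename "$TIMESTAMP_RESULTS_DIRECTORY"    # prints: 20160921120000
rm -rf "$results"
```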

@ashawnbandy-te-tfb ashawnbandy-te-tfb changed the base branch from master to round-14 Sep 26, 2016

@ashawnbandy-te-tfb ashawnbandy-te-tfb changed the base branch from round-14 to master Sep 26, 2016

@ashawnbandy-te-tfb
Contributor

ashawnbandy-te-tfb commented Sep 26, 2016

Closing. Will re-open under another pull request.
