45216 continuousbenchmarking 20160609 asb 2 #2274
Conversation
nbrady-techempower and others added some commits on Jul 12, 2016
knewmanTE added some commits on Sep 19, 2016
knewmanTE added some commits on Sep 21, 2016
knewmanTE (Contributor) commented on Sep 21, 2016
1. FYI, I have a branch here that creates a `.commit` file containing the most recent commit of the checked-out branch, adds it to the results directory, and attaches it to the email. I'm unable to open a pull request against your fork, perhaps because you haven't been added as a FrameworkBenchmarks contributor yet?
2. I've done some testing on ServerCentral and was able to get continuous benchmarking working pretty easily, so that's cool. Good work, Shawn!
3. As for the test metadata, I have a larger pull request open at #2283 which automatically generates the test metadata in `results/[timestamp]/test_metadata.json`. If we want to get continuous benchmarking running before that pull request is merged, we might want a workaround. Another major change in my pull request is that it removes the `latest` directory entirely, so it's worth thinking of a way to access the `results/[timestamp]/` directory instead of `results/latest/` when fetching the `results.json` file.
Since the `[timestamp]` subdirectory will be the only directory inside `results`, something like this should suffice if run between the suite run and the post-run scripts:

```shell
export TIMESTAMP_RESULTS_DIRECTORY=$(find $TFB_REPOPARENT/$TFB_REPONAME/results -mindepth 1 -maxdepth 1 -type d -name '2*')
```
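Since the post-run scripts are Python, the same lookup could also be done there directly. A minimal sketch (the function name and the error handling are my own, not part of the suite; timestamp directory names are assumed to start with `2`, as in the `find` pattern above):

```python
import glob
import os


def find_timestamp_results_dir(repo_root):
    """Locate the single results/[timestamp]/ subdirectory.

    Mirrors the shell one-liner above: directory names are assumed
    to begin with '2' (e.g. 20160921...).
    """
    candidates = [d for d in glob.glob(os.path.join(repo_root, "results", "2*"))
                  if os.path.isdir(d)]
    if len(candidates) != 1:
        raise RuntimeError("expected exactly one timestamp results directory, "
                           "found %d" % len(candidates))
    return candidates[0]
```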
knewmanTE and others added some commits on Sep 22, 2016
ashawnbandy-te-tfb changed the base branch from master to round-14 on Sep 26, 2016
ashawnbandy-te-tfb changed the base branch from round-14 back to master on Sep 26, 2016
Closing. Will re-open under another pull request.
ashawnbandy-te-tfb commented on Sep 15, 2016
This PR introduces a basic facility for continuous benchmarking. `run-continuously` is essentially a while loop that makes 'life-cycle' calls in addition to initiating each benchmarking run. An example upstart process configuration is also included.
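The loop structure could be sketched roughly as follows (a hypothetical outline, not the actual `run-continuously` script; all command names are placeholders):

```python
import subprocess
import time


def run_continuously(benchmark_cmd, pre_cmds=(), post_cmds=(),
                     pause_seconds=60, max_runs=None):
    """Repeatedly run pre-run life-cycle commands, the benchmark
    itself, then post-run commands, pausing between iterations.

    max_runs=None loops forever; a finite value is useful for testing.
    """
    runs = 0
    while max_runs is None or runs < max_runs:
        for cmd in pre_cmds:          # e.g. git pull, cleanup
            subprocess.call(cmd)
        subprocess.call(benchmark_cmd)  # the benchmark suite itself
        for cmd in post_cmds:         # e.g. zip and email results
            subprocess.call(cmd)
        runs += 1
        time.sleep(pause_seconds)
    return runs
```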
Below are some relevant notes from a recent commit:
At the end of each run (post-run-tests), two Python scripts execute. One zips the results.json file and emails it to a specified address. The other copies the logs for each framework (each independently zipped while being copied).
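A minimal sketch of that packaging step (a hypothetical helper with an assumed `logs/<framework>/` layout; the emailing part of the real script is omitted here):

```python
import gzip
import os
import shutil
import zipfile


def package_results(results_dir, out_dir):
    """Gzip results.json (ready for emailing) and zip each
    framework's log directory independently."""
    os.makedirs(out_dir, exist_ok=True)
    # Compress results.json for the email attachment.
    with open(os.path.join(results_dir, "results.json"), "rb") as src, \
            gzip.open(os.path.join(out_dir, "results.json.gz"), "wb") as dst:
        shutil.copyfileobj(src, dst)
    # Zip each framework's logs into its own archive.
    logs_root = os.path.join(results_dir, "logs")
    for framework in sorted(os.listdir(logs_root)):
        framework_dir = os.path.join(logs_root, framework)
        zip_path = os.path.join(out_dir, framework + ".zip")
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for name in os.listdir(framework_dir):
                zf.write(os.path.join(framework_dir, name), arcname=name)
```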
TODO: