Adding pippo benchmarks #2244
Conversation
Thanks for the pull request! Just wanted to apologize for the delay in reviewing this and merging it in; we're in the thick of preparing everything for our Round 13 release. As soon as Round 13 is over, we'll get to the Round 14 pull requests!
Hey @robertoestivill, two things I noticed when doing some testing of Pippo:
Yes. We were building the benchmark against a SNAPSHOT version of Pippo that included a new feature. There has been a stable release of Pippo since then, so I would say the old snapshot is no longer available. I will update the PR right now.
Great, thanks @robertoestivill! One last thing: make sure to add a line to the .travis.yml file so that Travis will automatically run a verification test for your framework.
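For anyone following along, a new framework entry in .travis.yml would look roughly like the sketch below. The variable name and directory path here are assumptions based on how the repository's Travis matrix was typically laid out; mirror whatever the existing entries in the file actually use.

```yaml
# Illustrative .travis.yml excerpt -- the TESTDIR variable name and the
# "Java/pippo" path are assumptions, not copied from this PR. Each entry
# in the env matrix triggers a separate verification run for one framework.
env:
  matrix:
    - "TESTDIR=Java/pippo"
```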
@knewmanTE done! Thanks for the guidance :) BTW, do all the tests run periodically? Is there any chance to see results with the current state of the PR?
@robertoestivill funny you should ask that. Currently, we're running all of the tests manually, and our results website also requires some manual configuration to display the results. Our general process is:
We're currently in step 1 for Round 13. We ran our first preview a couple of weeks ago and are finishing up a few minor things before publishing our second preview soon. The good news is that we're working on setting up a continuous benchmarking system; you can see the preliminary efforts in #2274. Our end goal is a web app that refreshes constantly as new continuous benchmarking results finish. Since your pull request adds an entirely new framework, its official results won't be up until Round 14, but there should be a much shorter delay between Rounds 13 and 14 (we ran into some technical snags when switching production environments between Rounds 12 and 13 that took a while to fix), and at the rate our continuous benchmarking toolset is developing, we'll hopefully have something to show there pretty soon too!
@robertoestivill it looks like your postgres and mongo tests are failing.
@robertoestivill also, regarding using snapshots: are you removing snapshots because you're pre-1.0? We don't want to include things in the test suite that could break in a few weeks if the snapshots are being removed.
@knewmanTE I will be taking a look early next week. I haven't gone through the build logs extensively, but it looks like the build took way longer than any other (32 minutes).

@nbrady-techempower we will not use snapshots for the tests. It was a one-time thing, because a required feature had just been added and we needed it to finish the structure of these benchmark tests. We will stick to released versions. Also, I'm not a Pippo maintainer, I'm just contributing to the benchmark, so I couldn't tell you how versioning/releases will be handled in the future.
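Sticking to released versions would mean pinning the benchmark's Maven dependency to a fixed release rather than a SNAPSHOT, roughly as sketched below. The version number is a placeholder, not the one this PR actually uses; the `ro.pippo` coordinates follow Pippo's published artifacts.

```xml
<!-- Illustrative pom.xml fragment: depend on a released Pippo artifact
     instead of a -SNAPSHOT, which can be deleted from the repository at
     any time. The version shown is a placeholder. -->
<dependency>
    <groupId>ro.pippo</groupId>
    <artifactId>pippo-core</artifactId>
    <version>1.0.0</version> <!-- a fixed release, not x.y.z-SNAPSHOT -->
</dependency>
```

Releases are immutable in Maven repositories, so builds against them stay reproducible; SNAPSHOT artifacts are overwritten or pruned, which is exactly the breakage being discussed here.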
@robertoestivill the length of the run is probably a result of a couple factors:
Seems there are errors occurring in the
Hi @robertoestivill! This looks really close. One thing to note with the new setup changes is that database dependencies now need to be included in the setup files. For instance, if a test relies on mysql, you now need to
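As a sketch of that kind of setup change (assuming the repository's toolset provides a dependency-declaration helper in each test's setup script; the helper name `fw_depends` and the dependency names below are assumptions, so copy the pattern from an existing framework's setup file):

```bash
#!/bin/bash
# Hypothetical setup.sh sketch for a framework test. Declaring mysql here
# (alongside the runtime dependencies) tells the toolset to install and
# start the database before the test runs -- the point being made above.
fw_depends mysql java maven

# Build and launch the framework (illustrative commands, not this PR's).
mvn clean package
java -jar target/pippo-benchmark.jar &
```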
Adding benchmarks for Pippo