Changes / Updates / Dates for Round 22 #7475
Comments
I was just reading a thread on Reddit about the latest results, and a commenter mentioned that "Stripped" scores are included in the composite results. I didn't think this was allowed/possible, but it turns out this is in fact the case, at least for actix: the /json and /plaintext results in the composite scores for actix are from the "server" configuration, which is marked as "Stripped". Is this correct, or a mistake in the collation of the results?
The SQL queries currently tested are returning too fast. I suggest adding a memory usage indicator as one of the scoring criteria.
@billywhizz You're correct. That shouldn't be the case. That implementation approach was changed and never updated for the composite scores. Will see what I can do.
@nbrady-techempower Yes, I didn't think it was possible when I saw the comment, so I'm glad I checked. By my calculation this would give actix a composite score of 6939, moving it down to ninth place behind officefloor, aspnet.core, salvo, and axum. I haven't checked whether the same is happening with any others.
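For anyone who wants to sanity-check numbers like this, here is a minimal sketch of recomputing a composite score with different per-test inputs. The weights, the normalization against the per-test best, and all of the throughput values are illustrative assumptions, not TechEmpower's published formula or real round data.

```python
# Minimal sketch of a composite score recomputation. The weights, the
# normalization against the per-test best result, and the numbers below
# are illustrative assumptions, not TechEmpower's exact methodology.
WEIGHTS = {"json": 1.0, "plaintext": 1.0, "fortune": 1.0,
           "db": 1.0, "query": 1.0, "update": 1.0}

def composite(scores: dict[str, float], best: dict[str, float]) -> float:
    """Weighted sum of per-test scores, each normalized to the best result."""
    return 100 * sum(WEIGHTS[t] * scores[t] / best[t] for t in WEIGHTS)

# Hypothetical per-test bests and one framework's results:
best = {"json": 1.2e6, "plaintext": 7.0e6, "fortune": 5.0e5,
        "db": 6.0e5, "query": 2.0e4, "update": 1.5e4}
with_stripped = {"json": 1.2e6, "plaintext": 7.0e6, "fortune": 4.8e5,
                 "db": 5.8e5, "query": 1.9e4, "update": 1.4e4}
# Swap the "Stripped" /json and /plaintext numbers for the default
# configuration's (again hypothetical) numbers and recompute:
without_stripped = {**with_stripped, "json": 9.0e5, "plaintext": 5.5e6}
print(composite(with_stripped, best), composite(without_stripped, best))
```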
This is not a problem only for the TechEmpower people. A lot of people don't understand benchmarks.
When I said that we need to clarify the rules, this is why: we are changing the rules for those people, but everyone needs to follow them. It's like with Faf #7402; we can all learn from that, for good or bad. The length of the server name has been discussed for some time, but still without a solution. Before that, there was a problem with the URLs, and so on.
I also want to see which frameworks are using pipelining in the plaintext test.
Another big problem that a lot of us have is the servers. We have enough information to pass on to the ops team. Just the kernel change #7321, or new servers, makes a very big impact: more than the framework that we use.
Another question, |
@billywhizz |
More questions: Is it realistic to have different variants and configs for every test? If the framework is using a JIT, that is very beneficial, but not realistic.
@joanhey Yes, I tend to think there should be a single entry/configuration allowed per framework, and it should be the same codebase that covers all the tests. This would be much more "realistic" and would also massively decrease the amount of time a full test run takes; some frameworks have 10 or more different configurations that have to be tested!
I understand variants for different databases or drivers. In the same way, some frameworks use only one variant, but the config for the database is different for every test.
@joanhey Good point re. different databases, but apart from that I think the number of configurations per framework should be minimised. I also think it would work better overall if a run was only triggered when a framework changed, rather than continually running every framework end to end (see the sketch below). If we only ran on every merge, just for the changed framework, maintainers would have to wait a lot less time to see the results of changes. My worry about introducing too many and too complex rules is that it will just discourage people from entering at all, so there is a balance to be found between too many rules and allowing for innovation in the approaches.
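As a rough illustration of the "run only what changed" idea, here is a small sketch (a hypothetical helper, not part of the actual TFB toolset) that derives the set of framework directories touched by a commit range from git, which a CI job could then feed into the test runner:

```python
# Hypothetical helper (not the actual TFB toolset): list the framework
# directories touched between two commits, so CI could run only those
# tests instead of a full end-to-end run of every framework.
import subprocess

def changed_frameworks(base: str = "origin/master", head: str = "HEAD") -> set[str]:
    diff = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    touched = set()
    for path in diff.splitlines():
        parts = path.split("/")
        # Implementation paths look like frameworks/<Language>/<framework>/...
        if len(parts) >= 3 and parts[0] == "frameworks":
            touched.add(f"{parts[1]}/{parts[2]}")
    return touched

if __name__ == "__main__":
    print(sorted(changed_frameworks()))
```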
Shared my thoughts on #7475 (comment) here: #7358 (reply in thread). I tried both to give my opinion and to convey that, given what the purpose of the benchmark is assumed to be, there are things that should change a bit... although I agree with #7475 (comment) about not making it so complex that it puts folks off getting in.
We can create an addendum for those 1-2% of devs who try to cheat with the more esoteric tricks. As for running only the changed frameworks:
@nbrady-techempower https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Python/robyn/app.py
Is it possible to get the tested framework version? Is it just me or should people be allowed to state
The result visualization for every round has a link to the continuous benchmarking run that has been used (for example, round 21 is based on run edd8ab2e-018b-4041-92ce-03e5317d35ea). From the run you can get the commit ID, so that you can browse the repository at the respective revision. Then check the Dockerfile that corresponds to the test implementation you are interested in (and possibly any associated scripts in the implementation directory) to get the framework version that has been used. Unfortunately not all implementations keep their dependencies locked down properly - in that case your best bet is probably to check the build logs from the run. If that does not help, then I am afraid that other than making a guesstimate, you are out of luck. |
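To make the last step concrete, here is a small sketch of pulling an implementation's Dockerfile at a given commit and scanning it for version pins. The commit ID and the file path below are placeholders, not values taken from a real run:

```python
# Sketch: fetch the Dockerfile of one test implementation at the commit a
# round was based on, then scan it for pinned versions. The commit ID and
# the file path are placeholders, not values taken from a real run.
import re
import urllib.request

COMMIT = "<commit-id-from-the-run>"                # placeholder
PATH = "frameworks/Python/robyn/robyn.dockerfile"  # hypothetical path

url = ("https://raw.githubusercontent.com/TechEmpower/FrameworkBenchmarks/"
       f"{COMMIT}/{PATH}")
with urllib.request.urlopen(url) as resp:
    dockerfile = resp.read().decode()

# Version pins usually show up in FROM lines and install commands.
for line in dockerfile.splitlines():
    if re.search(r"^FROM|==|install", line):
        print(line)
```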
Hi @nbrady-techempower, have you guys decided on dates for Round 22?
@nbrady-techempower I noticed an issue that seems to have appeared back in December, after continuous benchmarking started running properly again: the Dstat data is missing.
@fafhrd91 Nothing concrete yet. I've got to get in front of the servers and do some upgrades. I'd like to shoot for late March. @volyrique Thanks, I'll take a look! |
@nbrady-techempower I had a closer look at the Dstat issue and it looks like a common problem. Unfortunately the tool appears to be unmaintained, and the closest thing to a drop-in replacement seems to be Dool. |
@nbrady-techempower If there is not going to be an update to the benchmarks soon, can you please remove the invalid results? People unfortunately use these benchmarks to make technology decisions, and if the data is wrong for long periods it impacts us directly.
@graemerocher I'm sorry to hear about this. I can get the round 21 results from |
@nbrady-techempower The history of what happened is in this thread: #7618. Thanks for helping.
@nbrady-techempower Is there a tentative date for Round 22? |
It's been almost a year since I said I'd like to start having more regular rounds... 😵💫 So, I think the biggest thing here was getting Citrine updated. Though I think all the things on the checklist are important, clarifying rules is a never-ending process, and if anyone thinks some framework is in clear violation of any rules, please open a PR or an issue and ping me and the maintainers. Otherwise, I think we'll shoot for the first complete run in August.
Looking forward to it.
@nbrady-techempower Requesting this again, as we keep getting questions. Please remove the invalid round 21 results for micronaut.
@graemerocher This is done. Please note that we have added a wiki page on codebase framework files: https://github.com/TechEmpower/FrameworkBenchmarks/wiki/Codebase-Framework-Files
Everyone: We're waiting on a few missing pieces for the new rack in the new place. We're hoping to be up and running on Thursday.
Thanks |
May I ask when the new round of benchmark results will be released? |
@alfiver As soon as we get Citrine back online we're going to do a few full runs and then the next round. We'll be investigating the issues this week and have an update for everyone around Thursday. |
Sorry, folks. No good news yet. One of the machines won't stay on. We're looking into it and will update when we can. |
Ok, we got lucky! It was just the system battery. The servers have been moved to their new home, and it looks like we're back up and running. Going to finish a full run to make sure everything looks good, and then we'll send out a notice for when we're locking PRs for this round.
I know this is a known issue and just want to point out that the same is happening to xitca-web too. An unrealistic benchmark is unfortunately counted towards the total composite score. It would be best if the misleading result could be fixed in an official run. If a quick fix is not possible, I suggest both xitca and actix mark their unrealistic benchmarks as "broken" temporarily until Round 22 is finished.
Excuse me, do you have any good news? |
Unfortunately we had some other issues come up, but they're hopefully resolved now. The latest update is here.
With the last run looking back to normal, it's time to actually set some dates for Round 22! The run in progress will complete around 9/26. The following complete run will be a preview run, and we'll look to start the round run on 10/3. We normally lock PRs down during the preview run. I would caution any maintainers against making adjustments to their frameworks during that time. As a reminder, we don't rerun individual frameworks for completed runs.
Please wait for the .NET 8 LTS release if it's not already accounted for. The release is planned for November 14th this year.
Wait for the next run, then you won't know the results ...
We had an internet outage that looks like it stopped the preview round run and communication to tfb-status. I'll be in the office tomorrow to see if the preview round completed successfully and kick off the official round.
Someone was able to restart the service. Since the preview round wasn't able to complete, we'll do one more preview round and move the Round 22 official run to start around Oct 11th. |
Could someone review and merge #8478 before the run? Thanks.
@nbrady-techempower any news? :D |
@macel94 It's running: https://tfb-status.techempower.com/ |
Round 21 has concluded. The results will be posted shortly. We'd like to have a quicker turnaround for Round 22; we're hoping for somewhere between October and November.
Checklist
Update Citrine to Ubuntu 22.04 #7321
Verify routing matches exact path #6967 (comment)
Requirements in Fortunes test #6883
Some frameworks don't follow the rules #6788
[Requirements] Add that Response cache is not permitted in Fortunes test #6529