
Automatically generate performance report #118

Closed · fkleuver opened this issue Aug 30, 2018 · 8 comments

@fkleuver
Member

Now that we have some e2e benchmark scripts, I'd like to see us have something a bit similar to the krausest benchmark with a generated HTML report showing:

  • The perf difference between vNext and vCurrent
  • The perf difference between the master branch and the PR being tested

The reason I didn't simply copy-paste their code is that I prefer to fully understand how it works first, and then custom-tailor it to include more specific information that is useful to us. Then perhaps in the future we can contribute some of our findings back to the krausest repo and hopefully improve the overall quality and accuracy of benchmark land.
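
A minimal sketch of what such a report generator could look like, assuming a very simple result shape (the names and table layout below are purely illustrative, not the actual tooling):

```ts
// Illustrative sketch only: compare two benchmark result sets and emit an HTML table.
interface BenchResult {
  benchmark: string; // e.g. "create 1,000 rows"
  meanMs: number;    // mean duration in milliseconds
}

function renderComparison(baseline: BenchResult[], candidate: BenchResult[]): string {
  const rows = baseline.map(b => {
    const c = candidate.find(x => x.benchmark === b.benchmark);
    const delta = c ? (((c.meanMs - b.meanMs) / b.meanMs) * 100).toFixed(1) + '%' : 'n/a';
    return `<tr><td>${b.benchmark}</td><td>${b.meanMs.toFixed(1)}</td>` +
      `<td>${c ? c.meanMs.toFixed(1) : 'n/a'}</td><td>${delta}</td></tr>`;
  }).join('\n');
  return `<table>\n<tr><th>Benchmark</th><th>Baseline (ms)</th><th>Candidate (ms)</th><th>Delta vs baseline</th></tr>\n${rows}\n</table>`;
}
```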

@fkleuver fkleuver self-assigned this Sep 8, 2018
@EisenbergEffect EisenbergEffect added this to the 0.8.0 milestone Oct 12, 2018
@fkleuver
Member Author

Additional benchmarks to look at:
https://localvoid.github.io/uibench/

@Alexander-Taran
Contributor

@fkleuver I think I've spent enough time with the krausest benchmark to know what it does and how it does it.
I can try to set it up.
We'll have to talk over the details.

@fkleuver
Member Author

fkleuver commented Nov 7, 2018

Thanks, that would be awesome!

My initial idea, for starters, was to simply duplicate the effective benchmarks but simplify/clean up the infrastructure around them. I think the benchmarks as-is are pretty decent, but we need some additional benchmarks as well, so we'll eventually need to break it open a bit and add some more interesting tests. But that's for later.
At the moment I'm primarily interested in:

  1. How vNext compares to vCurrent (in startup time, repaint rate, memory use, the usual stuff)
  2. How different versions of vNext compare to each other

I don't know if you had any particular approach in mind, but I'd suggest starting simple: just add the krausest repo to a PR as-is, strip out every framework except vNext and vCurrent, and work on / clean it up from there.
As long as the npm scripts are present in the package.json to bootstrap and run the tests, and some HTML report is produced, I can make sure it's run automatically in CI (see the sketch below).
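
A rough sketch of that kind of CI entry point; the npm script names here are assumptions standing in for whatever the stripped-down fork would actually expose:

```ts
// Hypothetical CI driver (sketch): a single entry point CI can invoke.
// The script names below are assumptions, not the actual krausest scripts.
import { execSync } from 'child_process';

const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

run('npm ci');                   // bootstrap dependencies
run('npm run build-frameworks'); // assumed: builds the vNext and vCurrent apps
run('npm run bench');            // assumed: runs the webdriver benchmarks
run('npm run build-report');     // assumed: writes the static HTML report, e.g. dist/results.html
```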

@Alexander-Taran
Contributor

I think there should be a workflow to "add" vNext by tag: a task that'll "add" a new framework at a specific version.
For CI it could be "dev", but if we want to compare across the timeline, we want to leave artifacts.
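
A sketch of what that "add a framework at a version/tag" task could look like; the directory layout and package names are illustrative only, and a real entry would need the benchmark app source as well, not just a manifest:

```ts
// Sketch of an "add framework at a version/tag" task. Names and layout are assumptions.
import { mkdirSync, writeFileSync } from 'fs';
import { join } from 'path';

function addFramework(pkg: string, versionOrTag: string): void {
  const slug = `${pkg.replace(/[@/]/g, '')}-${versionOrTag}`;
  const dir = join('frameworks', slug);
  mkdirSync(dir, { recursive: true });
  writeFileSync(join(dir, 'package.json'), JSON.stringify({
    name: `bench-${slug}`,
    private: true,
    dependencies: { [pkg]: versionOrTag } // e.g. "dev" for CI, or a tagged release for a permanent entry
  }, null, 2));
}

addFramework('aurelia-framework', 'latest'); // vCurrent
addFramework('@aurelia/runtime', 'dev');     // vNext against the dev channel
```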

@fkleuver
Member Author

fkleuver commented Nov 9, 2018

I think there should be a workflow to "add" vNext by tag

Yep, if you could get a working setup in place for starters, then I can probably take care of that piece of automation fairly easily.

but if we want to compare across the timeline, we want to leave artifacts

We do probably want to leave artifacts anyway. I think they can live in a performance branch or something just fine. But ultimately that's a really small implementation detail I can take care of. It would be similar to the existing publish scripts that push the artifacts to the dev branch before publishing.

But if you feel this complicates things a bit too much, I've no issue leaving that bit out for now. It's much more important to have a benchmark running regularly in the first place. We can just start with a plain hard-coded vnext@dev vs vcurrent@latest comparison.
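
A rough sketch of the artifact step, assuming a dedicated "performance" branch and a report written to dist/results.html (both assumptions, not existing infrastructure):

```ts
// Sketch: commit each run's report onto a dedicated branch, in the same spirit as
// the existing publish scripts that push build artifacts to the dev branch.
import { execSync } from 'child_process';
import { copyFileSync, mkdirSync } from 'fs';

const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });
const stamp = new Date().toISOString().slice(0, 10); // e.g. "2018-11-09"

run('git checkout performance');                 // assumed dedicated artifacts branch
mkdirSync('reports', { recursive: true });
copyFileSync('dist/results.html', `reports/${stamp}.html`); // assumed report location
run(`git add reports/${stamp}.html`);
run(`git commit -m "chore(perf): benchmark report ${stamp}"`);
run('git push origin performance');
```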

@EisenbergEffect EisenbergEffect modified the milestones: 0.8.0, 0.5.0 Jul 31, 2019
@EisenbergEffect
Contributor

This is done now, right, @fkleuver?

@fkleuver
Member Author

fkleuver commented Aug 1, 2019

Not quite yet. We still need the performance report to include the latest and dev channels so we get a direct comparison of the impact of the current PR's changes.
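
In other words, a candidate matrix roughly like the one below; the package specs are illustrative, and the "file:" entry standing in for the current PR's local build is an assumption:

```ts
// Illustrative candidate matrix: each entry is benchmarked and rendered side by side
// so a PR's numbers can be read directly against both published channels.
const candidates = [
  { label: 'vCurrent (latest)', spec: 'aurelia-framework@latest' },
  { label: 'vNext (dev)',       spec: '@aurelia/runtime@dev' },
  { label: 'vNext (this PR)',   spec: 'file:../../packages/runtime' }, // local PR build (assumed path)
];
```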

@brandonseydel
Member

@fkleuver are we waiting on the Azure/Cosmos setup, or can we close this?

@fkleuver fkleuver added this to Triage in Work queue Oct 7, 2019
@fkleuver fkleuver moved this from Triage to Backlog in Work queue Oct 7, 2019
@fkleuver fkleuver removed QA labels Oct 7, 2019
@fkleuver fkleuver moved this from Backlog to In progress in Work queue Oct 7, 2019
@fkleuver fkleuver added the Topic: Build/CI/CD Only for internal issues affecting Aurelia team label Oct 7, 2019
@fkleuver fkleuver moved this from In progress to Backlog in Work queue Jul 17, 2020
@fkleuver fkleuver modified the milestones: v2.0-alpha, Backlog Jul 17, 2020
@fkleuver fkleuver closed this as completed Feb 1, 2021
Work queue automation moved this from Backlog to Done Feb 1, 2021