
QA-Board for performance engineering #11

Open
2 of 6 tasks
arthur-flam opened this issue Jun 9, 2020 · 1 comment

arthur-flam commented Jun 9, 2020

Right now QA-Board focuses on algorithm engineering. Another big area is software performance.

How do people track software performance?

Unit tests are not enough to judge software performance. Some organizations:

  • track their test suite runtime over time. It helps spot a trend, but comparisons are hard because the tests keep changing.
  • use acceptance tests that check runtime/memory thresholds, and monitor regressions (see the sketch after this list).
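For illustration, a minimal acceptance test of this kind might look like the sketch below. The workload, the thresholds, and the reliance on the Unix-only `resource` module are all placeholders, not a prescription:

```python
# Hypothetical acceptance test: fail if runtime or peak memory exceed budgets.
# `process_dataset` and the thresholds are placeholders for a real workload.
import resource
import time


def process_dataset():
    # Stand-in for the workload under test.
    return sum(i * i for i in range(1_000_000))


def test_runtime_and_memory_budget():
    start = time.perf_counter()
    process_dataset()
    elapsed_s = time.perf_counter() - start

    # On Linux, ru_maxrss is the peak resident set size in KiB.
    peak_rss_mib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

    assert elapsed_s < 0.5, f"runtime regression: {elapsed_s:.3f}s"
    assert peak_rss_mib < 256, f"memory regression: {peak_rss_mib:.0f} MiB"
```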

On the ops side, if we're talking about applications/services:

  • there are many great products: monitoring like Datadog/New Relic, crash analytics like Sentry...
  • smart monitoring solutions correlate anomalies with commits and feature flags.
  • the "future" is likely tooling based on canary deploys to identify perf regressions on real workflows.

For libraries or products used as dependencies by others, it's not possible to set up those tools. Could QA-Board "shift left" and help identify issues before releases?

Development workflows for performance engineering

  • Engineers doing optimization have a hard time keeping track of all their versions and microbenchmarks. The tooling is focused on the live experience (debugger-like workflows, checking the assembly) and investigates one version at a time.
  • For keeping track, the best tool I've seen for identifying issues ahead of time and helping during coding is https://perf.rust-lang.org

Software engineers have the same need for "run tracking" as algorithm engineers.
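As a rough sketch of what per-run tracking could look like, one log line per run keyed by commit is already enough to compare microbenchmarks across versions. The log format, metric names, and paths below are illustrative, not QA-Board's actual API:

```python
# Hypothetical "run tracking" sketch: append one JSON line per run so that
# microbenchmark results can be compared across commits later.
# The log path and metric names are illustrative, not QA-Board's API.
import json
import subprocess
import time
from pathlib import Path


def current_commit():
    return subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()


def track_run(metrics, log_path=Path("perf-runs.jsonl")):
    record = {"commit": current_commit(), "timestamp": time.time(), **metrics}
    with log_path.open("a") as f:
        f.write(json.dumps(record) + "\n")


# Example: after running a microbenchmark, append its results.
track_run({"benchmark": "resize_image", "median_ms": 12.3, "p95_ms": 14.8})
```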

Features needed

  • Examples of integrations with tools such as perf (see the sketch after this list).
  • Visualizations:
      • Examples of visualizations of metrics like binary size, IPC, time, page faults, gas...
      • We could add anomaly detection on top to warn about regressions early.
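For the perf integration, a minimal sketch could wrap `perf stat` and turn its machine-readable output into a metrics dict. The event list and the profiled command below are assumptions, not a fixed API:

```python
# Sketch of a `perf stat` integration: run a command under `perf stat`,
# parse the CSV output (-x,), and derive metrics such as IPC.
# The event list and the profiled command are examples, not a fixed API.
import subprocess


def perf_stat_metrics(cmd):
    events = ["instructions", "cycles", "page-faults", "task-clock"]
    result = subprocess.run(
        ["perf", "stat", "-x,", "-e", ",".join(events), "--", *cmd],
        capture_output=True, text=True,
    )
    counts = {}
    # With -x, perf writes CSV lines to stderr: value,unit,event,...
    for line in result.stderr.splitlines():
        fields = line.split(",")
        if len(fields) < 3:
            continue
        event = fields[2].split(":")[0]  # strip modifiers like ":u"
        if event in events:
            try:
                counts[event] = float(fields[0])
            except ValueError:
                pass  # e.g. "<not supported>" or "<not counted>"
    if counts.get("cycles") and counts.get("instructions"):
        counts["ipc"] = counts["instructions"] / counts["cycles"]
    return counts


print(perf_stat_metrics(["./my-binary", "--input", "sample.bin"]))
```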

Reference: perf/profiling tools

@arthur-flam

We love Brendan Gregg's flame graphs and integrated Martin Spier's d3-flame-graph.

At a glance, you can check where your code spends its CPU cycles, and use differential flame graphs to debug regressions:
https://samsung.github.io/qaboard/docs/visualizations/#flame-graphs
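For reference, collapsed stacks for such a flame graph can be produced along these lines. The FlameGraph checkout location and the profiled command are assumptions:

```python
# Sketch: record a profile with `perf record -g`, then fold the stacks with
# Brendan Gregg's stackcollapse-perf.pl so they can feed a flame graph.
# The FlameGraph checkout path and the profiled command are assumptions.
import subprocess
from pathlib import Path

# git clone https://github.com/brendangregg/FlameGraph
FLAMEGRAPH_DIR = Path.home() / "FlameGraph"


def collapsed_stacks(cmd, output=Path("out.folded")):
    subprocess.run(["perf", "record", "-g", "--", *cmd], check=True)
    script = subprocess.run(
        ["perf", "script"], capture_output=True, text=True, check=True
    )
    folded = subprocess.run(
        [str(FLAMEGRAPH_DIR / "stackcollapse-perf.pl")],
        input=script.stdout, capture_output=True, text=True, check=True,
    )
    output.write_text(folded.stdout)
    return output


collapsed_stacks(["./my-binary", "--input", "sample.bin"])
```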

For now we'll keep the issue open; we may turn it into a thread or "project".
