The story we want to tell is that we increased productivity using ML and various productivity tools.
For that we need a good metric for toil/productivity.
The ideal metric would be something like time to resolve/close issues, but this has high variability, and it would be difficult to iterate on since we would have to wait a long time for issues to close.
I think a good proxy metric would be time to triage issues.
kubeflow/community#280 proposes a well-defined criterion for when an issue should be considered triaged.
Using that criterion, we can easily measure how long it took for each issue to get triaged.
We can also count how many human interactions (e.g. comments) were required before an issue was triaged.
We can easily backfill that metric for all previous issues.
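As a rough sketch, the backfill could compute both numbers per issue from its events. The event/comment shapes and the `[bot]` account-name convention here are assumptions for illustration, not an existing Kubeflow API:

```python
from datetime import datetime, timedelta

def time_to_triage(created_at, triaged_at):
    """Triage latency for one issue.

    `triaged_at` would be the timestamp of whatever event first
    satisfies the kubeflow/community#280 triage criterion.
    """
    return triaged_at - created_at

def human_interactions(comments):
    """Count comments by humans, excluding bot accounts.

    Assumes bot logins follow GitHub's usual "[bot]" suffix convention.
    """
    return sum(1 for c in comments if not c["author"].endswith("[bot]"))

# Made-up example issue.
created = datetime(2020, 1, 1, 9, 0)
triaged = datetime(2020, 1, 2, 15, 30)
comments = [
    {"author": "alice"},
    {"author": "issue-label-bot[bot]"},
    {"author": "bob"},
]

print(time_to_triage(created, triaged))  # 1 day, 6:30:00
print(human_interactions(comments))      # 2
```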
We could then compute statistics like the mean and median across issues in some rolling window.
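The rolling stats could look something like the following. This uses a count-based window over backfilled latencies as a stand-in for a time-based one, and the latency values are made up:

```python
from statistics import mean, median

def rolling_stats(latencies_hours, window=4):
    """Mean and median triage latency over a sliding window of issues.

    `latencies_hours` is a chronological list of backfilled
    time-to-triage values; `window` is how many of the most recent
    issues each window covers.
    """
    stats = []
    for i in range(window, len(latencies_hours) + 1):
        chunk = latencies_hours[i - window:i]
        stats.append({"mean": mean(chunk), "median": median(chunk)})
    return stats

# Hypothetical backfilled latencies (hours) for six issues.
stats = rolling_stats([4.0, 30.0, 12.0, 72.0, 8.0, 20.0])
print(stats[-1])  # stats over the four most recent issues
```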