On a related point, it's actually not that cheap to query the custom metrics. Because they are buried in the page JSON, a query now scans about 800 GB, which is obviously a lot better than Response Bodies but still a lot. Should we have a summary_pages table with the custom metrics section of the JSON in its own column?
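As a sketch of what a pipeline step for that split might look like — note that the `_custom_metrics` field name and the output shape here are assumptions for illustration, not the actual HTTP Archive schema:

```javascript
// Sketch only: split the custom metrics section out of a page's JSON payload
// so it could be loaded into its own column of a summary_pages-style table.
// The `_custom_metrics` key and the return shape are assumptions.
function splitCustomMetrics(pageJson) {
  const { _custom_metrics: customMetrics, ...summary } = JSON.parse(pageJson);
  return {
    summary: JSON.stringify(summary),
    customMetrics: JSON.stringify(customMetrics ?? {}),
  };
}
```

Queries that only need custom metrics would then scan one column instead of the full page JSON.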
rviscomi transferred this issue from HTTPArchive/almanac.httparchive.org on Jan 12, 2022
The custom metric scripts at https://github.com/HTTPArchive/legacy.httparchive.org/tree/master/custom_metrics are growing in number and complexity. For example, almanac.js was written in 2019 and shared common functions across all chapters' metrics. Now we have many chapter-specific metrics, and it's getting harder to keep track of things.
@OBTo started great work on documenting each metric in metric-summary.md.
What can we do to make the development experience better for custom metrics? I'm specifically interested in improving:

- code reusability across custom metric files
- documentation and discoverability of metrics
- unit testing and acceptance testing on WPT
- interoperability with third-party (3P) scripts
For example, what if we wrote smaller, more maintainable JS modules and ran a build process to generate the custom metric files? Modules could be shared across metrics, we could import third-party scripts via npm, and we could run automated unit tests or use the WPT API to test against live data. The source JS could be written with JSDoc syntax to help generate documentation, making the metrics easier to understand and discover.
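As a sketch of what such a shared module could look like — the module name and helper below are hypothetical, not existing HTTP Archive code:

```javascript
// lib/safe-metric.js (hypothetical shared module)
// A build step could inline this helper into each generated custom metric
// file, and the JSDoc comments could feed generated documentation.

/**
 * Runs a metric function and returns a fallback on failure, so one broken
 * metric can't throw and wipe out the rest of the custom metric's output.
 * @param {() => *} fn - the metric implementation
 * @param {*} [fallback=null] - value to return if the metric throws
 * @returns {*}
 */
function safeMetric(fn, fallback = null) {
  try {
    return fn();
  } catch (e) {
    return fallback;
  }
}

// CommonJS export guard so the same file works both in the browser context
// WPT injects scripts into and in Node-based unit tests.
if (typeof module !== 'undefined') {
  module.exports = { safeMetric };
}
```

A metric file could then call something like `safeMetric(() => document.images.length)`, and unit tests could exercise the helper in Node without a browser.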
This could be done in a separate directory of an existing repo (legacy?), or we could create a new repo and keep the generated metrics in sync.
Leave a comment if you have any other ideas to make the custom metric development experience better, or let me know what you think of these suggestions.