[CI] Upload average CPU utilization of CI jobs to DataDog #125771
base: master
Conversation
@bors try
[CI] Upload CI metrics to DataDog

This PR tests integration of Datadog CI custom metrics. My goal is to take various useful statistics (like the duration of the stage 1 build, stage 2 build, and test execution) that we already gather into `metrics.json` and upload them as [custom metrics](https://docs.datadoghq.com/continuous_integration/pipelines/custom_commands/) to Datadog, so that we can observe job duration at a more granular level.

r? `@jdno`
☀️ Try build successful - checks-actions
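The upload flow described in the comment above could be sketched roughly as follows. This is only a sketch: the flat shape of `metrics.json`, the `rustc_ci` metric prefix, and the exact `datadog-ci measure` invocation are assumptions for illustration, not taken from this PR.

```python
import json
import subprocess


def flatten_durations(metrics: dict[str, float], prefix: str = "rustc_ci") -> list[str]:
    """Turn a {step: seconds} mapping into `key:value` strings for datadog-ci."""
    return [f"{prefix}.{step}:{secs}" for step, secs in sorted(metrics.items())]


def upload(measures: list[str]) -> None:
    """Attach custom numeric measures to the current CI job.

    Requires DATADOG_API_KEY in the environment; `--measures` is
    repeated once per key:value pair.
    """
    args = ["datadog-ci", "measure", "--level", "job"]
    for m in measures:
        args += ["--measures", m]
    subprocess.run(args, check=True)


if __name__ == "__main__":
    # Assumed shape: a flat mapping of step name to duration in seconds.
    with open("metrics.json") as f:
        durations = json.load(f)
    upload(flatten_durations(durations))
```

Sorting the keys keeps the measure order stable across runs, which makes the uploaded series easier to diff between jobs.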
608d825 to ddae1fa
@bors try
☀️ Try build successful - checks-actions
ddae1fa to 8934848
@bors try
2343ecd to 870653f
@bors try
[CI] Upload average CPU utilization of CI jobs to DataDog

This PR adds a new CI step that uploads the average CPU utilization of the current GH job to Datadog. I want to add more metrics in follow-up PRs.

r? `@jdno`
a7fbb47 to f7acf06
f7acf06 to 95db961
@bors try
☀️ Try build successful - checks-actions
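The "average CPU utilization" measure itself could be derived from periodically sampled idle percentages collected during the job. A minimal sketch, assuming the samples arrive as `(timestamp, idle_pct)` pairs (that format is an assumption, not confirmed by this PR):

```python
def average_cpu_utilization(samples: list[tuple[float, float]]) -> float:
    """Average CPU utilization in percent from (timestamp, idle_pct) samples.

    Utilization at each sample is taken as 100 - idle; the job-level
    measure is the plain mean over all samples.
    """
    if not samples:
        raise ValueError("no CPU samples collected")
    return sum(100.0 - idle for _, idle in samples) / len(samples)
```

For example, samples at 80%, 60%, and 40% idle average out to 40% utilization for the job.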
@rustbot ready
DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY }}
DD_GITHUB_JOB_NAME: ${{ matrix.name }}
run: |
  npm install -g @datadog/datadog-ci
Should this `npm install` be pinned to prevent breakage from new versions? Maybe something like `npm install -g @datadog/datadog-ci@2.38.x` or `npm install -g @datadog/datadog-ci@^2.x.x`.
Good point, I was also thinking about that. If we do pin, the version stays the same (though without a lockfile, transitive dependencies can still be updated), but Datadog might eventually stop supporting that version of the command (its internal API can change), which could also break our CI later. If we don't pin, CI can break whenever the external CLI changes.
I'm fine with pinning.
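If the version does get pinned, the CI step could also sanity-check what actually got installed before trusting it. A sketch, where the pinned major version `2` and the accepted version-string formats are assumptions:

```python
import re

PINNED_MAJOR = 2  # assumed: the major version the workflow pins to


def is_compatible(version_output: str, major: int = PINNED_MAJOR) -> bool:
    """Check that the CLI's reported version matches the pinned major.

    Accepts strings like "v2.38.1" or "2.38.1"; anything else fails closed.
    """
    m = re.search(r"v?(\d+)\.(\d+)\.(\d+)", version_output)
    return bool(m) and int(m.group(1)) == major
```

Failing the job early on a major-version mismatch turns a silent CLI behavior change into an explicit, easy-to-diagnose CI error.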