
Add ClusterSyncFailingSeconds metric #1809

Merged: 2 commits into openshift:master, Jun 23, 2022
Conversation

suhanime (Contributor)

  • Add gauge "hive_clustersync_failing_seconds" that reports the time since the lastTransitionTime of the clusterSync's Failed condition, with the label namespaced_name set to $namespace/$name
  • Make it optional and report it only when the time the clusterSync has been failing exceeds the configured threshold
  • Skip reporting for Unreachable clusters
  • Clear the metric when the clusterSync succeeds
  • Clear the metric when the clusterSync is deleted
  • Clear the metric when the clusterDeployment is deleted

xref: https://issues.redhat.com/browse/HIVE-1857

/assign @2uasimojo
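
For illustration, a minimal sketch of the behavior described in the bullets above (not the actual Hive code): declaring the gauge, setting it from the Failed condition's lastTransitionTime once the opt-in threshold is exceeded, and clearing the same namespaced_name series otherwise. Only the metric name and label come from this PR; the package, helper name, and threshold plumbing are assumptions.

package metricsketch

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Sketch only: the metric name and label match the PR description; the rest is
// illustrative rather than the actual Hive implementation (registration omitted).
var metricClusterSyncFailingSeconds = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "hive_clustersync_failing_seconds",
		Help: "Seconds since the ClusterSync's Failed condition last transitioned.",
	},
	[]string{"namespaced_name"},
)

// observeClusterSyncFailing sets the gauge once the ClusterSync has been failing
// longer than the opt-in threshold, and deletes the series otherwise, so the
// metric disappears when the sync recovers.
func observeClusterSyncFailing(namespace, name string, lastTransition time.Time, threshold time.Duration) {
	label := namespace + "/" + name
	failingFor := time.Since(lastTransition)
	if failingFor >= threshold {
		metricClusterSyncFailingSeconds.WithLabelValues(label).Set(failingFor.Seconds())
	} else {
		metricClusterSyncFailingSeconds.DeleteLabelValues(label)
	}
}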

- Add gauge "hive_clustersync_failing_seconds" that logs time since the
  lastTransitionTime of the clusterSync's Failed condition with the
  label namespaced_name as $namespace/$name
- Make it optional and log it conditionally if the time since it has
  been failing is more than the threshold mentioned
- Skip reporting for Unreachable clusters
- Clear the metric when the clusterSync succeeds
- Clear the metric when the clusterSync is deleted
- Clear the metric when the clusterDeployment is deleted
suhanime changed the title from "Hive 1857" to "Add ClusterSyncFailingSeconds metric" on Jun 22, 2022
openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Jun 22, 2022

codecov bot commented Jun 22, 2022

Codecov Report

Merging #1809 (98148ea) into master (7135d78) will increase coverage by 0.02%.
The diff coverage is 47.16%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #1809      +/-   ##
==========================================
+ Coverage   41.72%   41.74%   +0.02%     
==========================================
  Files         353      353              
  Lines       33033    33079      +46     
==========================================
+ Hits        13782    13810      +28     
- Misses      18097    18113      +16     
- Partials     1154     1156       +2     
Impacted Files Coverage Δ
pkg/controller/metrics/metrics.go 33.14% <17.24%> (-0.77%) ⬇️
pkg/controller/utils/conditions.go 72.15% <80.00%> (+0.08%) ⬆️
.../clusterdeployment/clusterdeployment_controller.go 62.71% <83.33%> (+<0.01%) ⬆️
...g/controller/clustersync/clustersync_controller.go 75.84% <83.33%> (+0.06%) ⬆️
...g/controller/hibernation/hibernation_controller.go 65.34% <100.00%> (ø)
pkg/clusterresource/builder.go 67.12% <0.00%> (+0.74%) ⬆️

2uasimojo (Member) left a comment


I have looked at this.

The code to declare and observe the new metric looks good ✓

However, I'm leery of the refactor, especially in such a foundational controller, especially since it does change some logic flow/order... and I don't think we actually need to do it. IIUC it's mainly for the purpose of being able to pass the clustersync around; and we only need to do that so we can scrub the metric for it; and the only information we need to do that is the namespace and name. So let's instead take advantage of the fact that the clustersync namespace and name are identical to that of the CD (ref -- we take advantage of this elsewhere too, so it's not likely to change) and not even bother passing the clustersync around.
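
To make that suggestion concrete, a rough sketch (not the actual diff) of clearing the metric from the ClusterDeployment alone, relying on the ClusterSync sharing its namespace and name; the function body and gauge variable here are assumptions, not the PR's code.

import hivev1 "github.com/openshift/hive/apis/hive/v1"

// Illustrative only: because the ClusterSync is named identically to its
// ClusterDeployment, clearing the gauge needs nothing beyond the CD's
// namespace/name, so the ClusterSync never has to be fetched or passed around.
func clearClusterSyncFailingSeconds(cd *hivev1.ClusterDeployment) {
	metricClusterSyncFailingSeconds.DeleteLabelValues(cd.Namespace + "/" + cd.Name)
}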

// In case the clusterSync has been deleted at this point, clear the clusterSyncFailing metric
clustersync.ClearClusterSyncFailingSecondsMetric(clusterSync, cdLog)
} else {
cdLog.WithError(err).Log(controllerutils.LogLevel(err), "could not get ClusterSync")
2uasimojo (Member):

By not erroring here, we're passing an empty clustersync around. Hopefully the worst that will happen is we'll clear a metric with an empty label, which should be a no-op, but it's not future-proof.

Since the clustersync name and namespace are the same as
clusterdeployment name and namespace, utilize that logic to eliminate
unnecessary fetching of clustersync
suhanime (Contributor, Author)

> (quoting 2uasimojo's review above: the refactor isn't needed; reuse the ClusterDeployment's namespace/name instead of passing the clustersync around)

This makes way more sense - fixed

2uasimojo (Member) left a comment


/lgtm

Thanks @suhanime!

Comment on lines +453 to 465
// ShouldLogHistogramDurationMetric decides whether the corresponding duration metric of type histogram should be logged.
// It first checks if that optional metric has been opted into for logging, and the duration has crossed the threshold set.
func ShouldLogHistogramDurationMetric(metricToLog *prometheus.HistogramVec, timeToLog float64) bool {
duration, ok := mapMetricToDurationHistograms[metricToLog]
return ok && duration.Seconds() <= timeToLog
}

// ShouldLogGaugeDurationMetric decides whether the corresponding duration metric of type gauge should be logged. It
// first checks if that optional metric has been opted into for logging, and the duration has crossed the threshold set.
func ShouldLogGaugeDurationMetric(metricToLog *prometheus.GaugeVec, timeToLog float64) bool {
duration, ok := mapMetricToDurationGauges[metricToLog]
return ok && duration.Seconds() <= timeToLog
}
2uasimojo (Member):

NB: I think we'll be able to collapse the maps and these funcs using generics once #1803 lands :)
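
A hypothetical sketch of what that collapse could look like with Go 1.18+ generics (not code from this PR or from #1803); the threshold map stays keyed by the metric pointer, mirroring the existing mapMetricToDuration* maps.

import "time"

// shouldLogDurationMetric is a hypothetical generic stand-in for both
// ShouldLogHistogramDurationMetric and ShouldLogGaugeDurationMetric: M would be
// instantiated as *prometheus.HistogramVec or *prometheus.GaugeVec.
func shouldLogDurationMetric[M comparable](thresholds map[M]time.Duration, metric M, timeToLog float64) bool {
	threshold, ok := thresholds[metric]
	return ok && threshold.Seconds() <= timeToLog
}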

openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Jun 23, 2022

openshift-ci bot commented Jun 23, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: 2uasimojo, suhanime

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Jun 23, 2022

@suhanime: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

openshift-ci bot merged commit afcd301 into openshift:master on Jun 23, 2022