
test: Verify platform metrics are available #24117

Merged

Conversation

smarterclayton
Contributor

Ensure there are no regressions.

@openshift-ci-robot openshift-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Nov 8, 2019
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: smarterclayton

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 8, 2019
Ensure there are no regressions.
@lilic
Contributor

lilic commented Nov 9, 2019

/test e2e-gcp

Contributor

@lilic lilic left a comment


Sounds reasonable to me.

Restarted the failed GCP test; curious whether any of the metrics made it fail because it specifically runs on GCP?

`cluster_feature_set`: true,

// track installer type
`cluster_installer{type!="",invoker!=""}`: true,
Contributor

How come we do not check that this value is > 0 as well?

Contributor Author

good point
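
For context, a minimal sketch of what the strengthened check might look like, assuming the test keeps a map from PromQL expression to an expected-success flag as the quoted hunk suggests (the `count(...) >= 1` form is illustrative, not necessarily the exact expression that landed):

```go
// Hypothetical sketch of the query map after the review feedback; the
// real map lives in the extended test suite and may differ.
var platformMetricQueries = map[string]bool{
	// track the feature set in use
	`cluster_feature_set`: true,
	// track installer type, and require at least one matching series so a
	// missing or empty metric is caught as a regression
	`count(cluster_installer{type!="",invoker!=""}) >= 1`: true,
}
```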

@openshift-ci-robot openshift-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Nov 9, 2019
The old logic waited for prometheus to come up. That should no longer
be necessary as we wait for cluster bringup before e2e tests are run.

When a particular query is failing repeatedly, there is no need
to print the error multiple times. Also check for unmarshal failures
and check the status of the query, and be sure to print a newline.
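
As a rough illustration of the response checking this describes, assuming the test receives the standard Prometheus HTTP API JSON envelope (the helper name and types below are hypothetical):

```go
package prometheus

import (
	"encoding/json"
	"fmt"
)

// promResponse mirrors the envelope returned by the Prometheus HTTP API.
type promResponse struct {
	Status string          `json:"status"`
	Data   json.RawMessage `json:"data"`
}

// checkQueryOutput verifies that the bytes returned for a query unmarshal
// cleanly and report a successful status, rather than treating any
// response body as success.
func checkQueryOutput(query string, raw []byte) error {
	var resp promResponse
	if err := json.Unmarshal(raw, &resp); err != nil {
		return fmt.Errorf("promQL query %q: unable to parse response: %v", query, err)
	}
	if resp.Status != "success" {
		return fmt.Errorf("promQL query %q: returned status %q", query, resp.Status)
	}
	return nil
}
```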
@s-urbaniak
Contributor

Just one little concern from my side: is this asserting a functioning monitoring stack? If not, I suggest creating a dedicated platforms e2e test folder.

@smarterclayton
Contributor Author

OpenShift e2e requires the cluster monitoring stack (a non-optional part), unless you mean something different?

@smarterclayton
Contributor Author

/retest

With Michal’s fixes

@smarterclayton
Contributor Author

/retest

1 similar comment
@smarterclayton
Contributor Author

/retest

@smarterclayton smarterclayton added the lgtm Indicates that a PR is ready to be merged. label Nov 22, 2019
@smarterclayton
Contributor Author

To catch regressions

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

2 similar comments
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit cc2707f into openshift:master Nov 22, 2019
wking added a commit to wking/origin that referenced this pull request Dec 2, 2019
Since 10c6be0 (test: Prometheus query test should fail more
quickly, 2019-11-09, openshift#24117), we've been failing after only a few
seconds of failures, which causes problems like [1]:

  fail [github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:156]: Expected
      <map[string]error | len:1>: {
          "openshift_build_total{phase=\"Complete\"} >= 0": {
              s: "promQL query: openshift_build_total{phase=\"Complete\"} >= 0 had reported incorrect results: model.Vector{}",
          },
      }
  to be empty
  ...
  failed: (1m4s) 2019-11-26T23:05:24 "[Feature:Prometheus][Feature:Builds] Prometheus when installed on the cluster should start and expose a secured proxy and verify build metrics [Suite:openshift/conformance/parallel]"

when we haven't waited long enough for the Prometheus scrape to notice
the new builds.  Looking at the timing in that job:

  $ curl -s https://storage.googleapis.com/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-4.4/73/build-log.txt | grep 'openshift_build_total.*Complete' | cut -b -50
  STEP: perform prometheus metric query openshift_bu
  Nov 26 23:05:08.772: INFO: Running '/usr/bin/kubec
  Nov 26 23:05:09.480: INFO: stderr: "+ curl -s -k -
  Nov 26 23:05:09.480: INFO: promQL query: openshift
  STEP: perform prometheus metric query openshift_bu
  Nov 26 23:05:10.481: INFO: Running '/usr/bin/kubec
  Nov 26 23:05:11.121: INFO: stderr: "+ curl -s -k -
  STEP: perform prometheus metric query openshift_bu
  Nov 26 23:05:12.122: INFO: Running '/usr/bin/kubec
  Nov 26 23:05:12.751: INFO: stderr: "+ curl -s -k -
  STEP: perform prometheus metric query openshift_bu
  Nov 26 23:05:13.751: INFO: Running '/usr/bin/kubec
  Nov 26 23:05:14.356: INFO: stderr: "+ curl -s -k -
  STEP: perform prometheus metric query openshift_bu
  Nov 26 23:05:15.356: INFO: Running '/usr/bin/kubec
  Nov 26 23:05:15.922: INFO: stderr: "+ curl -s -k -
          "openshift_build_total{phase=\"Complete\"}
              s: "promQL query: openshift_build_tota

so we had five queries over ~7s.  With this commit, we'll wait at
least 10s between retries, for a minimum duration of 40s between the
first and fifth attempt, which should give us long enough to include
at least one scrape (scrapes every 30s [2]).

Also rename maxPrometheusQueryRetries to maxPrometheusQueryAttempts,
because this count also includes the initial, non-retry attempt.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1777189
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1777189#c3
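
For illustration, a minimal sketch of the pacing this commit describes; `maxPrometheusQueryAttempts` follows the rename above, while the loop shape and sleep constant are assumptions:

```go
package prometheus

import "time"

const (
	maxPrometheusQueryAttempts = 5
	prometheusQueryRetrySleep  = 10 * time.Second
)

// runQueriesWithRetries runs the supplied query pass up to five times,
// sleeping 10s between attempts so the first and fifth attempt are at
// least 40s apart, long enough to cover one 30s scrape interval.
func runQueriesWithRetries(runOnce func() error) error {
	var err error
	for attempt := 0; attempt < maxPrometheusQueryAttempts; attempt++ {
		if attempt > 0 {
			time.Sleep(prometheusQueryRetrySleep)
		}
		if err = runOnce(); err == nil {
			return nil
		}
	}
	return err
}
```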