
OCPCLOUD-2409: blocked-edges/4.14*: Declare AzureDefaultVMType #4541

Merged: 1 commit into openshift:master, Dec 20, 2023

Conversation

@wking (Member) commented Dec 20, 2023

Generated by writing the 4.14.1 risk by hand, and then running:

$ (curl -s 'https://api.openshift.com/api/upgrades_info/graph?channel=candidate-4.14&arch=amd64' | jq -r '.nodes[] | .version' | grep '^4[.]14[.]' | grep -v '^4[.]14[.][01]$') | while read VERSION; do sed "s/4.14.1/${VERSION}/" blocked-edges/4.14.1-AzureDefaultVMType.yaml > "blocked-edges/${VERSION}-AzureDefaultVMType.yaml"; done

I also manually added the silent-drop to 4.14.0. We have almost entirely avoided silent drops since growing the ability to declare conditional risks in 4.10. But f0dc7e8 (#4301) decided to use silent drops for 4.13.17 and 4.13.18 to 4.14.0. As described in that commit message, there are trade-offs between silent drops and an Always risk for those updates. With this commit, we double down on the silent-drop approach.

Because we're dropping 4.13.19 to 4.14.0 after it has been a supported update for so long, there is a larger risk (than there was for 4.13.17 and 4.13.18 updates) of customers noticing the drop and being confused about where the 4.13.19 to 4.14.0 update went. That's why we developed the conditional update system in the first place.

That risk is mitigated by the fact that 4.14.0 is fairly old by now, with many subsequent 4.14.z releases that fix a number of other issues. So we do not expect much residual interest in 4.13.19 to 4.14.0 updates.

If it turns out that there is enough "where did 4.13.19 to 4.14.0 go?" support load to warrant a pivot, future work could move us to explicitly-declared risk for all of the issues from 4.13.z to 4.14.0, including the original "4.13.19 adds a guard you want first" that f0dc7e8 was delivering. But the price of that pivot is the:

  A silent drop may mean we do not need to support customers who update from 4.13.17 or 18 directly to 4.14.0 and have some mutated SCCs stomped...

trade-off discussed in f0dc7e8.
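
For orientation, a blocked-edges declaration in this repo follows roughly this shape (a minimal sketch: the URL, message text, and PromQL body are illustrative placeholders, not the actual 4.14.1-AzureDefaultVMType.yaml contents):

  to: 4.14.1
  from: .*
  url: https://issues.redhat.com/browse/OCPCLOUD-2409
  name: AzureDefaultVMType
  message: |-
    Placeholder summary of the AzureDefaultVMType risk and which
    clusters it applies to.
  matchingRules:
  - type: PromQL
    promql:
      promql: |
        # cluster-matching expression goes here
        ...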

openshift-ci bot added the approved label Dec 20, 2023
@wking (Member Author) commented Dec 20, 2023

Testing pre-merge (might take a while to build out your .nodes cache):

$ hack/show-edges.py candidate-4.14 | grep '^4[.]13[.].* 4[.]14[.]' | sort -V
4.13.0 -(risks: AzureDefaultVMType, PreRelease)-> 4.14.0-ec.1
4.13.0 -(risks: AzureDefaultVMType, PreRelease)-> 4.14.0-ec.2
4.13.0 -(risks: AzureDefaultVMType, PreRelease)-> 4.14.0-ec.3
4.13.0-rc.3 -(risks: AzureDefaultVMType, PreRelease)-> 4.14.0-ec.0
4.13.0-rc.3 -(risks: AzureDefaultVMType, PreRelease)-> 4.14.0-ec.1
...
4.13.16 -(risks: AzureDefaultVMType, ConsoleImplicitlyEnabled, PreRelease)-> 4.14.0-rc.6
4.13.17 -(SILENT-BLOCK)-> 4.14.0
4.13.17 -(risks: AzureDefaultVMType, ConsoleImplicitlyEnabled, PreRelease)-> 4.14.0-rc.7
4.13.18 -(SILENT-BLOCK)-> 4.14.0
4.13.18 -(risks: AzureDefaultVMType, ConsoleImplicitlyEnabled, PreRelease)-> 4.14.0-rc.7
4.13.19 -(SILENT-BLOCK)-> 4.14.0
4.13.19 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.2
...
4.13.19 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.7
4.13.19 -(risks: AzureDefaultVMType, ConsoleImplicitlyEnabled)-> 4.14.1
4.13.21 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.2
...
4.13.26 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.6
4.13.26 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.7
4.13.27 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.7

So show-edges.py predicts it will work, with the silent-block for 4.13.* -> 4.14.0 and the explicit AzureDefaultVMType for all other 4.14.z (although we also probably want to confirm against Cincinnati after this lands).

wking changed the title from "blocked-edges/4.14*: Declare AzureDefaultVMType" to "OCPCLOUD-2409: blocked-edges/4.14*: Declare AzureDefaultVMType" Dec 20, 2023
openshift-ci-robot added the jira/valid-reference label Dec 20, 2023
openshift-ci-robot commented Dec 20, 2023

@wking: This pull request references OCPCLOUD-2409 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.16.0" version, but no target version was set.

@wking (Member Author) commented Dec 20, 2023

And some trial evaluations of the PromQL:

Old AWS cluster evaluates "does not apply" (because it's not Azure). [screenshot]

Switching Azure to AWS in the PromQL (because I don't have an old Azure cluster handy) evaluates "does apply". [screenshot]

New AWS cluster evaluates "does not apply" (because it is young and not Azure). [screenshot]

New AWS cluster with AWS in the PromQL evaluates "does not apply" (because it's young). [screenshot]

So all of that looks good to me.
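
The Azure-to-AWS swap is mechanical; a minimal sketch of the substitution (the metric's label name and values here are assumptions, and the surrounding expression is elided):

  # Provider branch as shipped (assumed shape):
  group(cluster_infrastructure_provider{_id="",type="Azure"})

  # Temporary edit for pre-merge testing on an AWS cluster, so the provider
  # branch matches and the rest of the expression still gets exercised:
  group(cluster_infrastructure_provider{_id="",type="AWS"})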

openshift-ci bot (Contributor) commented Dec 20, 2023

@wking: all tests passed!



Inline review comment on this fragment of the PromQL:

  label_replace(0 * group(cluster_version{_id="",type="initial",version!~"4[.][0-9][.].*"}),"born_by_4_9", "no, born in 4.10 or later", "", "")
  )
  * on () group_left (type)
  topk(1,
Member: We don't exactly need the topk for the cluster_infrastructure_provider PromQL, right?

wking (Member Author): We expect to have only one, but 🤷, if there are more than one, this will keep us from breaking on multiple matches. And if there is only one, topk should be cheap for the PromQL engine to evaluate.

Member: Right, I agree.
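
To make the multiple-match concern concrete, a minimal sketch (some_expr is a hypothetical stand-in for the rest of the risk expression):

  # A vector match errors out if the right-hand side unexpectedly returns
  # more than one series:
  #   some_expr * on () group_left (type) cluster_infrastructure_provider{_id=""}
  # Wrapping the right-hand side in topk(1, ...) keeps only the single
  # highest-valued series, so the join can never hit a multiple-match error:
  some_expr * on () group_left (type) topk(1, cluster_infrastructure_provider{_id=""})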

@LalatenduMohanty (Member) left a comment:

/lgtm

openshift-ci bot added the lgtm label Dec 20, 2023
openshift-ci bot (Contributor) commented Dec 20, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LalatenduMohanty, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [LalatenduMohanty,wking]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-merge-bot merged commit 70cd7f6 into openshift:master Dec 20, 2023 (5 checks passed)
wking deleted the AzureDefaultVMType branch December 20, 2023 20:29
@wking (Member Author) commented Dec 20, 2023

Checking Cincinnati now that this is merged and live:

$ hack/show-edges.py --cincinnati https://api.openshift.com/api/upgrades_info/graph candidate-4.14 | grep '^4[.]13[.].* 4[.]14[.]' | sort -V
4.13.0 -(risks: AzureDefaultVMType, PreRelease)-> 4.14.0-ec.1
4.13.0 -(risks: AzureDefaultVMType, PreRelease)-> 4.14.0-ec.2
...
4.13.15 -(risks: AzureDefaultVMType, ConsoleImplicitlyEnabled, PreRelease)-> 4.14.0-rc.6
4.13.16 -(risks: AzureDefaultVMType, ConsoleImplicitlyEnabled, PreRelease)-> 4.14.0-rc.6
4.13.17 -(risks: AzureDefaultVMType, ConsoleImplicitlyEnabled, PreRelease)-> 4.14.0-rc.7
4.13.18 -(risks: AzureDefaultVMType, ConsoleImplicitlyEnabled, PreRelease)-> 4.14.0-rc.7
4.13.19 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.2
4.13.19 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.3
4.13.19 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.4
...
4.13.26 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.7
4.13.27 -(risks: AROBrokenDNSMasq, AzureDefaultVMType)-> 4.14.7

So that looks good. When consuming Cincinnati (instead of the local blocked-edges directory), show-edges has no data from which to render the SILENT-BLOCK lines we saw locally. But that's what silent blocking is :) And we don't see any 4.13-to-4.14 edge declared in Cincinnati without the AzureDefaultVMType risk.

wking added a commit to wking/cincinnati-graph-data that referenced this pull request Dec 21, 2023
Miguel points out that the exposure set is more complicated [1] than
what I'd done in 45eb9ea (blocked-edges/4.14*: Declare
AzureDefaultVMType, openshift#4541).  It's:

* Azure clusters born in 4.8 or earlier are exposed.  Both ARO (which
  creates clusters with Hive?) and clusters created via
  openshift-installer.
* ARO clusters created in 4.13 and earlier are exposed.

Generated by updating the 4.14.1 risk by hand, and then running:

  $ curl -s 'https://api.openshift.com/api/upgrades_info/graph?channel=candidate-4.14&arch=amd64' | jq -r '.nodes[] | .version' | grep '^4[.]14[.]' | grep -v '^4[.]14[.][01]$' | while read VERSION; do sed "s/4.14.1/${VERSION}/" blocked-edges/4.14.1-AzureDefaultVMType.yaml > "blocked-edges/${VERSION}-AzureDefaultVMType.yaml"; done

Breaking down the logic for my new PromQL:

a. First stanza, using topk is likely unnecessary, but if we do happen
   to have multiple matches for some reason, we'll take the highest.
   That gives us a "we match" 1 (if any aggregated entries were 1) or
   a "we don't match" 0 (if they were all 0), instead of a "we're
   having a hard time figuring out" Recommended=Unknown.

   a. If the cluster is ARO (using cluster_operator_conditions, as in
      ba09198 (MCO-958: Blocking edges to 4.14.2+ and 4.13.25+, 2023-12-15,
      openshift#4524), first stanza is 1.  Otherwise, 'or' falls back to...

   b. Nested block, again with the cautious topk:

      a. If there are no cluster_operator_conditions, don't return a
         time series.  This ensures that "we didn't match a.a, but we
         might be ARO, and just have cluster_operator_conditions
         aggregation broken" gives us a Recommended=Unknown evaluation
         failure.

      b. Nested block, again with the cautious topk:

         a. born_by_4_9 yes case, with 4.(<=9) instead of the desired
            4.(<=8) because of the "old CVO bugs make it hard to
            distinguish between 4.(<=9) birth-versions" issue
            discussed in 034fa01 (blocked-edges/4.12.*: Declare
            AWSOldBootImages, 2022-12-14, openshift#2909).  Otherwise, 'or'
            falls back to...

         b. A check to ensure cluster_version{type="initial"} is
            working.  This ensures that "we didn't match the a.b.b.a
            born_by_4_9 yes case, but we may be that old, and just have
            cluster_version aggregation broken" gives us a
            Recommended=Unknown evaluation failure.

b. Second stanza, again with the cautious topk:

   a. cluster_infrastructure_provider is Azure.  Otherwise, 'or' falls
      back to...

   b. If there are no cluster_infrastructure_provider, don't return a
      time series.  This ensures that "we didn't match b.a, but we
      might be Azure, and just have cluster_infrastructure_provider
      aggregation broken" gives us a Recommended=Unknown evaluation
      failure.

All of the _id filtering is for use in hosted clusters or other PromQL
stores that include multiple clusters.  More background in 5cb2e93
(blocked-edges/4.11.*-KeepalivedMulticastSkew: Explicit _id="",
2023-05-09, openshift#3591).
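
To make the or-falls-back-to-a-presence-check pattern concrete, here
is a minimal sketch of the b stanza's shape (an assumed reconstruction
from the breakdown above, not the verbatim expression):

  topk(1,
    # b.a: one series valued 1 if the cluster reports the Azure provider...
    group(cluster_infrastructure_provider{_id="",type="Azure"})
    or
    # b.b: ...else one series valued 0 if the metric exists at all,
    # confirming aggregation works.  If the metric is missing entirely,
    # neither branch returns a series, the outer multiplication has
    # nothing to match, and the risk evaluates Recommended=Unknown
    # instead of a false "does not apply".
    0 * group(cluster_infrastructure_provider{_id=""})
  )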

So walking some cases:

* Non-Azure cluster, cluster_operator_conditions, cluster_version, and
  cluster_infrastructure_provider all working:
  * a.a matches no series (not ARO).  Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b could be 1 or 0 for cluster_version.
  * b.a matches no series (not Azure).
  * b.b gives 0 (confirming cluster_infrastructure_provider is working).
  * (1 or 0) * 0 = 0, cluster does not match.
* Non-Azure cluster, cluster_version is broken:
  * a.a matches no series (not ARO).  Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b matches no series (cluster_version is broken).
  * b.a matches no series (not Azure).
  * b.b gives 0 (confirming cluster_infrastructure_provider is working).
  * (no-match) * 0 = no-match, evaluation fails, Recommended=Unknown.
    Admin gets to figure out what's broken with cluster_version and/or
    manually assess their exposure based on the message and linked
    URI.
* Non-ARO Azure cluster born in 4.9, all time-series working:
  * a.a matches no series (not ARO).  Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b.a matches born_by_4_9 yes.
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster born in 4.9, all time-series working:
  * a.a matches (ARO).
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster born in 4.13, all time-series working (this is the case
  I'm fixing with this commit):
  * a.a matches (ARO).
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster, cluster_operator_conditions is broken:
  * a.a matches no series (cluster_operator_conditions is broken).
  * a.b.a matches no series (cluster_operator_conditions is broken).
  * b.a matches (Azure).
  * (no-match) * 1 = no-match, evaluation fails, Recommended=Unknown.
* ARO cluster, cluster_infrastructure_provider is broken:
  * a.a matches (ARO).
  * b.a matches no series (cluster_infrastructure_provider is broken).
  * b.b matches no series (cluster_infrastructure_provider is broken).
  * 1 * (no-match) = no-match, evaluation fails, Recommended=Unknown.
    We could add logic like a cluster_operator_conditions{name="aro"}
    check to the (b) stanza if we wanted to bake in "all ARO clusters
    are Azure" knowledge to successfully evaluate this case.  But I'd
    guess cluster_infrastructure_provider is working in most ARO
    clusters, and this PromQL is already complicated enough, so I
    haven't bothered with that level of tuning.
* ...lots of other combinations...

[1]: https://issues.redhat.com/browse/OCPCLOUD-2409?focusedId=23694976&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23694976