
OCPBUGS-33067: Don't fatal error when filter cannot iterate #509

Conversation

yuumasato
Member

@yuumasato yuumasato commented Apr 26, 2024

A jq filter may expect to iterate over a list of results, but it can happen that no result is returned.
Let's not fatal error when this happens.

In a HyperShift environment, when no MachineConfig exists, the following error occurs:

$ oc logs pod/ocp4-pci-dss-api-checks-pod --all-containers
...
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Applying filter '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)' to path '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Error while filtering: cannot iterate over: null
debug: Persisting warnings to output file
FATAL:Error fetching resources: couldn't filter '{
  "metadata": {},
  "items": null 
}': cannot iterate over: null

After creating a dummy MachineConfig, the URI fetching succeeds:

$ oc logs pod/ocp4-pci-dss-api-checks-pod --all-containers
...
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Applying filter '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)' to path '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Applying filter '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.config.storage.luks[0].clevis != null)' to path '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
Fetching URI: '/apis/machine.openshift.io/v1beta1/machinesets?limit=500'
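
For reference, here is a minimal standalone Go sketch of the failure mode and the skip idea. This is an illustrative assumption rather than the operator's actual fetcher code; it only uses the public gojq API (github.com/itchyny/gojq):

package main

import (
	"fmt"
	"strings"

	"github.com/itchyny/gojq"
)

func main() {
	// Mirrors the HyperShift response: "items" is null rather than [].
	input := map[string]any{"metadata": map[string]any{}, "items": nil}

	query, err := gojq.Parse(`[.items[]] | map(.spec.fips == true)`)
	if err != nil {
		panic(err)
	}

	iter := query.Run(input)
	for {
		v, ok := iter.Next()
		if !ok {
			break
		}
		if ferr, isErr := v.(error); isErr {
			// gojq surfaces an unexported iterator error here; the only
			// stable signal is the message suffix ": null".
			if strings.HasSuffix(ferr.Error(), ": null") {
				fmt.Println("skipping empty filter result:", ferr)
				continue
			}
			panic(ferr)
		}
		fmt.Println("result:", v)
	}
}

Running this prints the skip message instead of aborting; with "items": [] it prints result: [] instead.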

@yuumasato yuumasato requested review from Vincent056 and xiaojiey and removed request for BhargaviGudi April 26, 2024 15:48

@Vincent056 Vincent056 left a comment


/lgtm

@Vincent056

Fetching URI: '/api/v1/namespaces/-/pods?labelSelector=app%3Dkube-controller-manager'
FATAL:Error fetching resources: couldn't filter '{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"94867"},"items":[]}
': cannot iterate over: null

I wonder if we should exclude this error; it seems like we should fail here, because items should never be empty.

@xiaojiey
Collaborator

/retest-required

@xiaojiey
Collaborator

/hold for test

@xiaojiey
Collaborator

xiaojiey commented Apr 28, 2024

Still got the same failure with/without ComplianceAsCode/content#11906.
Verified on a HyperShift hosted cluster + payload 4.16.0-0.nightly-2024-04-26-145258 + CO code from #509 + with/without the code in ComplianceAsCode/content#11906:

  1. Create an ssb with the upstream-ocp4-pci-dss profile (from ComplianceAsCode/content#11906, "OCP Update variable filter to consider go_template"):
% cat ssb_pci_u.yaml 
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: ocp4-pci-dss-u
  namespace: openshift-compliance
profiles:
  - name: upstream-ocp4-pci-dss
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
% oc apply -f ~/func/ssb_pci_u.yaml 
scansettingbinding.compliance.openshift.io/ocp4-pci-dss-u created
% oc get suite -w
NAME             PHASE     RESULT
ocp4-pci-dss-d   RUNNING   NOT-AVAILABLE
ocp4-pci-dss-u   RUNNING   NOT-AVAILABLE
^C
% oc get pod
NAME                                                       READY   STATUS                  RESTARTS      AGE
compliance-operator-6bcb4bf785-4gwmj                       1/1     Running                 0             33m
ocp4-openshift-compliance-pp-784dc44c8c-hn5bb              1/1     Running                 0             33m
rhcos4-openshift-compliance-pp-794d6bc5b5-4mlcm            1/1     Running                 0             33m
upstream-ocp4-openshift-compliance-pp-578d4789f9-qd54z     1/1     Running                 0             8m29s
upstream-ocp4-pci-dss-api-checks-pod                       0/2     Init:CrashLoopBackOff   5 (18s ago)   3m59s
upstream-ocp4-pci-dss-rs-8698d97cf5-bj6d2                  1/1     Running                 0             3m59s
upstream-rhcos4-openshift-compliance-pp-5ffbc9d7ff-vhgmm   1/1     Running                 0             8m28s
% oc logs pod/upstream-ocp4-pci-dss-api-checks-pod --all-containers
...
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
FATAL:Error fetching resources: couldn't filter '{
  "metadata": {},
  "items": null
}': cannot iterate over: null
Error from server (BadRequest): container "log-collector" in pod "upstream-ocp4-pci-dss-api-checks-pod" is waiting to start: PodInitializing
  2. Create an ssb with ocp4-pci-dss:
% cat ssb_pci_d.yaml 
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: ocp4-pci-dss-d
  namespace: openshift-compliance
profiles:
  - name: ocp4-pci-dss
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
% oc apply -f ssb_pci_d.yaml  
scansettingbinding.compliance.openshift.io/ocp4-pci-dss-d created
% oc get suite
NAME             PHASE     RESULT
ocp4-pci-dss-d   RUNNING   NOT-AVAILABLE
ocp4-pci-dss-u   RUNNING   NOT-AVAILABLE
% oc get pod
NAME                                                       READY   STATUS                  RESTARTS        AGE
compliance-operator-6bcb4bf785-4gwmj                       1/1     Running                 0               43m
ocp4-openshift-compliance-pp-784dc44c8c-hn5bb              1/1     Running                 0               43m
ocp4-pci-dss-api-checks-pod                                0/2     Init:Error              1 (16s ago)     23s
ocp4-pci-dss-rs-5fd8b89b49-t5rw9                           1/1     Running                 0               23s
rhcos4-openshift-compliance-pp-794d6bc5b5-4mlcm            1/1     Running                 0               43m
upstream-ocp4-openshift-compliance-pp-578d4789f9-qd54z     1/1     Running                 0               18m
upstream-ocp4-pci-dss-api-checks-pod                       0/2     Init:CrashLoopBackOff   7 (2m24s ago)   14m
upstream-ocp4-pci-dss-rs-8698d97cf5-bj6d2                  1/1     Running                 0               14m
upstream-rhcos4-openshift-compliance-pp-5ffbc9d7ff-vhgmm   1/1     Running                 0               18m
% oc logs pod/ocp4-pci-dss-api-checks-pod --all-containers
...
Fetching URI: '/api/v1/namespaces/-/pods?labelSelector=app%3Dkube-controller-manager'
FATAL:Error fetching resources: couldn't filter '{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"156778"},"items":[]}
': cannot iterate over: null
Error from server (BadRequest): container "log-collector" in pod "ocp4-pci-dss-api-checks-pod" is waiting to start: PodInitializing

@yuumasato
Member Author

yuumasato commented Apr 29, 2024

Fetching URI: '/api/v1/namespaces/-/pods?labelSelector=app%3Dkube-controller-manager'
FATAL:Error fetching resources: couldn't filter '{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"94867"},"items":[]}
': cannot iterate over: null

I wonder if we should exclude this error; it seems like we should fail here, because items should never be empty.

@Vincent056 But the error happens when trying to fetch machineconfigs. On 4.16 HyperShift there are no machineconfigs at all, neither numbered nor rendered ones.

Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'                                                                                      
debug: Applying filter '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)' to path '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Error while filtering: cannot iterate over: null                        
debug: Persisting warnings to output file                                      
FATAL:Error fetching resources: couldn't filter '{                             
  "metadata": {},            
  "items": null                                                                                                                                                
}': cannot iterate over: null          
Error from server (BadRequest): container "log-collector" in pod "upstream-ocp4-pci-dss-api-checks-pod" is waiting to start: PodInitializing

@xiaojiey Do you know whether there are any machineconfigs on 4.15 or older?
Also, do you know if CO 1.4.0 works on 4.15 HyperShift?

@BhargaviGudi
Collaborator

@yuumasato Compliance Operator v1.4.0 works as expected on a 4.15 HyperShift hosted cluster:

$ oc get csv
NAME                         DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v1.4.0   Compliance Operator   1.4.0                Succeeded
$ oc get sub
NAME                  PACKAGE               SOURCE             CHANNEL
compliance-operator   compliance-operator   redhat-operators   stable
$ oc compliance bind -N test profile/ocp4-pci-dss profile/ocp4-pci-dss-node
Creating ScanSettingBinding test
$ oc get scan
NAME                       PHASE   RESULT
ocp4-pci-dss               DONE    NON-COMPLIANT
ocp4-pci-dss-node-worker   DONE    NON-COMPLIANT
$ oc get pods
NAME                                             READY   STATUS    RESTARTS   AGE
compliance-operator-df9b877bb-s4q75              1/1     Running   0          6m37s
ocp4-openshift-compliance-pp-c9b54f7fc-95zsf     1/1     Running   0          6m29s
rhcos4-openshift-compliance-pp-89dbf5867-7pj7n   1/1     Running   0          6m29s

@yuumasato yuumasato force-pushed the handle-no-objects-to-iterate-on-filter branch from 9a73e58 to d12ed7c on April 30, 2024 14:46
@openshift-ci openshift-ci bot removed the lgtm label Apr 30, 2024
@yuumasato
Member Author

@BhargaviGudi I have updated the patch.

Also, could you check if CO 1.4.0 works as expected on OCP 4.16 too? Thanks a lot!

@yuumasato yuumasato changed the title Don't fatal error when filter cannot iterate OCPBUGS-33067: Don't fatal error when filter cannot iterate Apr 30, 2024
@openshift-ci-robot
Collaborator

@yuumasato: This pull request references Jira Issue OCPBUGS-33067, which is invalid:

  • expected the bug to target the "4.16.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

A jq filter may expect to iterate over a list of results, but it can happen that no result is returned.
Let's not fatal error when this happens.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

A jq filter may expect to iterate over a list of results, but it can
happen that no result is returned.
@yuumasato yuumasato force-pushed the handle-no-objects-to-iterate-on-filter branch from d12ed7c to 8042b4f on April 30, 2024 15:12
@@ -554,6 +557,11 @@ func filter(ctx context.Context, rawobj []byte, filter string) ([]byte, error) {
}
if err, ok := v.(error); ok {
DBG("Error while filtering: %s", err)
// gojq may return a diverse set of internal errors caused by null values.
// These errors happen when a piped filter ends up acting on a null value.
if strings.HasSuffix(err.Error(), ": null") {
Member Author


I'm not fond of the approach of checking the error string suffix.
But gojq may return private error types, so we cannot get more insight into the filter value error.
The returned error is not even a gojq.HaltError. 😕
For this specific bug the error returned is gojq.iteratorError.

Member Author


Examples of errors I have seen:

Fetching URI: '/api/v1/namespaces/-/pods?labelSelector=app%3Dkube-controller-manager'
debug: Applying filter '[[.items[0].spec.containers[0].args[] | select(. | match("--root-ca-file") )] | length | if . ==1 then true else false end]' to path '/api/v1/namespaces/-/pods?labelSelector=app%3Dkube-controller-manager'
debug: Error while filtering: cannot iterate over: null
debug: Error type:, [*gojq.iteratorError]
debug: Applying filter '[.items[0].spec.containers[0].command | join(" ")]' to path '/api/v1/namespaces/-/pods?labelSelector=app%3Detcd'
debug: Error while filtering: join(" ") cannot be applied to: null
debug: Error type:, [*gojq.func1TypeError]
debug: Persisting warnings to output file
FATAL:Error fetching resources: couldn't filter '{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"288952"},"items":[]}': join(" ") cannot be applied to: null

@@ -0,0 +1,4 @@
{
"metadata": {},
"items": null
Member Author


It is weird that on 4.16 HyperShift, through the CLI I get "items": [], while the API seems to return "items": null.

The gojq CLI behaves consistently with the errors we are facing:

$ echo '{"metadata": {}, "items": []}' | gojq '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)'
[]
$ echo '{"metadata": {}, "items": null}' | gojq '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)'
gojq: cannot iterate over: null
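
For what it's worth, a jq-level workaround (a sketch, not what this patch implements) would be to default a null .items to an empty list with the // alternative operator, which makes both response shapes behave the same:

$ echo '{"metadata": {}, "items": null}' | gojq '[(.items // [])[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)'
[]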


Yeah, that seems like an API change in the machine config API. Maybe there was an alpha-to-beta or beta-to-stable jump between those versions.

@BhargaviGudi
Collaborator

BhargaviGudi commented May 2, 2024

@BhargaviGudi I have updated the patch.

Also, could you check if CO 1.4.0 works as expected on OCP 4.16 too? Thanks a lot!

@yuumasato The issue is reproducible with 4.16.0-0.nightly-2024-05-01-111315 + compliance-operator.v1.4.0 on a HyperShift hosted cluster

$ oc get csv
NAME                         DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v1.4.0   Compliance Operator   1.4.0                Succeeded
$ oc compliance bind -N test profile/ocp4-pci-dss profile/ocp4-pci-dss-node
Creating ScanSettingBinding test
$ oc get pods
NAME                                             READY   STATUS                  RESTARTS      AGE
compliance-operator-df9b877bb-fj6df              1/1     Running                 0             7m42s
ocp4-openshift-compliance-pp-c9b54f7fc-tjt5r     1/1     Running                 0             7m33s
ocp4-pci-dss-api-checks-pod                      0/2     Init:CrashLoopBackOff   4 (30s ago)   2m24s
ocp4-pci-dss-rs-7c458655c-v26ll                  1/1     Running                 0             2m24s
rhcos4-openshift-compliance-pp-89dbf5867-ww5bz   1/1     Running                 0             7m33s

However, the issue is not observed on a normal cluster.

@BhargaviGudi
Collaborator

BhargaviGudi commented May 2, 2024

Verification passed with 4.16.0-0.nightly-2024-05-01-111315 + compliance-operator built from the #509 PR code + with/without the ComplianceAsCode/content#11906 code.
Verification was done on both a normal cluster and a HyperShift hosted cluster.

Create an ssb with the upstream-ocp4-pci-dss and upstream-ocp4-pci-dss-node profiles (from ComplianceAsCode/content#11906):

$ oc compliance bind -N test profile/upstream-ocp4-pci-dss profile/upstream-ocp4-pci-dss-node
Creating ScanSettingBinding test
$ oc get suite
NAME   PHASE   RESULT
test   DONE    NON-COMPLIANT
$ oc get scan
NAME                                PHASE   RESULT
upstream-ocp4-pci-dss               DONE    NON-COMPLIANT
upstream-ocp4-pci-dss-node-master   DONE    COMPLIANT
upstream-ocp4-pci-dss-node-worker   DONE    COMPLIANT
$ oc get pods
NAME                                                       READY   STATUS    RESTARTS      AGE
compliance-operator-6c47bf85f9-lxjfx                       1/1     Running   1 (59m ago)   59m
ocp4-openshift-compliance-pp-54fc68479-5nxtl               1/1     Running   0             59m
rhcos4-openshift-compliance-pp-b6df7b65c-cb6gs             1/1     Running   0             59m
upstream-ocp4-openshift-compliance-pp-58664766df-bghpv     1/1     Running   0             7m48s
upstream-rhcos4-openshift-compliance-pp-8576685445-xkft8   1/1     Running   0             7m46s
$ oc get ccr -l compliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL  
NAME                                                          STATUS   SEVERITY
upstream-ocp4-pci-dss-api-server-encryption-provider-cipher   FAIL     medium
upstream-ocp4-pci-dss-audit-profile-set                       FAIL     medium

Create an ssb with ocp4-pci-dss and ocp4-pci-dss-node:

$ oc compliance bind -N test profile/ocp4-pci-dss profile/ocp4-pci-dss-node
Creating ScanSettingBinding test
$ oc get suite
NAME   PHASE   RESULT
test   DONE    NON-COMPLIANT
$ oc get pods
NAME                                             READY   STATUS    RESTARTS        AGE
compliance-operator-6c47bf85f9-lxjfx             1/1     Running   1 (2m54s ago)   2m59s
ocp4-openshift-compliance-pp-54fc68479-5nxtl     1/1     Running   0               2m52s
rhcos4-openshift-compliance-pp-b6df7b65c-cb6gs   1/1     Running   0               2m52s
$ oc get ccr -l compliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL  
NAME                                                 STATUS   SEVERITY
ocp4-pci-dss-api-server-encryption-provider-cipher   FAIL     medium
ocp4-pci-dss-audit-profile-set                       FAIL     medium

@yuumasato
Member Author

Thank you for testing @BhargaviGudi

For visibility, I'm posting the warnings that are added to the scan:

$ oc get scan -oyaml upstream-ocp4-pci-dss
apiVersion: compliance.openshift.io/v1alpha1                            
kind: ComplianceScan                                                    
metadata:                              
...
  warnings: |-
    could not fetch /api/v1/namespaces/openshift-kube-controller-manager/configmaps/config: configmaps "config" not found
    could not fetch /api/v1/namespaces/openshift-kube-apiserver/configmaps/config: configmaps "config" not found
    could not fetch /api/v1/namespaces/openshift-kube-apiserver/configmaps/config: configmaps "config" not found
    could not fetch /api/v1/namespaces/openshift-kube-apiserver/configmaps/config: configmaps "config" not found
    couldn't filter '{
      "metadata": {},
      "items": null
    }': Skipping empty filter result from '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)': no value was returned from the filter
    couldn't filter '{
      "metadata": {},
      "items": null
    }': Skipping empty filter result from '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.config.storage.luks[0].clevis != null)': no value was returned from the filter
    could not fetch /apis/machine.openshift.io/v1beta1/machinesets?limit=500: the server could not find the requested resource

In OCP 4.16 HyperShift no MachineConfig is available / visible:

$ oc get --raw /api/v1/namespaces/openshift-kube-apiserver/configmaps/config              
Error from server (NotFound): configmaps "config" not found
$ oc get mc
No resources found

@rhmdnd

rhmdnd commented May 2, 2024

/test e2e-aws-serial

@yuumasato
Member Author

yuumasato commented May 2, 2024

For comparison, on 4.15 HyperShift:

CO debug logs:

Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Encountered non-fatal error to be persisted in the scan: failed to list MachineConfigs: failed to get API group resources: unable to retrieve the complete list of server APIs: machineconfiguration.openshift.io/v1: the server could not find the requested resource
oc get scan -oyaml upstream-ocp4-pci-dss
apiVersion: compliance.openshift.io/v1alpha1                            
kind: ComplianceScan                                                    
metadata:                              
...
  warnings: |-
    could not fetch /api/v1/namespaces/openshift-kube-apiserver/configmaps/config: configmaps "config" not found                                               
oc get --raw /api/v1/namespaces/openshift-kube-apiserver/configmaps/config 
Error from server (NotFound): configmaps "config" not found
oc get mc
error: the server doesn't have a resource type "mc"
oc create -f ~/openshift/co/objects/machineconfig.yaml 
error: resource mapping not found for name: "50-infra" namespace: "" from "/home/wsato/openshift/co/objects/machineconfig.yaml": no matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
ensure CRDs are installed first

@yuumasato
Member Author

yuumasato commented May 2, 2024

In conclusion, CO in OCP 4.16 HyperShift obtains a different response for URI /apis/machineconfiguration.openshift.io/v1/machineconfigs than it did on OCP 4.15 HyperShift.

While in 4.15 CO got a response that it failed to list the MachineConfigs, in 4.16 CO gets a list with no MachineConfigs.

@yuumasato
Member Author

/test all

@rhmdnd

rhmdnd commented May 3, 2024

/test e2e-aws-parallel

@@ -0,0 +1,4 @@
{


Super nit-picky here, but we could elaborate on the name. To me, an empty machine config list is items: []. But we could name it nil_machineconfig_list.json.

Also, this is certainly something I can bike shed later in a separate PR.

rawmc, readErr = io.ReadAll(nsFile)
Expect(readErr).To(BeNil())
})
It("skips filter piping errors", func() {


nit: we could include the nil aspect in the test name here, so it's more specific about what case we're testing for.

It("gracefully handles nil item lists", func() {


@rhmdnd rhmdnd left a comment


/lgtm


openshift-ci bot commented May 3, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rhmdnd, Vincent056, yuumasato

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@rhmdnd

rhmdnd commented May 3, 2024

/jira refresh

@openshift-ci-robot
Collaborator

@rhmdnd: This pull request references Jira Issue OCPBUGS-33067, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.16.0) matches configured target version for branch (4.16.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @xiaojiey

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Collaborator

@yuumasato: This pull request references Jira Issue OCPBUGS-33067, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.16.0) matches configured target version for branch (4.16.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @xiaojiey

In response to this:

A jq filter may expect to iterate over a list of results, but it can happen that no result is returned.
Let's not fatal error when this happens.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@rhmdnd

rhmdnd commented May 3, 2024

Adding qe-approved since @BhargaviGudi verified the change.

@GroceryBoyJr
Collaborator

/label docs-approved

@openshift-merge-bot openshift-merge-bot bot merged commit e9ca64b into ComplianceAsCode:master May 3, 2024
13 checks passed
@openshift-ci-robot
Collaborator

@yuumasato: Jira Issue OCPBUGS-33067: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-33067 has been moved to the MODIFIED state.

In response to this:

A jq filter may expect to iterate over a list of results, but it can happen that no result is returned.
Let's not fatal error when this happens.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@yuumasato yuumasato deleted the handle-no-objects-to-iterate-on-filter branch May 6, 2024 06:57