
No PolicyReport CRDs found #66

Closed
windowsrefund opened this issue Sep 7, 2021 · 12 comments · Fixed by #69

Comments

@windowsrefund
Contributor

I am running 1.8.9 and see the following log entries when starting my policy-reporter pod. Is the ERROR legit?

2021/09/07 16:00:39 [INFO] UI configured
2021/09/07 16:00:52 [ERROR] No PolicyReport CRDs found
2021/09/07 16:01:09 [INFO] Resource Found: wgpolicyk8s.io/v1alpha1, Resource=clusterpolicyreports
2021/09/07 16:01:09 [INFO] Resource Found: wgpolicyk8s.io/v1alpha2, Resource=policyreports

The following CRDs exist on the system since this cluster is running Kyverno 1.4.2

clusterpolicies.kyverno.io                    2021-09-02T15:13:05Z
clusterreportchangerequests.kyverno.io        2021-09-02T15:13:05Z
generaterequests.kyverno.io                   2021-09-02T15:13:05Z
policies.kyverno.io                           2021-09-02T15:13:05Z
reportchangerequests.kyverno.io               2021-09-02T15:13:05Z
@fjogeleit
Member

Policy Reporter uses policyreports.wgpolicyk8s.io and clusterpolicyreports.wgpolicyk8s.io. If they are not installed you get this error, but Policy Reporter then keeps checking for the CRDs every 5 seconds. So at startup it does not find these CRDs; a few seconds later it finds both and should start working as expected.

The only thing I am wondering about is that it found two different CRD versions. The current stable Kyverno release should use v1alpha1.

@windowsrefund
Contributor Author

Is there some additional information I can provide in order to better understand the potential issue?

> k get crd | grep policyreports
clusterpolicyreports.wgpolicyk8s.io           2021-01-28T21:12:32Z
policyreports.wgpolicyk8s.io                  2021-01-28T21:12:32Z

@fjogeleit
Member

Is there an actual issue? Are you not getting information from Policy Reporter? If it's only the log entry, you can ignore it, because the CRDs were found a few seconds later.

@windowsrefund
Contributor Author

On some clusters, I am not seeing the data in the policy-reporter UI. I just wanted clarity on this error (now understood to be somewhat of a false positive). That said, it's very possible the work I'm currently doing with Network Policies is at the root of the problem. It might be good to add specifics to the project's README about what ingress/egress traffic is needed?

@windowsrefund
Contributor Author

As we've been discussing this, I figured I'd reopen and append what we've found:

> kubectl get crd policyreports.wgpolicyk8s.io -o jsonpath='{.status.storedVersions}'
[v1alpha1]
> kubectl get crd clusterpolicyreports.wgpolicyk8s.io -o jsonpath='{.status.storedVersions}'
[v1alpha1]

@windowsrefund windowsrefund reopened this Sep 8, 2021
@fjogeleit fjogeleit linked a pull request Sep 8, 2021 that will close this issue
@fjogeleit
Member

I could reproduce this error with a deny network policy. Because Policy Reporter uses the Kubernetes API client, the Policy Reporter network policy has to allow egress traffic to the API server (port 6443).
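For reference, an egress rule along these lines would permit that traffic. This is a hedged sketch, not the chart's actual manifest: the policy name, namespace, and pod selector labels are illustrative assumptions and should be matched to the labels your policy-reporter deployment actually carries.

```yaml
# Sketch: allow policy-reporter egress to the Kubernetes API server.
# Name, namespace, and labels are assumptions — adjust to your deployment.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy-reporter-allow-apiserver
  namespace: policy-reporter
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: policy-reporter
  policyTypes:
    - Egress
  egress:
    # API server port; 6443 is the common default but varies by cluster
    - ports:
        - protocol: TCP
          port: 6443
```

Note that some CNI plugins also require a DNS egress rule (UDP/TCP 53) for the client to resolve the API server address.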

I updated the network policy and released it together with your new features in 1.9.0.

@windowsrefund
Contributor Author

Thank you. Testing now...

@windowsrefund
Contributor Author

I've deployed chart 1.9.0 to 2 v1.19.9 clusters.

> helm ls
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
policy-reporter policy-reporter 5               2021-09-09 14:44:12.73445032 +0000 UTC  deployed        policy-reporter-1.9.0   1.8.5

One is working, the other is not. What's interesting are the differences I'm seeing in the logs for the policy-reporter pod. Here are the first few lines from the working pod:

2021/09/09 14:44:19 [INFO] UI configured
2021/09/09 14:44:19 [INFO] Unable to sync Priorities: unknown (get configmaps)
2021/09/09 14:44:19 [INFO] Resource Found: wgpolicyk8s.io/v1alpha1, Resource=clusterpolicyreports
2021/09/09 14:44:19 [INFO] Resource Found: wgpolicyk8s.io/v1alpha1, Resource=policyreports
2021/09/09 14:44:20 [INFO] UI PUSH OK
2021/09/09 14:44:20 [INFO] UI PUSH OK
2021/09/09 14:44:20 [INFO] UI PUSH OK

And now the problem child:

2021/09/09 14:47:42 [INFO] UI configured
2021/09/09 14:48:09 [ERROR] No PolicyReport CRDs found
2021/09/09 14:48:12 [INFO] Resource Found: wgpolicyk8s.io/v1alpha2, Resource=policyreports
2021/09/09 14:48:12 [INFO] Resource Found: wgpolicyk8s.io/v1alpha1, Resource=clusterpolicyreports
2021/09/09 14:48:12 [INFO] Resource Found: wgpolicyk8s.io/v1alpha1, Resource=policyreports
2021/09/09 14:48:12 [INFO] Resource Found: wgpolicyk8s.io/v1alpha2, Resource=clusterpolicyreports

I've verified that each of the 3 netpols is consistent on both clusters. I've also seen the same query results on each.

> kubectl get crd policyreports.wgpolicyk8s.io -o jsonpath='{.status.storedVersions}'
[v1alpha1]
> kubectl get crd clusterpolicyreports.wgpolicyk8s.io -o jsonpath='{.status.storedVersions}'
[v1alpha1]

@fjogeleit
Member

Is it possible that your Kubernetes API server uses a different port than 6443, or has other/additional restrictions? I think the problem is still that the Kubernetes API client can't connect.

@fjogeleit
Member

fjogeleit commented Sep 9, 2021

Release 1.9.1 adds a new value, networkPolicy.kubernetesApiPort, with 6443 as the default. You can change it to fit your needs.
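For a cluster whose API server listens elsewhere, the override could look like this. A sketch based on the value name above; the port 443 is just an illustrative example (some managed platforms expose the API server there):

```yaml
# values.yaml override — the port shown is an example, not a recommendation
networkPolicy:
  kubernetesApiPort: 443
```

Or equivalently on the command line: `helm upgrade policy-reporter policy-reporter/policy-reporter --set networkPolicy.kubernetesApiPort=443`.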

@windowsrefund
Contributor Author

Thank you for all the collaboration. Closing this as 1.9.2 is meeting all expectations on my end.

@fjogeleit
Member

Thank you for your contributions
