
Plugins getting cleaned up too early? #1657

Closed
johnSchnake opened this issue Mar 29, 2022 · 8 comments
Labels: kind/bug (Behavior isn't as expected or intended), lifecycle/triage (Needs reproduction/decisions before advancing), p1-important

Comments

@johnSchnake (Contributor)

What steps did you take and what happened:
Run sonobuoy run -m quick --wait && sonobuoy retrieve -x tmpout && find tmpout/podlogs/sonobuoy. Notice that there are no podlogs for the e2e plugin.

What did you expect to happen:
See the pod logs for the e2e plugin. We're supposed to wait to clean up the pods until after the logs are gathered. I thought the pod stuck around in a Completed state even after both containers were done.

Anything else you would like to add:
This seems to be a regression; I'd have to find the old ticket, but this came up once before, a long time ago.
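
For background on why the timing matters: pod logs can only be read while the pod object still exists in the cluster, so the aggregator has to query them before cleanup deletes the plugin pods. Below is a minimal client-go sketch of that kind of log query (illustrative only, not Sonobuoy's actual code; the namespace, pod, and container names are hypothetical):

```go
package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (out-of-cluster, for illustration).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Hypothetical plugin pod and container names.
	req := clientset.CoreV1().Pods("sonobuoy").GetLogs("sonobuoy-e2e-job-example",
		&corev1.PodLogOptions{Container: "e2e"})

	// This only succeeds while the pod object still exists; once cleanup
	// has deleted it, the logs are gone for good.
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	logs, err := io.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", logs)
}
```

That is why an empty podlogs directory for the plugin points to the pods being deleted before the log query ran.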

johnSchnake added the kind/bug label on Mar 29, 2022
@johnSchnake (Contributor, Author)

Checked and this bug does not occur with v0.50.0; haven't checked other versions yet, though.

A notable difference is that on v0.50.0 the pod reads as 'NotReady 1/2', where the e2e container has completed but the sonobuoy worker is still running.
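
For context, a '1/2 NotReady' reading means one of the pod's two containers has terminated while the other is still running. A small client-go sketch (illustrative only, not Sonobuoy code; the namespace and pod name are made up) that prints each container's state:

```go
package podstate

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printContainerStates reports the state of each container in a plugin pod.
// With the v0.50.0 behavior, the e2e container shows as terminated while the
// sonobuoy worker container is still running, so the pod (and its logs)
// remains available until cleanup.
func printContainerStates(ctx context.Context, clientset kubernetes.Interface) error {
	pod, err := clientset.CoreV1().Pods("sonobuoy").Get(ctx, "sonobuoy-e2e-job-example", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cs := range pod.Status.ContainerStatuses {
		switch {
		case cs.State.Terminated != nil:
			fmt.Printf("%s: terminated (exit code %d)\n", cs.Name, cs.State.Terminated.ExitCode)
		case cs.State.Running != nil:
			fmt.Printf("%s: running\n", cs.Name)
		default:
			fmt.Printf("%s: waiting\n", cs.Name)
		}
	}
	return nil
}
```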

@johnSchnake (Contributor, Author) commented Mar 29, 2022

Works with v0.54 but not v0.55. The issue I was recalling was #1415, which may or may not be related. It was a bug with a specific piece of code that I doubt regressed.

Mistakenly thought v0.55 had the problem, but it didn't. Still investigating.

@johnSchnake (Contributor, Author)

So now I can't reproduce it at all. 👎 I know this was happening because it came up while I was trying to debug a different, new plugin, and I really wanted to see those logs. Keeping this open in case others have similar reports.

johnSchnake added the lifecycle/triage label on Mar 31, 2022
@mtulio commented Apr 28, 2022

I am able to reproduce it. See the steps:

  1. Run the default e2e execution
 $ sonobuoy run --mode=certified-conformance --dns-namespace=openshift-dns \
>         --dns-pod-labels=dns.operator.openshift.io/daemonset-dns=default
INFO[0002] create request issued                         name=sonobuoy namespace= resource=namespaces
INFO[0002] create request issued                         name=sonobuoy-serviceaccount namespace=sonobuoy resource=serviceaccounts
INFO[0002] create request issued                         name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterrolebindings
INFO[0002] create request issued                         name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterroles
INFO[0002] create request issued                         name=sonobuoy-config-cm namespace=sonobuoy resource=configmaps
INFO[0003] create request issued                         name=sonobuoy-plugins-cm namespace=sonobuoy resource=configmaps
INFO[0003] create request issued                         name=sonobuoy namespace=sonobuoy resource=pods
INFO[0003] create request issued                         name=sonobuoy-aggregator namespace=sonobuoy resource=services
  2. Collect the results
$ sonobuoy retrieve
202204281546_sonobuoy_9b51e5d8-47a1-4c52-8ed8-b1772922756a.tar.gz
  3. Check the tarball; the plugin logs are not present under ./podlogs
$ tar xvfz 202204281546_sonobuoy_9b51e5d8-47a1-4c52-8ed8-b1772922756a.tar.gz |grep podlogs
podlogs
podlogs/sonobuoy
podlogs/sonobuoy/sonobuoy
podlogs/sonobuoy/sonobuoy/logs
podlogs/sonobuoy/sonobuoy/logs/kube-sonobuoy.txt

What did you expect to happen:

The plugin logs are present inside the ./podlogs directory.
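
As a quick check on a retrieved tarball, a rough Go sketch like the following (not part of Sonobuoy; the filename is the one from the run above) lists everything under podlogs/, so missing plugin pod logs stand out immediately:

```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("202204281546_sonobuoy_9b51e5d8-47a1-4c52-8ed8-b1772922756a.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	tr := tar.NewReader(gz)

	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// A healthy run should list log files for the plugin pods here, not
		// just podlogs/sonobuoy/sonobuoy/logs/kube-sonobuoy.txt.
		if strings.HasPrefix(hdr.Name, "podlogs/") {
			fmt.Println(hdr.Name)
		}
	}
}
```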

Anything else you would like to add:

I will try to reproduce it with custom plugins and share feedback.

Environment:

  • Sonobuoy version: v0.56.0
  • Kubernetes version (kubectl version): v1.23.5+9ce5071
  • Kubernetes installer & version: OpenShift v4.10
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release):
NAME="Red Hat Enterprise Linux"
VERSION="8.4 (Ootpa)"

@mtulio commented Apr 29, 2022

I just tried with custom plugins and it also did not work; none of the logs from the plugins' pods were included in the tarball.

@johnSchnake (Contributor, Author)

Confirmed 5577b2d as the commit that introduced this bug. When the query logic was refactored, the pod log query accidentally got moved to after the cleanup routine.

I want to do a release today and would love to include a patch for this in it.
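
To make the root cause concrete, here is a minimal sketch of the intended ordering, using hypothetical function names rather than the actual aggregator code: the pod log query has to run while the plugin pods still exist, and cleanup has to come last.

```go
package aggregator

import "context"

// Hypothetical stand-ins for the real steps; names are illustrative only.
func waitForPluginResults(ctx context.Context) error { return nil }
func gatherPodLogs(ctx context.Context) error        { return nil }
func cleanupPlugins(ctx context.Context) error       { return nil }

// finishRun sketches the intended ordering. The regression described above
// effectively moved the pod log query to after cleanup, so the plugin pods
// (and their logs) were already gone when the query ran.
func finishRun(ctx context.Context) error {
	// 1. Wait for plugins to report their results.
	if err := waitForPluginResults(ctx); err != nil {
		return err
	}
	// 2. Query pod logs while the plugin pods still exist.
	if err := gatherPodLogs(ctx); err != nil {
		return err
	}
	// 3. Clean up plugin pods and other resources last.
	return cleanupPlugins(ctx)
}
```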

@johnSchnake (Contributor, Author)

I came down sick on Friday; fixing this up today and getting a release out, though.

@mtulio commented May 11, 2022

@johnSchnake I tested version 0.56.5, and now I can see the plugin logs in the tarball. Thanks!!!
