
Sonobuoy Scanner displays empty report #84

Closed
charlesakalugwu opened this issue Sep 29, 2017 · 6 comments

@charlesakalugwu

Hi!

I tried out the Sonobuoy Scanner tool on a local Vagrant setup of a Kubernetes 1.8.0-rc.1 cluster. The tests finished after about 50 minutes, but at the end the UI showed an empty report.

Could someone have a look?

https://scanner.heptio.com/1edc4542a2ebd2aecce43730f51fecb6/diagnostics/

@charlesakalugwu charlesakalugwu changed the title Heptio Scanner displays empty report Sonobuoy Scanner displays empty report Sep 29, 2017
@timothysc
Member

@charleslieferando Thank you for the feedback!

We're aware of the issue and are working on a fix ASAP. The most common cause we have seen is a test that blocks or times out, which then cascades into a Sonobuoy timeout. There are details in the artifacts, but we are still working on surfacing them to users.
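
In the meantime, you can dig into those artifacts yourself by pulling logs and results straight off the aggregator pod. A rough sketch, assuming the scanner run uses the standard Sonobuoy defaults for this release (a heptio-sonobuoy namespace, an aggregator pod named sonobuoy, results written under /tmp/sonobuoy); adjust the names to match your cluster:

$ kubectl logs --namespace=heptio-sonobuoy sonobuoy
$ kubectl cp heptio-sonobuoy/sonobuoy:/tmp/sonobuoy ./sonobuoy-results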

@timothysc timothysc added this to the v0.9.0 milestone Sep 29, 2017
@timothysc timothysc self-assigned this Sep 29, 2017
@timothysc
Member

@charleslieferando @chuckha has updated the landing page to show when there are issues. Please let us know if you run into any more problems.

@timothysc timothysc removed their assignment Sep 29, 2017
@charlesakalugwu
Author

It works! Thanks.

@charlesakalugwu
Author

charlesakalugwu commented Nov 21, 2017

I just moved my local Vagrant setup to AWS and am now running Kubernetes 1.8.3. I ran the tests but saw a similar issue: empty results at https://scanner.heptio.com/79d9360dbd18f0f6f24f47cfd6884113/diagnostics/

This time, though, I can see that the server pod from the e2e-tests-prestop-t7l4p namespace has stayed running since the tests started (5 hours ago).

ubuntu@k8s-master-10-59-16-94:~$ kubectl get pods --all-namespaces 

NAMESPACE                 NAME                                   READY     STATUS    RESTARTS   AGE
default                   prometheus-operator-94cc5d5c-7fhrl     1/1       Running   0          10h
default                   wrk-59c4546c4f-btvbm                   2/2       Running   0          9h
e2e-tests-prestop-t7l4p   server                                 1/1       Running   0          5h
istio-system              grafana-8667b7fbfb-6cxjt               1/1       Running   0          9h

Here are the logs from that pod's container:

ubuntu@k8s-master-10-59-16-94:~$ kubectl logs --namespace=e2e-tests-prestop-t7l4p server

2017/11/21 14:56:37 Server version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Could someone have a look again? @timothysc @chuckha
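
In case it's useful, I'm checking for leftover e2e test namespaces like this (the e2e-tests- prefix appears to be the e2e framework's naming convention, going by the listing above):

$ kubectl get namespaces | grep e2e-tests-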

@chuckha
Contributor

chuckha commented Nov 21, 2017

This indicates to me that the e2e tests did not clean up properly. I wonder if they crashed for some reason in the middle of a run?

Is there any chance you captured the sonobuoy pod logs?
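
If you rerun, grabbing them before the run tears down would help. A rough sketch, assuming the default aggregator pod name and namespace for this release (a pod named sonobuoy in the heptio-sonobuoy namespace); adjust to match your setup:

$ kubectl logs --namespace=heptio-sonobuoy sonobuoy > sonobuoy.log

Deleting the leftover e2e-tests-prestop-t7l4p namespace should also clean up that stuck server pod:

$ kubectl delete namespace e2e-tests-prestop-t7l4p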

@charlesakalugwu
Author

Hi. Unfortunately I did not capture the sonobuoy pod logs. I'll rerun the tests and see if the issue occurs again.
