
100% CPU usage after closing pod log terminal #3033

Closed
rade opened this issue Jan 18, 2018 · 1 comment
rade (Member) commented Jan 18, 2018

After closing a pod log terminal, the CPU usage of the probe on the host on which the pod resides jumps by 100% and stays there; it looks like it's busy-looping on one core.

I've observed this in several environments, using both Chrome and Firefox.

Container logs don't suffer from this.

Popping out the pod log terminal and then closing it does not trigger the problem.

All the probes I've tested with were running 1.7.0. I also checked a cluster running 1.6.7, where the problem does not occur.

I suspect #3013 may have introduced this.

rade added the bug (Broken end user or developer functionality; not working as the developers intended it) label Jan 18, 2018
rade (Member, Author) commented Jan 18, 2018

I grabbed a goroutine dump from one of the spinning probes and found:

goroutine 23452 [runnable]:
github.com/weaveworks/scope/probe/kubernetes.(*logReadCloser).Read(0xc4216ea8c0, 0xc421091000, 0x400, 0x400, 0x0, 0xc42105a600, 0xc421ffbd10)
        /go/src/github.com/weaveworks/scope/probe/kubernetes/logreadcloser.go:88 +0x365
go.(*struct { io.Reader; io.Writer }).Read(0xc4213415a0, 0xc421091000, 0x400, 0x400, 0x400, 0x0, 0x0)
        <autogenerated>:1 +0x5a
github.com/weaveworks/scope/common/xfer.(*pipe).CopyToWebsocket.func2(0x2f22780, 0xc4213415a0, 0xc421b41260, 0xc421275d50, 0x2f31880, 0xc421ffbd00)
        /go/src/github.com/weaveworks/scope/common/xfer/pipes.go:141 +0x9e
created by github.com/weaveworks/scope/common/xfer.(*pipe).CopyToWebsocket
        /go/src/github.com/weaveworks/scope/common/xfer/pipes.go:138 +0x1a8

Here's a theory: the channels in the select were closed by logReadCloser.Close, so receiving from them always succeeds immediately, and the Read loop spins instead of blocking.
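
If that theory holds, the pattern is easy to reproduce outside Scope. Here's a minimal, self-contained sketch (the channel names are made up; this is not the actual logReadCloser code) showing that once a channel is closed, its case in a select is always ready, so a read loop built around it spins instead of blocking:

```go
package main

import "fmt"

func main() {
	dataCh := make(chan byte)     // stands in for the channel carrying log data; never written to here
	stopCh := make(chan struct{}) // stands in for the channel that Close() closes

	close(stopCh) // simulate the terminal being closed

	spins := 0
	for i := 0; i < 1_000_000; i++ {
		select {
		case <-dataCh:
			// would copy log data here
		case <-stopCh:
			// Once stopCh is closed, this case is ready on every iteration,
			// so a Read loop written this way never blocks and burns a core.
			spins++
		}
	}
	fmt.Println("non-blocking receives from closed channel:", spins) // 1000000

	// Typical fixes: have Read return io.EOF once it sees the channel closed,
	// or set the channel variable to nil so its case can never be selected.
}
```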

rade added the k8s (Pertains to integration with Kubernetes) label Jan 19, 2018
rbruggem self-assigned this Jan 19, 2018