
panic on nil access on getting statefulset status #1062

Closed

cehoffman opened this issue May 1, 2018 · 1 comment

@cehoffman

When a StatefulSet is unable to create its initial pods for some reason, such as an exhausted resource quota, its status never gets an observed generation. The check at resourcekinds.go:212 then dereferences a nil pointer.

ts=2018-05-01T01:20:23.152400197Z caller=main.go:422 addr=:3030
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xf85f1a]

goroutine 82 [running]:
github.com/weaveworks/flux/cluster/kubernetes.makeStatefulSetPodController(0xc421549900, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/weaveworks/flux/cluster/kubernetes/resourcekinds.go:212 +0xea
github.com/weaveworks/flux/cluster/kubernetes.(*statefulSetKind).getPodControllers(0x1b6fbb0, 0xc420226960, 0xc4201670a0, 0x14, 0x1, 0x0, 0x0, 0x0, 0x0)
        /go/src/github.com/weaveworks/flux/cluster/kubernetes/resourcekinds.go:203 +0x13d
github.com/weaveworks/flux/cluster/kubernetes.(*Cluster).ImagesToFetch(0xc420226960, 0xc42004e380)
        /go/src/github.com/weaveworks/flux/cluster/kubernetes/kubernetes.go:390 +0x248
github.com/weaveworks/flux/cluster/kubernetes.(*Cluster).ImagesToFetch-fm(0xdf8475800)
        /go/src/github.com/weaveworks/flux/cmd/fluxd/main.go:242 +0x2a
github.com/weaveworks/flux/registry/cache.(*Warmer).Loop(0xc4203a9f00, 0x1ad49e0, 0xc4203fe0c0, 0xc42012ee40, 0xc4203cc590, 0xc4203cc110)
        /go/src/github.com/weaveworks/flux/registry/cache/warming.go:55 +0x89
created by main.main
        /go/src/github.com/weaveworks/flux/cmd/fluxd/main.go:415 +0x423e

The StatefulSet in question that is crashing Flux has a status section like:

    "status": {
      "replicas": 0
    }

It is currently stuck because the user has exhausted their resource quota.
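
For context, the panic is consistent with dereferencing StatefulSetStatus.ObservedGeneration, which in the apps/v1beta1 API is a *int64 that stays nil until the controller first observes the object. Below is a minimal sketch of the kind of nil guard that avoids the crash; the helper names are hypothetical, not Flux's actual code, and it assumes the apps/v1beta1 types:

    package main

    import (
        appsv1beta1 "k8s.io/api/apps/v1beta1"
    )

    // statefulSetUpToDate is a hypothetical helper showing the guard.
    // In apps/v1beta1, Status.ObservedGeneration is *int64 and stays nil
    // until the controller records a generation, e.g. while pod creation
    // is blocked by a resource quota.
    func statefulSetUpToDate(ss *appsv1beta1.StatefulSet) bool {
        if ss.Status.ObservedGeneration == nil {
            // No generation observed yet: report "not up to date"
            // instead of dereferencing a nil pointer.
            return false
        }
        return *ss.Status.ObservedGeneration >= ss.Generation
    }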

squaremo (Member) commented May 4, 2018

Thanks for reporting this ⭐ -- I've merged a fix, which should appear in the next release (as well as the CI builds).
