Panic in Cloud CIDR Allocator #58181

Closed
negz opened this Issue Jan 12, 2018 · 2 comments

Contributor

negz commented Jan 12, 2018

/kind bug

What happened:
I'm running Kubernetes on GCE (not GKE). I deploy the API server, scheduler, and controller manager to CoreOS 'master' nodes. Note that the etcd cluster runs elsewhere. I'm using GCE's Alias IP Ranges, i.e. I'm running the controller manager with:

--allocate-node-cidrs=true
--cidr-allocator-type=CloudAllocator
--configure-cloud-routes=false

Upon a rolling update of the aforementioned 'master' nodes the controller manager entered crash loop backoff. It seems to be panicking in the cloud CIDR allocator code:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x2f695d1]
goroutine 1318 [running]:
k8s.io/kubernetes/pkg/controller/node/util.RecordNodeStatusChange(0xa6635c0, 0xc4209ee1c0, 0x0, 0x497b23c, 0x10)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/node/util/controller_utils.go:205 +0x71
k8s.io/kubernetes/pkg/controller/node/ipam.(*cloudCIDRAllocator).updateCIDRAllocation(0xc420ae9f20, 0xc42124d110, 0x30, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/node/ipam/cloud_cidr_allocator.go:200 +0xea0
k8s.io/kubernetes/pkg/controller/node/ipam.(*cloudCIDRAllocator).worker(0xc420ae9f20, 0xc42006c720)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/node/ipam/cloud_cidr_allocator.go:149 +0x139
created by k8s.io/kubernetes/pkg/controller/node/ipam.(*cloudCIDRAllocator).Run
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/node/ipam/cloud_cidr_allocator.go:135 +0x174
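The fault above is Go's runtime nil-pointer panic: some code reads a field through a `*Node` pointer that is nil. A minimal sketch of the same failure class, using a hypothetical `Node` type rather than the real `k8s.io/api/core/v1.Node`:

```go
package main

import "fmt"

// Node is a hypothetical stand-in for the Kubernetes node object; the
// real allocator works with *v1.Node from k8s.io/api/core/v1.
type Node struct {
	Name string
}

// statusChangeName mirrors the failing pattern: reading node.Name
// without first checking that node is non-nil.
func statusChangeName(node *Node) (name string, err error) {
	// recover turns the runtime panic into an error so this sketch
	// can demonstrate the failure without crashing the process.
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return node.Name, nil // nil pointer dereference when node == nil
}

func main() {
	var missing *Node // a node that has already been deleted: nil
	if _, err := statusChangeName(missing); err != nil {
		fmt.Println(err) // recovered: runtime error: invalid memory address or nil pointer dereference
	}
}
```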

What you expected to happen:
The controller manager should allocate Alias IP Ranges without panicking.

How to reproduce it (as minimally and precisely as possible):
Still working on this part. We've been running with this setup for some time and this is the first time we've seen it happen.

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T21:07:53Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
$ gcloud compute instances describe REDACTED                                                                                    
No zone specified. Using zone [us-central1-f] for instance: [REDACTED].                                                                              
canIpForward: true                                                                                                                                             
cpuPlatform: Intel Ivy Bridge                                                                                                                                  
creationTimestamp: '2018-01-11T13:17:50.251-08:00'                                                                                                             
deletionProtection: false                                                                                                                                      
description: REDACTED                                                                                                  
disks:                                                                                                                                                         
- autoDelete: true                                                                                                                                             
  boot: true                                                                                                                                                   
  deviceName: persistent-disk-0                                                                                                                                
  index: 0                                                                                                                                                     
  interface: SCSI                                                                                                                                              
  kind: compute#attachedDisk                                                                                                                                   
  licenses:                                                                                                                                                    
  - https://www.googleapis.com/compute/v1/projects/coreos-cloud/global/licenses/coreos-stable                                                                  
  mode: READ_WRITE                                                                                                                                             
  source: https://www.googleapis.com/compute/v1/projects/REDACTED/zones/us-central1-f/disks/REDACTED                                       
  type: PERSISTENT                                                                                                                                             
- autoDelete: true                                                                                                                                             
  boot: false                                                                                                                                                  
  deviceName: varlibdocker                                                                                                                                     
  index: 1                                                                                                                                                     
  interface: SCSI                                                                                                                                              
  kind: compute#attachedDisk                                                                                                                                   
  mode: READ_WRITE                                                                                                                                             
  type: SCRATCH                                                                                                                                                
id: '1290159941171702418'                                                                                                                                      
kind: compute#instance                                                                                                                                         
labelFingerprint: DIYgGjSJI6A=                                                                                                                                 
labels:                                                                                                                                                        
  cluster: REDACTED                                                                                                                                            
  component: kubernetes                                                                                                                                        
machineType: https://www.googleapis.com/compute/v1/projects/REDACTED/zones/us-central1-f/machineTypes/n1-standard-2                                  
metadata: 
name: REDACTED
networkInterfaces:
- accessConfigs:
  - kind: compute#accessConfig
    name: external-nat
    natIP: REDACTED
    type: ONE_TO_ONE_NAT
  aliasIpRanges:
  - ipCidrRange: REDACTED/24
    subnetworkRangeName: REDACTED
  kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/REDACTED-XPN-HOST/global/networks/REDACTED
  networkIP: REDACTED
  subnetwork: https://www.googleapis.com/compute/v1/projects/REDACTED-XPN-HOST/regions/us-central1/subnetworks/REDACTED
scheduling:
  automaticRestart: true
  onHostMaintenance: MIGRATE
  preemptible: false
selfLink: https://www.googleapis.com/compute/v1/projects/REDACTED/zones/us-central1-f/instances/REDACTED
serviceAccounts:
- email: REDACTED@REDACTED.iam.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/cloud-platform
startRestricted: false
status: RUNNING
zone: https://www.googleapis.com/compute/v1/projects/REDACTED/zones/us-central1-f
  • OS (e.g. from /etc/os-release):
$ cat /etc/os-release 
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1576.5.0
VERSION_ID=1576.5.0
BUILD_ID=2018-01-05-1121
PRETTY_NAME="Container Linux by CoreOS 1576.5.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
  • Kernel (e.g. uname -a):
Linux REDACTED.internal 4.14.11-coreos #1 SMP Fri Jan 5 11:00:14 UTC 2018 x86_64 Intel(R) Xeon(R) CPU @ 2.50GHz GenuineIntel GNU/Linux
  • Install tools:
    Bespoke Terraform setup.

  • Other:
    Full controller manager args:

      /hyperkube
      controller-manager
      --kubeconfig=/etc/kubernetes/kubeconfig
      --root-ca-file=/etc/kubernetes/tls-ca.crt
      --service-account-private-key-file=/etc/kubernetes/tls-apiserver.key
      --cluster-signing-cert-file=/etc/kubernetes/tls-ca.crt
      --cluster-signing-key-file=/etc/kubernetes/tls-ca.key
      --cloud-provider=gce
      --cloud-config=/etc/kubernetes/gce.conf
      --allocate-node-cidrs=true
      --cidr-allocator-type=CloudAllocator
      --configure-cloud-routes=false
      --cluster-cidr=172.16.0.0/12
      --service-cluster-ip-range=192.168.0.0/16
      --cluster-name=REDACTED

And gce.conf:

[global]
    multizone = true
    network-project-id = REDACTED-XPN-HOST
    network-name = REDACTED-XPN-HOST
    subnetwork-name = REDACTED-XPN-HOST-SUBNET
@negz
Contributor

negz commented Jan 12, 2018

/sig gcp

@k8s-ci-robot k8s-ci-robot added sig/gcp and removed needs-sig labels Jan 12, 2018

@negz

Contributor

negz commented Jan 12, 2018

k8s-merge-robot added a commit that referenced this issue Jan 13, 2018

Merge pull request #58186 from negz/master
Automatic merge from submit-queue (batch tested with PRs 57266, 58187, 58186, 46245, 56509). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Avoid panic in Cloud CIDR Allocator

**What this PR does / why we need it**:
I suspect a race exists where we attempt to look up the CIDR for a terminating node. By the time `updateCIDRAllocation` is called the node has disappeared. We determine it does not have a cloud CIDR (i.e. Alias IP Range) and attempt to record a `CIDRNotAvailable` node status. Unfortunately we reference `node.Name` while `node` is still nil.

By getting the node before looking up the cloud CIDR we avoid the nil pointer dereference, and potentially fail fast in the case the node has disappeared.

**Which issue(s) this PR fixes**:
Fixes #58181

**Release note**:

```release-note
Avoid panic when failing to allocate a Cloud CIDR (aka GCE Alias IP Range). 
```

k8s-merge-robot added a commit that referenced this issue Jan 23, 2018

Merge pull request #58557 from jingax10/release-1.9
Automatic merge from submit-queue.

Initialize node ahead in case we need to refer to it in error cases

Initialize node ahead of time in case we need to refer to it in error cases. This is a backport of #58186. We cannot backport it intact due to refactor PR #56352.



**What this PR does / why we need it**:

We want to cherry pick to 1.9. Master already has the fix.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58181

**Special notes for your reviewer**:

**Release note**:

```release-note
Avoid controller-manager crash when enabling IP aliases for a Kubernetes cluster.
```