
panic: runtime error: invalid memory address or nil pointer dereference #11715

Closed
murphye opened this issue Jun 8, 2021 · 8 comments


murphye commented Jun 8, 2021

I built the kops command-line tool from source so I could experiment with IPv6. I am using version 1.22.0-alpha.1 (git-038d908078).

However, the node fails to bootstrap due to a runtime error in nodeup.

journalctl -u kops-configuration.service

Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]: I0608 00:09:22.341303    1467 task.go:102] testing task "File"
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]: panic: runtime error: invalid memory address or nil pointer dereference
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x301c65b]
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]: goroutine 1 [running]:
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]: k8s.io/kops/nodeup/pkg/model.(*AWSEBSCSIDriverBuilder).Build(0xc00000e838, 0xc000883190, 0x0, 0x0)
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]:         nodeup/pkg/model/awsebscsidriver.go:39 +0x5b
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]: k8s.io/kops/upup/pkg/fi/nodeup.(*Loader).Build(0xc00072bd20, 0xc0004fbe00, 0x20, 0x20)
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]:         upup/pkg/fi/nodeup/loader.go:39 +0x344
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]: k8s.io/kops/upup/pkg/fi/nodeup.(*NodeUpCommand).Run(0xc00003c320, 0x4345d40, 0xc000130008, 0x0, 0x0)
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]:         upup/pkg/fi/nodeup/command.go:314 +0x2545
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]: main.main()
Jun 08 00:09:22 ip-172-20-33-37 nodeup[1467]:         cmd/nodeup/main.go:117 +0x1012
Jun 08 00:09:22 ip-172-20-33-37 systemd[1]: kops-configuration.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jun 08 00:09:22 ip-172-20-33-37 systemd[1]: kops-configuration.service: Failed with result 'exit-code'.
Jun 08 00:09:22 ip-172-20-33-37 systemd[1]: Failed to start Run kOps bootstrap (nodeup).
Jun 08 00:09:32 ip-172-20-33-37 systemd[1]: Starting Run kOps bootstrap (nodeup)...
Jun 08 00:09:32 ip-172-20-33-37 nodeup[1489]: nodeup version 1.22.0-alpha.1
Jun 08 00:09:32 ip-172-20-33-37 nodeup[1489]: I0608 00:09:32.503882    1489 s3context.go:213] found bucket in region "us-west-2"
Jun 08 00:09:32 ip-172-20-33-37 nodeup[1489]: I0608 00:09:32.503957    1489 s3fs.go:290] Reading file "s3://k8s-gl00-net-state-store/k8s.gl00.net/cluster.spec"
Jun 08 00:09:32 ip-172-20-33-37 nodeup[1489]: I0608 00:09:32.540912    1489 s3fs.go:290] Reading file "s3://k8s-gl00-net-state-store/k8s.gl00.net/instancegroup/master-us-west-2a"
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.315416    1489 files.go:134] Hash matched for "/var/cache/nodeup/sha256:e77ff3ea404b2e69519ea4dce41cbdf11ae2bcba75a86d409a76e>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.315493    1489 assetstore.go:270] added asset "kubelet" for &{"/var/cache/nodeup/sha256:e77ff3ea404b2e69519ea4dce41cbdf11ae2b>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.459050    1489 files.go:134] Hash matched for "/var/cache/nodeup/sha256:58785190e2b4fc6891e01108e41f9ba5db26e04cebb7c1ac63991>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.459103    1489 assetstore.go:270] added asset "kubectl" for &{"/var/cache/nodeup/sha256:58785190e2b4fc6891e01108e41f9ba5db26e>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.578220    1489 files.go:134] Hash matched for "/var/cache/nodeup/sha256:977824932d5667c7a37aa6a3cbba40100a6873e7bd97e83e8be83>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.578270    1489 assetstore.go:270] added asset "cni-plugins-linux-amd64-v0.8.7.tgz" for &{"/var/cache/nodeup/sha256:977824932d>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.578373    1489 assetstore.go:380] added asset "bandwidth" for &{"/var/cache/nodeup/extracted/sha256:977824932d5667c7a37aa6a3c>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.578388    1489 assetstore.go:380] added asset "bridge" for &{"/var/cache/nodeup/extracted/sha256:977824932d5667c7a37aa6a3cbba>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.578401    1489 assetstore.go:380] added asset "dhcp" for &{"/var/cache/nodeup/extracted/sha256:977824932d5667c7a37aa6a3cbba40>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.578417    1489 assetstore.go:380] added asset "firewall" for &{"/var/cache/nodeup/extracted/sha256:977824932d5667c7a37aa6a3cb>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.578436    1489 assetstore.go:380] added asset "flannel" for &{"/var/cache/nodeup/extracted/sha256:977824932d5667c7a37aa6a3cbb>
Jun 08 00:09:33 ip-172-20-33-37 nodeup[1489]: I0608 00:09:33.578453    1489 assetstore.go:380] added asset "host-device" for &{"/var/cache/nodeup/extracted/sha256:977824932d5667c7a37aa6a>
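
The trace points at AWSEBSCSIDriverBuilder.Build in nodeup/pkg/model/awsebscsidriver.go:39. A panic there is consistent with the builder reading an optional cloudConfig section that deserializes to a nil pointer when the cluster spec omits it, which would also explain why the workaround below (adding a cloudConfig block) avoids the crash. A minimal Go sketch of that pattern, with hypothetical type and field names rather than the actual kops source:

// Illustrative sketch only; type and field names are hypothetical,
// not the actual kops source.
package main

import "fmt"

// Optional cluster-spec sections deserialize to nil pointers when the
// corresponding YAML block is absent.
type AWSEBSCSIDriverSpec struct{ Enabled *bool }
type CloudConfiguration struct{ AWSEBSCSIDriver *AWSEBSCSIDriverSpec }
type ClusterSpec struct{ CloudConfig *CloudConfiguration }

// buildUnsafe mimics a builder that assumes cloudConfig is always present.
func buildUnsafe(spec *ClusterSpec) {
	// spec.CloudConfig is nil when the YAML omits cloudConfig, so selecting
	// a field through it dereferences a nil pointer and the process dies
	// with SIGSEGV, as in the journalctl output above.
	fmt.Println(spec.CloudConfig.AWSEBSCSIDriver)
}

// buildSafe guards each optional level before dereferencing it.
func buildSafe(spec *ClusterSpec) {
	if spec.CloudConfig == nil || spec.CloudConfig.AWSEBSCSIDriver == nil {
		return // feature not configured; nothing to build
	}
	fmt.Println(spec.CloudConfig.AWSEBSCSIDriver.Enabled)
}

func main() {
	spec := &ClusterSpec{} // parsed from a spec without a cloudConfig block
	buildSafe(spec)        // returns quietly
	buildUnsafe(spec)      // panics: invalid memory address or nil pointer dereference
}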


murphye commented Jun 8, 2021

Here is my Cluster configuration:

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2021-06-08T00:01:30Z"
  generation: 2
  name: k8s.gl00.net
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://k8s-gl00-net-state-store/k8s.gl00.net
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.21.1
  masterInternalName: api.internal.k8s.gl00.net
  masterPublicName: api.k8s.gl00.net
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 172.20.32.0/19
    ipv6CIDR: 2600:1f13:0b5a:7211::/64
    name: us-west-2a
    type: Public
    zone: us-west-2a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public


hakman commented Jun 8, 2021

Just add this to your cluster spec and you should be good to go :)

  cloudConfig:
    awsEBSCSIDriver:
      enabled: false
      version: v1.0.0
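
You can add the block with kops edit cluster and then apply it with kops update cluster --yes.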


hakman commented Jun 8, 2021

Also, check #11688 (comment); it may be useful.


murphye commented Jun 8, 2021

Thanks, I will try this tomorrow and let you know how it works.


hakman commented Jun 8, 2021

Thanks, that would be cool.


murphye commented Jun 8, 2021

I was able to get a cluster with IPv6 installed, with a caveat related to ipv6CIDR. I will open another issue for that.

Calico also installed, and I appreciate the reference from that comment. This is the config that worked for me (with the exception of ipv6CIDR).

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: k8s.gl00.net
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  cloudConfig:
    awsEBSCSIDriver:
      enabled: false
      version: v1.0.0
  configBase: s3://k8s-gl00-net-state-store/k8s.gl00.net
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubeAPIServer:
    bindAddress: "::"
  nonMasqueradeCIDR: fd00:10:96::/64
  serviceClusterIPRange: fd00:10:96::/108
  kubeControllerManager:
    clusterCIDR: fd00:10:96::/80
    allocateNodeCIDRs: false
  kubeDNS:
    provider: CoreDNS
    upstreamNameservers:
    - 2620:119:35::35
    - 2620:119:53::53
  kubernetesVersion: 1.21.1
  masterPublicName: api.k8s.gl00.net
  networkCIDR: 172.20.0.0/16
  networking:
    calico:
      ipv4Support: false
      ipv6Support: true
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 172.20.32.0/19
    ipv6CIDR: 2600:1f13:a4b:2200::/56
    name: us-west-2a
    type: Public
    zone: us-west-2a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: k8s.gl00.net
  name: master-us-west-2a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210415
  machineType: t3.xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-west-2a
  role: Master
  subnets:
  - us-west-2a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: k8s.gl00.net
  name: nodes-us-west-2a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210415
  machineType: t3.xlarge
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-us-west-2a
  role: Node
  subnets:
  - us-west-2a


murphye commented Jun 8, 2021

Closing this. Needed to add:

  cloudConfig:
    awsEBSCSIDriver:
      enabled: false
      version: v1.0.0

murphye closed this as completed Jun 8, 2021

hakman commented Jun 9, 2021

Cool. Would you mind explaining your use case for IPv6 a little, and whether you managed to make it work as you wanted?
