kops edit ig can not save changes #12543

Closed
jivco opened this issue Oct 16, 2021 · 2 comments · Fixed by #12545
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

jivco commented Oct 16, 2021

/kind bug

1. What kops version are you running? The command kops version will display
this information.

1.22.0

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

v1.22.2

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
kops edit ig nodes

5. What happened after the commands executed?
The changes could not be saved and kops crashed.

6. What did you expect to happen?
Not to crash :)

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.


apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2021-10-16T12:57:45Z"
  generation: 1
  name: cluster.k8s.local
spec:
  api:
    loadBalancer:
      class: Classic
      type: Public
  assets:
    containerRegistry: registry.google.com
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://my-bucket/cluster.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-2a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-2a
      name: a
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 8.8.8.8/32
  kubernetesVersion: 1.22.2
  masterInternalName: api.internal.cluster.k8s.local
  masterPublicName: api.cluster.k8s.local
  networkCIDR: 172.27.0.0/16
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 8.8.8.8/32
  subnets:
  - cidr: 172.27.32.0/19
    name: us-east-2a
    type: Private
    zone: us-east-2a
  - cidr: 172.27.0.0/22
    name: utility-us-east-2a
    type: Utility
    zone: us-east-2a
  topology:
    dns:
      type: Private
    masters: private
    nodes: private

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-10-16T12:57:49Z"
  labels:
    kops.k8s.io/cluster: cluster.k8s.local
  name: bastions
spec:
  associatePublicIp: false
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211001
  instanceMetadata:
    httpPutResponseHopLimit: 1
    httpTokens: required
  machineType: t3.micro
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  subnets:
  - us-east-2a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-10-16T12:57:48Z"
  labels:
    kops.k8s.io/cluster: cluster.k8s.local
  name: master-us-east-2a
spec:
  associatePublicIp: false
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211001
  instanceMetadata:
    httpPutResponseHopLimit: 3
    httpTokens: required
  machineType: c5.xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-2a
  role: Master
  subnets:
  - us-east-2a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-10-16T12:57:48Z"
  labels:
    kops.k8s.io/cluster: cluster.k8s.local
  name: nodes-us-east-2a
spec:
  associatePublicIp: false
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211001
  instanceMetadata:
    httpPutResponseHopLimit: 1
    httpTokens: required
  machineType: m6i.large
  maxPrice: "0.03"
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-us-east-2a
  role: Node
  rootVolumeSize: 10
  subnets:
  - us-east-2a

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

kops edit ig master-us-east-2a -v 10
I1016 16:40:52.417340    8859 loader.go:372] Config loaded from file:  /home/user/.kube/config
Using cluster from kubectl context: cluster.k8s.local

I1016 16:40:52.417431    8859 factory.go:68] state store s3://my-bucket
I1016 16:40:52.417475    8859 s3context.go:334] unable to read /sys/devices/virtual/dmi/id/product_uuid, assuming not running on EC2: open /sys/devices/virtual/dmi/id/product_uuid: permission denied
I1016 16:40:53.944985    8859 s3context.go:166] unable to get region from metadata:unable to get region from metadata: EC2MetadataRequestError: failed to get EC2 instance identity document
caused by: RequestError: send request failed
caused by: Get "http://169.254.169.254/latest/dynamic/instance-identity/document": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 16:40:53.945030    8859 s3context.go:176] defaulting region to "us-east-1"
I1016 16:40:54.497627    8859 s3context.go:216] found bucket in region "us-east-2"
I1016 16:40:54.498204    8859 s3fs.go:327] Reading file "s3://my-bucket/cluster.k8s.local/config"
I1016 16:40:55.113394    8859 channel.go:106] resolving "stable" against default channel location "https://raw.githubusercontent.com/kubernetes/kops/master/channels/"
I1016 16:40:55.113425    8859 channel.go:126] Loading channel from "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable"
I1016 16:40:55.113439    8859 context.go:216] Performing HTTP request: GET https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable
I1016 16:40:55.389779    8859 channel.go:135] Channel contents: spec:
  images:
    # We put the "legacy" version first, for kops versions that don't support versions ( < 1.5.0 )
    - name: kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2017-07-28
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.4.0 <1.5.0"
    - name: kope.io/k8s-1.5-debian-jessie-amd64-hvm-ebs-2018-08-17
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.5.0 <1.6.0"
    - name: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2018-08-17
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.6.0 <1.7.0"
    - name: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2018-08-17
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.7.0 <1.8.0"
    - name: kope.io/k8s-1.8-debian-stretch-amd64-hvm-ebs-2018-08-17
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.8.0 <1.9.0"
    - name: kope.io/k8s-1.9-debian-stretch-amd64-hvm-ebs-2018-08-17
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.9.0 <1.10.0"
    - name: kope.io/k8s-1.10-debian-stretch-amd64-hvm-ebs-2018-08-17
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.10.0 <1.11.0"
    # Stretch is the default for 1.11 (for nvme)
    - name: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2021-02-05
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.11.0 <1.12.0"
    - name: kope.io/k8s-1.12-debian-stretch-amd64-hvm-ebs-2021-02-05
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.12.0 <1.13.0"
    - name: kope.io/k8s-1.13-debian-stretch-amd64-hvm-ebs-2021-02-05
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.13.0 <1.14.0"
    - name: kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2021-02-05
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.14.0 <1.15.0"
    - name: kope.io/k8s-1.15-debian-stretch-amd64-hvm-ebs-2021-02-05
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.15.0 <1.16.0"
    - name: kope.io/k8s-1.16-debian-stretch-amd64-hvm-ebs-2021-02-05
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.16.0 <1.17.0"
    - name: kope.io/k8s-1.17-debian-stretch-amd64-hvm-ebs-2021-02-05
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.17.0 <1.18.0"
    - name: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211001
      providerID: aws
      architectureID: amd64
      kubernetesVersion: ">=1.18.0"
    - name: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-arm64-server-20211001
      providerID: aws
      architectureID: arm64
      kubernetesVersion: ">=1.20.0"
    - name: cos-cloud/cos-stable-65-10323-99-0
      providerID: gce
      architectureID: amd64
      kubernetesVersion: "<1.16.0-alpha.1"
    - name: "cos-cloud/cos-stable-77-12371-114-0"
      providerID: gce
      architectureID: amd64
      kubernetesVersion: ">=1.16.0 <1.18.0"
    - name: ubuntu-os-cloud/ubuntu-2004-focal-v20210927
      providerID: gce
      architectureID: amd64
      kubernetesVersion: ">=1.18.0"
    - name: Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:20.04.202110010
      providerID: azure
      architectureID: amd64
      kubernetesVersion: ">=1.20.0"
  cluster:
    kubernetesVersion: v1.5.8
    networking:
      kubenet: {}
  kubernetesVersions:
  - range: ">=1.21.0"
    recommendedVersion: 1.21.5
    requiredVersion: 1.21.0
  - range: ">=1.20.0"
    recommendedVersion: 1.20.11
    requiredVersion: 1.20.0
  - range: ">=1.19.0"
    recommendedVersion: 1.19.15
    requiredVersion: 1.19.0
  - range: ">=1.18.0"
    recommendedVersion: 1.18.20
    requiredVersion: 1.18.0
  - range: ">=1.17.0"
    recommendedVersion: 1.17.17
    requiredVersion: 1.17.0
  - range: ">=1.16.0"
    recommendedVersion: 1.16.15
    requiredVersion: 1.16.0
  - range: ">=1.15.0"
    recommendedVersion: 1.15.12
    requiredVersion: 1.15.0
  - range: ">=1.14.0"
    recommendedVersion: 1.14.10
    requiredVersion: 1.14.0
  - range: ">=1.13.0"
    recommendedVersion: 1.13.12
    requiredVersion: 1.13.0
  - range: ">=1.12.0"
    recommendedVersion: 1.12.10
    requiredVersion: 1.12.0
  - range: ">=1.11.0"
    recommendedVersion: 1.11.10
    requiredVersion: 1.11.0
  - range: "<1.11.0"
    recommendedVersion: 1.11.10
    requiredVersion: 1.11.10
  kopsVersions:
  - range: ">=1.22.0-alpha.1"
    recommendedVersion: "1.22.0-beta.1"
    #requiredVersion: 1.22.0
    kubernetesVersion: 1.22.2
  - range: ">=1.21.0-alpha.1"
    recommendedVersion: "1.21.0"
    #requiredVersion: 1.21.0
    kubernetesVersion: 1.21.5
  - range: ">=1.20.0-alpha.1"
    recommendedVersion: "1.21.0"
    #requiredVersion: 1.20.0
    kubernetesVersion: 1.20.11
  - range: ">=1.19.0-alpha.1"
    recommendedVersion: "1.21.0"
    #requiredVersion: 1.19.0
    kubernetesVersion: 1.19.15
  - range: ">=1.18.0-alpha.1"
    recommendedVersion: "1.21.0"
    #requiredVersion: 1.18.0
    kubernetesVersion: 1.18.20
  - range: ">=1.17.0-alpha.1"
    recommendedVersion: "1.21.0"
    #requiredVersion: 1.17.0
    kubernetesVersion: 1.17.17
  - range: ">=1.16.0-alpha.1"
    recommendedVersion: "1.21.0"
    #requiredVersion: 1.16.0
    kubernetesVersion: 1.16.15
  - range: ">=1.15.0-alpha.1"
    recommendedVersion: "1.21.0"
    #requiredVersion: 1.15.0
    kubernetesVersion: 1.15.12
  - range: ">=1.14.0-alpha.1"
    #recommendedVersion: "1.14.0"
    #requiredVersion: 1.14.0
    kubernetesVersion: 1.14.10
  - range: ">=1.13.0-alpha.1"
    #recommendedVersion: "1.13.0"
    #requiredVersion: 1.13.0
    kubernetesVersion: 1.13.12
  - range: ">=1.12.0-alpha.1"
    recommendedVersion: "1.12.1"
    #requiredVersion: 1.12.0
    kubernetesVersion: 1.12.10
  - range: ">=1.11.0-alpha.1"
    recommendedVersion: "1.11.1"
    #requiredVersion: 1.11.0
    kubernetesVersion: 1.11.10
  - range: "<1.11.0-alpha.1"
    recommendedVersion: "1.11.1"
    #requiredVersion: 1.10.0
    kubernetesVersion: 1.11.10
I1016 16:40:55.389996    8859 s3fs.go:327] Reading file "s3://my-bucket/cluster.k8s.local/instancegroup/master-us-east-2a"
I1016 16:40:55.547529    8859 editor.go:127] Opening file with editor [vi /tmp/kops-edit-4294324936yaml]
I1016 16:41:05.203903    8859 aws_cloud.go:1760] Querying EC2 for all valid zones in region "us-east-2"
I1016 16:41:05.203996    8859 request_logger.go:45] AWS request: ec2/DescribeAvailabilityZones
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0x394490d]

goroutine 1 [running]:
k8s.io/kops/upup/pkg/fi/cloudup.PopulateInstanceGroupSpec(0xc000a2ea00, 0xc000940e00, 0x7f09fdeb31d8, 0xc0000ab680, 0xc000553200, 0x0, 0x0, 0x0)
	upup/pkg/fi/cloudup/populate_instancegroup_spec.go:164 +0x2ed
main.updateInstanceGroup(0x5572650, 0xc000136008, 0x55d4308, 0xc000989040, 0xc000553200, 0xc000a2ea00, 0xc00023ae00, 0xc000940e00, 0x0, 0xc0005b9200, ...)
	cmd/kops/edit_instancegroup.go:291 +0xab
main.RunEditInstanceGroup(0x5572650, 0xc000136008, 0xc00071e9f0, 0x551c7e0, 0xc00013c008, 0xc000599180, 0x0, 0x0)
	cmd/kops/edit_instancegroup.go:268 +0xfe8
main.NewCmdEditInstanceGroup.func2(0xc00094ca00, 0xc00038b3e0, 0x1, 0x3, 0x0, 0x0)
	cmd/kops/edit_instancegroup.go:107 +0x5d
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).execute(0xc00094ca00, 0xc00038b3b0, 0x3, 0x3, 0xc00094ca00, 0xc00038b3b0)
	vendor/github.com/spf13/cobra/command.go:856 +0x472
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x78169e0, 0x7866630, 0x0, 0x0)
	vendor/github.com/spf13/cobra/command.go:974 +0x375
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	vendor/github.com/spf13/cobra/command.go:902
main.Execute()
	cmd/kops/root.go:95 +0x8f
main.main()
	cmd/kops/main.go:20 +0x25

9. Anything else we need to know?
Tested on Ubuntu 18 and Ubuntu 20.

johngmyers (Member) commented:
Workaround might be to add a containerd: {} field to the cluster spec.

jivco commented Oct 17, 2021

> Workaround might be to add a containerd: {} field to the cluster spec.

Yes. It works when I add this.
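For reference, the confirmed workaround corresponds to a cluster spec fragment like the following (field placement under spec: is assumed to match the cluster manifest shown above; the rest of the spec is unchanged):

```yaml
# Sketch of the workaround: declare an explicit (empty) containerd
# configuration in the Cluster spec so that PopulateInstanceGroupSpec
# does not hit a nil containerd config during `kops edit ig`.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: cluster.k8s.local
spec:
  containerd: {}   # empty block is sufficient; defaults are filled in by kops
  # ... remainder of the cluster spec as before ...
```

Apply it with `kops edit cluster` (add the `containerd: {}` line under spec:), after which `kops edit ig` should save changes without panicking.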
