
Unable to create new kOps cluster in OpenStack #12980

Closed
zetaab opened this issue Dec 16, 2021 · 5 comments · Fixed by #13032
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@zetaab
Member

zetaab commented Dec 16, 2021

/kind bug

1. What kops version are you running? The command kops version will display this information.

master

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.22.3

3. What cloud provider are you using?

openstack

4. What commands did you run? What is the simplest way to reproduce this issue?

kops create cluster \
  --cloud openstack \
  --name draintest.k8s.local \
  --state ${KOPS_STATE_STORE} \
  --zones x,y,z \
  --network-cidr 10.2.0.0/16 \
  --image debian-11-221121-devops \
  --master-count=3 \
  --node-count=2 \
  --node-size m1.medium \
  --master-size m1.medium \
  --etcd-storage-type solidfire \
  --topology private \
  --bastion \
  --networking calico \
  --api-loadbalancer-type public \
  --os-octavia=true \
  --os-ext-net x-nap \
  --os-ext-subnet ext-ha-v4 \
  --os-lb-floating-subnet ext-ha-v4

kops update cluster --name draintest.k8s.local --yes --admin

5. What happened after the commands executed?

I can see the following in the logs:

W1216 11:12:59.589334   56693 executor.go:139] error running task "SecurityGroupRule/IPv4-ingress-tcp-from-bastion.draintest.k8s.local-to-::/0-22-22" (9m42s remaining to succeed): error creating SecurityGroupRule in SG bastion.draintest.k8s.local: error creating security group rule {ingress  IPv4 4c7c0afd-7131-483f-bb9e-d5b505dd7db2 22 22 tcp  ::/0 }: Bad request with: [POST https://sdc.elisa.fi:13696/v2.0/security-group-rules], error message: {"NeutronError": {"type": "SecurityGroupRuleParameterConflict", "message": "Conflicting value ethertype IPv4 for CIDR ::/0", "detail": ""}}
I1216 11:12:59.589382   56693 executor.go:111] Tasks: 111 done / 150 total; 14 can run
W1216 11:13:15.975283   56693 executor.go:139] error running task "SecurityGroupRule/IPv4-ingress-tcp-from-bastion.draintest.k8s.local-to-::/0-22-22" (9m25s remaining to succeed): error creating SecurityGroupRule in SG bastion.draintest.k8s.local: error creating security group rule {ingress  IPv4 4c7c0afd-7131-483f-bb9e-d5b505dd7db2 22 22 tcp  ::/0 }: Bad request with: [POST https://sdc.elisa.fi:13696/v2.0/security-group-rules], error message: {"NeutronError": {"type": "SecurityGroupRuleParameterConflict", "message": "Conflicting value ethertype IPv4 for CIDR ::/0", "detail": ""}}
I1216 11:13:15.975329   56693 executor.go:111] Tasks: 124 done / 150 total; 4 can run

The problem is that kOps now adds ::/0 under spec.kubernetesApiAccess and spec.sshAccess by default. However, this does not work if IPv6 is not enabled in the OpenStack network.

The temporary workaround is to run kops edit cluster and remove those ::/0 entries. After that, kops update cluster works.
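
For reference, the relevant part of the cluster spec after that edit looks roughly like this (illustrative excerpt; only the ::/0 entries are removed):

spec:
  kubernetesApiAccess:
  - 0.0.0.0/0
  sshAccess:
  - 0.0.0.0/0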

@k8s-ci-robot added the kind/bug label Dec 16, 2021
@ching-kuo
Contributor

ching-kuo commented Dec 19, 2021

Hi,

I'm also hitting this issue, and I'm currently using the same temporary workaround you mentioned.
It seems to be caused by #11763, which enables IPv6 access by default.

This currently affects both security group rule and load balancer listener creation.

I'm not sure yet which direction the fix should take.

@olemarkus
Member

I am thinking InitDefaults should only add IPv6 where it does no harm, so test for AWS/GCP before adding the IPv6 item. A sketch of that idea follows below.

/cc @johngmyers
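
A hypothetical sketch of that suggestion (function and provider names are illustrative, not kOps' actual API):

package defaults

// defaultAccessCIDRs illustrates the idea above: always allow the IPv4
// wildcard, but only include the IPv6 wildcard in the
// kubernetesApiAccess/sshAccess defaults on clouds known to accept it.
func defaultAccessCIDRs(cloudProvider string) []string {
	access := []string{"0.0.0.0/0"}
	switch cloudProvider {
	case "aws", "gce":
		access = append(access, "::/0")
	}
	return access
}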

@zetaab
Member Author

zetaab commented Dec 19, 2021

Is there a mechanism in the kOps config where we could check whether IPv6 is enabled or not? We could use that to decide whether we need to add ::/0.

@johngmyers
Member

I am thinking the cloud provider code should be handling this, omitting IPv6 addresses from cloud-specific firewall APIs that don't accept them. It is the place that has the most context about what clouds or parts of the cloud support IPv6.

@johngmyers
Member

"is ipv6 enabled" is not a simple binary question. There is the IsIPv6Only() receiver, but that only returns information about the pod network. It is perfectly reasonable for a cluster with an IPv4-only pod network to receive connections to the Kubernetes API over IPv6.

ching-kuo added a commit to ching-kuo/kops that referenced this issue Dec 25, 2021

When creating a security group rule, the default EtherType is IPv4. Currently, the AdminAccess default includes an IPv6 CIDR, in which case the EtherType should be changed to IPv6.

This commit fixes the issue by checking whether the CIDR is IPv6 before creating the rule; if it is, the EtherType is set to IPv6.

Fixes: kubernetes#12980
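
A minimal sketch of the check described in that commit message, using only the Go standard library (the actual change lives in kOps' OpenStack security group task; names here are illustrative):

package main

import (
	"fmt"
	"net"
)

// etherTypeForCIDR returns the Neutron EtherType ("IPv4" or "IPv6")
// matching the address family of the given CIDR, so an IPv6 CIDR such
// as ::/0 is no longer submitted with the default EtherType of IPv4.
func etherTypeForCIDR(cidr string) (string, error) {
	ip, _, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", fmt.Errorf("invalid CIDR %q: %w", cidr, err)
	}
	if ip.To4() == nil {
		return "IPv6", nil
	}
	return "IPv4", nil
}

func main() {
	for _, cidr := range []string{"0.0.0.0/0", "::/0"} {
		et, err := etherTypeForCIDR(cidr)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> EtherType %s\n", cidr, et)
	}
}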