Add options for configuring IPv4 and IPv6 support with Calico #11688
Conversation
Is there an IPv6 CI job?
Not yet. There are a few things left on another PR, and after that I will try to get a CI job to run.
/retest
Not blocking, but it would be good if we could automatically set the IPv*Support fields based on a cluster-level setting as to whether the cluster is v4-only, dual-stack, or v6-only.
{{- if (WithDefaultBool .Networking.Calico.IPv6Support false) }}
- name: CALICO_IPV6POOL_CIDR
  value: "{{ .KubeControllerManager.ClusterCIDR }}"
- name: CALICO_IPV6POOL_NAT_OUTGOING
Why do we need to NAT outgoing IPv6? Could we not assign a routable CIDR to the pool? Or does that require mucking about with ENI mode?
There are a few considerations here:
- the default behavior for Calico is to do outgoing NAT
- this is a setting that will be applied to the default IPv6 pool created on first start
- I don't see how assigning a routable CIDR would be possible until AWS adds routable IPv6 CIDR blocks to each node
With these in mind, this seems a safe default for now. It can easily be changed after initial start and, once there is some way to use routable CIDRs, it can be split into a separate option.
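For concreteness, here is a sketch of the env entries this renders into the calico-node container (the CIDR value is only a placeholder; in the template it comes from the cluster's ClusterCIDR, and the NAT value reflects the default-on behavior described above):

```yaml
# Sketch: rendered calico-node env when IPv6Support is enabled.
# fd00:10:96::/48 is a placeholder for the cluster's actual ClusterCIDR.
- name: CALICO_IPV6POOL_CIDR
  value: "fd00:10:96::/48"
- name: CALICO_IPV6POOL_NAT_OUTGOING
  value: "true"
```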
@@ -84,7 +84,7 @@ data:
          ttl 30
        }
        prometheus :9153
-       forward . /etc/resolv.conf {
+       forward . {{ or (join KubeDNS.UpstreamNameservers " ") "/etc/resolv.conf" }} {
Is this an unrelated change?
Okay, so this is a missing feature from CoreDNS that you need in order not to muck with the node-level DNS? This should perhaps be a separate PR.
I thought about making this a separate PR, but it was a one-liner related to the current work.
As the AWS DNS servers don't support IPv6, this was the easiest way to get CoreDNS to work.
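As an illustration of the template change above: if a cluster spec set upstream nameservers to, say, a public IPv6 resolver (the address here is just an example, not anything from this PR), the rendered Corefile would forward to it directly instead of to the node's resolver:

```
# Rendered with KubeDNS.UpstreamNameservers = ["2606:4700:4700::1111"]:
forward . 2606:4700:4700::1111 {

# Rendered with no upstream nameservers configured (previous behavior):
forward . /etc/resolv.conf {
```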
@@ -3857,8 +3861,17 @@ spec:
  # The default IPv4 pool to create on startup if none exists. Pod IPs will be
  # chosen from this range. Changing this value after installation will have
  # no effect. This should fall within `--cluster-cidr`.
  {{- if (WithDefaultBool .Networking.Calico.IPv6Support false) }}
  - name: CALICO_IPV6POOL_CIDR
    value: "{{ .KubeControllerManager.ClusterCIDR }}"
According to https://kubernetes.io/docs/concepts/services-networking/dual-stack/ the KCM `--cluster-cidr` is, in dual-stack, an IPv4 CIDR and an IPv6 CIDR separated by a comma. It doesn't look like calico/node accepts that syntax in either CALICO_IPV6POOL_CIDR or CALICO_IPV4POOL_CIDR. I think we'd need a template function to parse ClusterCIDR into its two components.
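A minimal sketch of such a helper (hypothetical; the function name and shape are assumptions based on this discussion, not kOps code) could split on the comma and classify each component by address family:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// splitClusterCIDR splits a possibly dual-stack --cluster-cidr value
// ("v4CIDR,v6CIDR" per the Kubernetes dual-stack docs) into its IPv4
// and IPv6 components, so each could feed CALICO_IPV4POOL_CIDR and
// CALICO_IPV6POOL_CIDR separately.
func splitClusterCIDR(clusterCIDR string) (v4, v6 string, err error) {
	for _, c := range strings.Split(clusterCIDR, ",") {
		c = strings.TrimSpace(c)
		ip, _, perr := net.ParseCIDR(c)
		if perr != nil {
			return "", "", perr
		}
		if ip.To4() != nil {
			v4 = c
		} else {
			v6 = c
		}
	}
	return v4, v6, nil
}

func main() {
	v4, v6, err := splitClusterCIDR("100.96.0.0/11,fd00:10:96::/48")
	fmt.Println(v4, v6, err)
}
```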
Actually, I'm okay with not supporting dual-stack in the ClusterCIDR. We should probably have API validation around that, though.
For now, single stack ClusterCIDR is ok from my point of view too.
Dual-stack is also blocked by kubernetes/cloud-provider-aws#230, as the kubelet expects the cloud provider to return both IPv4 and IPv6 for the node.
upup/models/cloudup/resources/addons/networking.projectcalico.org/k8s-1.16.yaml.template
I think the only thing blocking is API validation to prohibit the combination of Calico and a dual-stack ClusterCIDR.
Okay, the existing ClusterCIDR validation prohibits multiple CIDRs.
All of these comments can be followup work.
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: johngmyers. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Suggest API validation to prohibit enabling both IPv4 and IPv6 for Calico.
Hrm, perhaps a single, tri-state setting? With values 'ipv4', 'ipv6', 'dual', default 'ipv4'.
Or key off of how
I think this may be a good way to pull the defaults, but it would still allow overriding for experiments.
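The suggested validation could look roughly like this (a hypothetical sketch; the struct and field names are stand-ins modeled on the discussion, not actual kOps API types):

```go
package main

import (
	"errors"
	"fmt"
)

// CalicoSpec is a stand-in for the Calico networking options discussed
// above; the real kOps types differ.
type CalicoSpec struct {
	IPv4Support bool
	IPv6Support bool
}

// validateCalico rejects enabling both address families at once, since
// the templates do not support dual-stack.
func validateCalico(c *CalicoSpec) error {
	if c.IPv4Support && c.IPv6Support {
		return errors.New("calico: IPv4Support and IPv6Support are mutually exclusive")
	}
	return nil
}

func main() {
	err := validateCalico(&CalicoSpec{IPv4Support: true, IPv6Support: true})
	fmt.Println(err)
}
```

A tri-state enum field, as suggested above, would make this invalid combination unrepresentable instead of merely rejected at validation time.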
In other news, I ran the Conformance tests manually on a fresh cluster and got the following failures:
These are happening because the tests try to connect to the IPv4 address of the service exposed via NodePort. If the tests connected to the IPv6 address instead, they would pass (tested by setting The cloud provider also only provides the IPv4 address for the node, so this is an expected failure.
In retrospect, there shouldn't be
I agree, but this may allow some use cases, like a fake pod dual-stack that works at the node level.
The Calico templates don't support dual-stack. For example, Advanced configurations will undoubtedly require code to configure all the things needed for the particular use case. The fields don't make sense to expose; they just clutter up the API.
I think we might need code in
This is more of a quote from the docs, which are lacking a lot of detail when it comes to IPv6. "hash" means a CRC32 of the hostname, which contains the IP. It has nothing to do with how dual-stack is configured in Calico. Calico allows lots of configuration using calicoctl, outside of the daemonset template, including creating new IP pools. kOps only configures the default IP pool, which is based on ClusterCIDR.
I am aware of that option and will take it into consideration, whether to support it or consider it mutually exclusive with IPv6.
Sample config to get started with AWS, Calico and IPv6, using a kOps cluster with public topology:
Ref: #8432
/cc @justinsb @johngmyers @aojea