Switch to Calico over Weave #11
the cluster is initialized with Weave's CNI but to really do anything on Packet you need Calico for BGP + MetalLB
Don't object per se to using Calico over Weave - or even having it be configurable - but what do you mean by "to really do anything on Packet"?
Apologies, probably not the best wording. Because Packet doesn't have a managed load balancer service like other public clouds, you'd typically run MetalLB to get ingress into a cluster running on Packet. To run MetalLB you need BGP support, which Weave doesn't offer. There are obviously alternatives like using NodePort, etc., but if the cluster were bootstrapped with Calico (and had the kubeadm pod-network-cidr set to Packet's IP space) it would be more user friendly :)
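(For context: in MetalLB's BGP mode, the speaker peers with the upstream router and announces service IPs over BGP. A minimal sketch using MetalLB's legacy ConfigMap format, with the peer/ASN values Packet documents for local BGP; the pool name and elastic IP block are placeholders, since the real values come from the Packet project's BGP settings:)

```yaml
# Sketch of a MetalLB BGP configuration (legacy ConfigMap format).
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 169.254.255.1   # Packet's per-node BGP peer
      peer-asn: 65530               # Packet side
      my-asn: 65000                 # customer side
    address-pools:
    - name: packet-elastic-ips
      protocol: bgp
      addresses:
      - 147.75.0.0/32               # placeholder: an elastic IP from the project
```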
Ah, that. We have an open issue (and an almost-ready PR) to optionally deploy MetalLB as part of the CCM deployment. Not everyone wants it deployed automatically, but some do. It is blocked on a Packet API issue for IP management, which is in the process of being resolved (I don't own it, so I don't have an ETA :-) ). Once that one is in, we can work with Weave+MetalLB or Calico+MetalLB. More than happy to get a Calico option running here as well.
Is the IP management issue the BGP enablement you have to manually request? That's what I'm waiting for to get some ingress resources set up right now. +1 for a configurable option to specify the CNI. I'm opinionated on Calico for other reasons as well (e.g. IPIP encapsulation, policy management, Istio integration, etc.), so it's great to have it as a configurable option.
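(As an illustration of the IPIP point: Calico configures encapsulation per address pool. A sketch of a `projectcalico.org/v3` IPPool using the pod CIDR from the spec below; the pool name and `natOutgoing` choice are assumptions, not anything this provider ships:)

```yaml
# Sketch: a Calico IPPool with IPIP encapsulation enabled.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 172.26.0.0/16   # should match the cluster's pod CIDR
  ipipMode: Always      # IPIP encapsulation; CrossSubnet is also an option
  natOutgoing: true     # SNAT pod traffic leaving the pool
```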
I am thinking to put it in the `Cluster` spec:

```yaml
apiVersion: "cluster.k8s.io/v1alpha1"
kind: Cluster
metadata:
  name: test1-dxi4a
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["172.25.0.0/16"]
    pods:
      cidrBlocks: ["172.26.0.0/16"]
    serviceDomain: "cluster.local"
  providerSpec:
    value:
      apiVersion: "packetprovider/v1alpha1"
      kind: "PacketClusterProviderSpec"
      projectID: "585f011b-1b0a-4696-b466-5e42ecce0a33"
      caKeyPair:
        cert: ""
        key: ""
```

just adding it to the providerSpec:

```yaml
apiVersion: "cluster.k8s.io/v1alpha1"
kind: Cluster
metadata:
  name: test1-dxi4a
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["172.25.0.0/16"]
    pods:
      cidrBlocks: ["172.26.0.0/16"]
    serviceDomain: "cluster.local"
  providerSpec:
    value:
      apiVersion: "packetprovider/v1alpha1"
      cni: "calico" # or "weave" or whatever is supported
      kind: "PacketClusterProviderSpec"
      projectID: "585f011b-1b0a-4696-b466-5e42ecce0a33"
      caKeyPair:
        cert: ""
        key: ""
```
I can see it cut both ways. I do like Weave's simplicity, and have been using it for longer. But I met the original Calico engineers back in their Metaswitch days, did performance testing on it for LinuxCon in Berlin and Tokyo a few years back (when we could travel safely...), and loved it, and did a lot of the multi-arch work on it. So, yeah, it has a special place in my heart. :-) Going to get that in asap.
Nice! One other suggestion as you're adding that functionality (which you may already be aware of): it would be great if you modified the default Calico manifest during the apply to match the pod CIDR passed in the cluster spec. If the pod CIDR in the cluster spec uses 172.16.0.0/12, the CNI is initialized with the same range. Reference: https://github.com/packet-labs/kubernetes-bgp#calico It's possible to change it after the fact, obviously, but it can be cumbersome if kubeadm (and subsequently kube-proxy) are created with one pod CIDR block and the CNI uses Calico's default 192.168.0.0/16.
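(One way this could look; a sketch assuming the provider templates the stock calico.yaml manifest rather than anything this repo has committed to: the stock manifest takes the initial pool CIDR from the `CALICO_IPV4POOL_CIDR` environment variable on the calico-node DaemonSet, which is only consulted when the pool is first created, so overriding that one value is enough:)

```yaml
# Excerpt (not the full manifest) of the calico-node DaemonSet from calico.yaml,
# with the pool CIDR overridden to match clusterNetwork.pods.cidrBlocks
# instead of Calico's default 192.168.0.0/16.
containers:
  - name: calico-node
    image: calico/node:v3.14.2   # version is illustrative
    env:
      - name: CALICO_IPV4POOL_CIDR
        value: "172.26.0.0/16"   # from the Cluster spec above
```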
Definitely. One of the nice things about Weave is its cloud service that lets you generate the manifest with the right changes. Calico doesn't have that, but we can make it happen.