Merge pull request #3129 from cyclinder/cherry_pick_0.9
Spidercoordinator: it is able to get the CIDR from kubeadm-config
cyclinder committed Jan 25, 2024
2 parents 951959a + 884a6a4 commit db773f7
Showing 7 changed files with 310 additions and 55 deletions.
27 changes: 24 additions & 3 deletions docs/usage/install/overlay/get-started-calico-zh_cn.md
@@ -88,10 +88,31 @@ status:
serviceCIDR:
- 10.233.0.0/18
```

> Currently, Spiderpool first queries the `kube-system/kubeadm-config` ConfigMap to obtain the cluster's Pod and Service subnets. If kubeadm-config does not exist and the cluster subnets cannot be obtained from it, Spiderpool falls back to reading the cluster's Pod and Service subnets from the kube-controller-manager Pod. If the kube-controller-manager component of your cluster runs via `systemd` rather than as a static Pod, Spiderpool still cannot obtain the cluster's subnet information.
>
> 1. If the phase is not `Synced`, Pod creation will be blocked.
>
> 2. If the overlayPodCIDR is incorrect, it may cause communication problems.

If both of the above methods fail, Spiderpool sets status.phase to NotReady, which blocks Pod creation. The problem can be resolved as follows:

- Manually create the kubeadm-config ConfigMap and configure the cluster's subnet information correctly:

```shell
export POD_SUBNET=<YOUR_POD_SUBNET>
export SERVICE_SUBNET=<YOUR_SERVICE_SUBNET>
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    networking:
      podSubnet: ${POD_SUBNET}
      serviceSubnet: ${SERVICE_SUBNET}
EOF
```

Once it is created, Spiderpool will automatically synchronize its status.
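
You can confirm the synchronization with a quick check. This is a minimal sketch, assuming the SpiderCoordinator resource is named `default`:

```shell
# The phase should report Synced once the CIDRs have been read back
kubectl get spidercoordinator default -o jsonpath='{.status.phase}{"\n"}'

# The discovered subnets are reflected in the status
kubectl get spidercoordinator default -o jsonpath='{.status.overlayPodCIDR}{"\n"}{.status.serviceCIDR}{"\n"}'
```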

### Create SpiderIPPool

27 changes: 24 additions & 3 deletions docs/usage/install/overlay/get-started-calico.md
@@ -84,9 +84,30 @@ status:
- 10.233.0.0/18
```

> 1. If the phase is not `Synced`, Pod creation will be blocked.
>
> 2. If the overlayPodCIDR does not meet expectations, it may cause Pod communication issues.
>
> At present, Spiderpool first queries the `kube-system/kubeadm-config` ConfigMap to obtain the cluster's Pod and Service subnets. If kubeadm-config does not exist and the cluster subnets cannot be obtained from it, Spiderpool falls back to retrieving them from the kube-controller-manager Pod. If the kube-controller-manager component in your cluster runs via `systemd` instead of as a static Pod, Spiderpool still cannot retrieve the cluster's subnet information.

If both of these methods fail, Spiderpool sets status.phase to NotReady, which prevents Pod creation. The problem can be resolved as follows:

- Manually create the kubeadm-config ConfigMap and correctly configure the cluster's subnet information:

```shell
export POD_SUBNET=<YOUR_POD_SUBNET>
export SERVICE_SUBNET=<YOUR_SERVICE_SUBNET>
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    networking:
      podSubnet: ${POD_SUBNET}
      serviceSubnet: ${SERVICE_SUBNET}
EOF
```

Once created, Spiderpool will automatically synchronize its status.
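
You can confirm the synchronization with a quick check. This is a minimal sketch, assuming the SpiderCoordinator resource is named `default`:

```shell
# The phase should report Synced once the CIDRs have been read back
kubectl get spidercoordinator default -o jsonpath='{.status.phase}{"\n"}'

# The discovered subnets are reflected in the status
kubectl get spidercoordinator default -o jsonpath='{.status.overlayPodCIDR}{"\n"}{.status.serviceCIDR}{"\n"}'
```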

### Create SpiderIPPool

27 changes: 24 additions & 3 deletions docs/usage/install/overlay/get-started-cilium-zh_cn.md
@@ -85,9 +85,30 @@ status:
- 10.233.0.0/18
```

> 1. If the phase is not `Synced`, Pod creation will be blocked.
>
> 2. If the overlayPodCIDR is incorrect, it may cause communication problems.
>
> Currently, Spiderpool first queries the `kube-system/kubeadm-config` ConfigMap to obtain the cluster's Pod and Service subnets. If kubeadm-config does not exist and the cluster subnets cannot be obtained from it, Spiderpool falls back to reading the cluster's Pod and Service subnets from the kube-controller-manager Pod. If the kube-controller-manager component of your cluster runs via `systemd` rather than as a static Pod, Spiderpool still cannot obtain the cluster's subnet information.

If both of the above methods fail, Spiderpool sets status.phase to NotReady, which blocks Pod creation. The problem can be resolved as follows:

- Manually create the kubeadm-config ConfigMap and configure the cluster's subnet information correctly:

```shell
export POD_SUBNET=<YOUR_POD_SUBNET>
export SERVICE_SUBNET=<YOUR_SERVICE_SUBNET>
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    networking:
      podSubnet: ${POD_SUBNET}
      serviceSubnet: ${SERVICE_SUBNET}
EOF
```

Once it is created, Spiderpool will automatically synchronize its status.
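
You can confirm the synchronization with a quick check. This is a minimal sketch, assuming the SpiderCoordinator resource is named `default`:

```shell
# The phase should report Synced once the CIDRs have been read back
kubectl get spidercoordinator default -o jsonpath='{.status.phase}{"\n"}'

# The discovered subnets are reflected in the status
kubectl get spidercoordinator default -o jsonpath='{.status.overlayPodCIDR}{"\n"}{.status.serviceCIDR}{"\n"}'
```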

### Create SpiderIPPool

27 changes: 24 additions & 3 deletions docs/usage/install/overlay/get-started-cilium.md
@@ -85,9 +85,30 @@ status:
- 10.233.0.0/18
```

> 1. If the phase is not `Synced`, Pod creation will be blocked.
>
> 2. If the overlayPodCIDR does not meet expectations, it may cause Pod communication issues.
>
> At present, Spiderpool first queries the `kube-system/kubeadm-config` ConfigMap to obtain the cluster's Pod and Service subnets. If kubeadm-config does not exist and the cluster subnets cannot be obtained from it, Spiderpool falls back to retrieving them from the kube-controller-manager Pod. If the kube-controller-manager component in your cluster runs via `systemd` instead of as a static Pod, Spiderpool still cannot retrieve the cluster's subnet information.

If both of these methods fail, Spiderpool sets status.phase to NotReady, which prevents Pod creation. The problem can be resolved as follows:

- Manually create the kubeadm-config ConfigMap and correctly configure the cluster's subnet information:

```shell
export POD_SUBNET=<YOUR_POD_SUBNET>
export SERVICE_SUBNET=<YOUR_SERVICE_SUBNET>
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    networking:
      podSubnet: ${POD_SUBNET}
      serviceSubnet: ${SERVICE_SUBNET}
EOF
```

Once created, Spiderpool will automatically synchronize its status.
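
You can confirm the synchronization with a quick check. This is a minimal sketch, assuming the SpiderCoordinator resource is named `default`:

```shell
# The phase should report Synced once the CIDRs have been read back
kubectl get spidercoordinator default -o jsonpath='{.status.phase}{"\n"}'

# The discovered subnets are reflected in the status
kubectl get spidercoordinator default -o jsonpath='{.status.overlayPodCIDR}{"\n"}{.status.serviceCIDR}{"\n"}'
```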

### Create SpiderIPPool

91 changes: 61 additions & 30 deletions pkg/coordinatormanager/coordinator_informer.go
@@ -334,36 +334,8 @@ func (cc *CoordinatorController) syncHandler(ctx context.Context, coordinatorNam
}

func (cc *CoordinatorController) fetchPodAndServerCIDR(ctx context.Context, logger *zap.Logger, coordCopy *spiderpoolv2beta1.SpiderCoordinator) (*spiderpoolv2beta1.SpiderCoordinator, error) {
var err error
var cmPodList corev1.PodList
if err := cc.APIReader.List(ctx, &cmPodList, client.MatchingLabels{"component": "kube-controller-manager"}); err != nil {
event.EventRecorder.Eventf(
coordCopy,
corev1.EventTypeWarning,
"ClusterNotReady",
err.Error(),
)

setStatus2NoReady(logger, coordCopy)
return coordCopy, err
}
if len(cmPodList.Items) == 0 {
msg := `Failed to get kube-controller-manager Pod with label "component: kube-controller-manager"`
event.EventRecorder.Eventf(
coordCopy,
corev1.EventTypeWarning,
"ClusterNotReady",
msg,
)

setStatus2NoReady(logger, coordCopy)
return coordCopy, err
}

k8sPodCIDR, k8sServiceCIDR := extractK8sCIDR(&cmPodList.Items[0])
if *coordCopy.Spec.PodCIDRType == auto {
var podCidrType string
podCidrType, err = fetchType(cc.DefaultCniConfDir)
podCidrType, err := fetchType(cc.DefaultCniConfDir)
if err != nil {
if apierrors.IsNotFound(err) {
event.EventRecorder.Eventf(
@@ -381,6 +353,30 @@
coordCopy.Spec.PodCIDRType = &podCidrType
}

var err error
var cm corev1.ConfigMap
var k8sPodCIDR, k8sServiceCIDR []string
// Prefer the kubeadm-config ConfigMap; fall back to the kube-controller-manager Pod.
if err := cc.APIReader.Get(ctx, types.NamespacedName{Namespace: metav1.NamespaceSystem, Name: "kubeadm-config"}, &cm); err == nil {
logger.Sugar().Info("Trying to fetch the ClusterCIDR from kube-system/kubeadm-config")
k8sPodCIDR, k8sServiceCIDR = ExtractK8sCIDRFromKubeadmConfigMap(&cm)
} else {
logger.Sugar().Warn("kube-system/kubeadm-config was not found, trying to fetch the ClusterCIDR from the kube-controller-manager Pod")
var cmPodList corev1.PodList
err = cc.APIReader.List(ctx, &cmPodList, client.MatchingLabels{"component": "kube-controller-manager"})
// Guard against both a failed List and an empty result, so Items[0] below cannot panic.
if err != nil || len(cmPodList.Items) == 0 {
logger.Sugar().Errorf("failed to get kube-controller-manager Pod with label \"component: kube-controller-manager\": %v", err)
event.EventRecorder.Eventf(
coordCopy,
corev1.EventTypeWarning,
"ClusterNotReady",
"Neither kubeadm-config ConfigMap nor kube-controller-manager Pod can be found",
)
setStatus2NoReady(logger, coordCopy)
return coordCopy, err
}
k8sPodCIDR, k8sServiceCIDR = ExtractK8sCIDRFromKCMPod(&cmPodList.Items[0])
}

switch *coordCopy.Spec.PodCIDRType {
case cluster:
if cc.caliCtrlCanncel != nil {
@@ -538,7 +534,42 @@ func (cc *CoordinatorController) fetchCiliumCIDR(ctx context.Context, logger *za
return nil
}

func extractK8sCIDR(kcm *corev1.Pod) ([]string, []string) {
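// ExtractK8sCIDRFromKubeadmConfigMap parses the pod and service subnets from the
// kubeadm-config ConfigMap data, keeping only values that are valid CIDRs.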
func ExtractK8sCIDRFromKubeadmConfigMap(cm *corev1.ConfigMap) ([]string, []string) {
var podCIDR, serviceCIDR []string

podReg := regexp.MustCompile(`podSubnet: (.*)`)
serviceReg := regexp.MustCompile(`serviceSubnet: (.*)`)

var podSubnets, serviceSubnets []string
for _, data := range cm.Data {
// Only record a match when one is found, so a data key without a match
// does not overwrite a subnet already extracted from an earlier key.
if m := podReg.FindStringSubmatch(data); m != nil {
podSubnets = m
}
if m := serviceReg.FindStringSubmatch(data); m != nil {
serviceSubnets = m
}
}

if len(podSubnets) != 0 {
for _, cidr := range strings.Split(podSubnets[1], ",") {
_, _, err := net.ParseCIDR(cidr)
if err != nil {
continue
}
podCIDR = append(podCIDR, cidr)
}
}

if len(serviceSubnets) != 0 {
for _, cidr := range strings.Split(serviceSubnets[1], ",") {
_, _, err := net.ParseCIDR(cidr)
if err != nil {
continue
}
serviceCIDR = append(serviceCIDR, cidr)
}
}

return podCIDR, serviceCIDR
}

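// ExtractK8sCIDRFromKCMPod parses the pod and service subnets from the
// command-line flags (such as --cluster-cidr) of the kube-controller-manager Pod.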
func ExtractK8sCIDRFromKCMPod(kcm *corev1.Pod) ([]string, []string) {
var podCIDR, serviceCIDR []string

podReg := regexp.MustCompile(`--cluster-cidr=(.*)`)
1 change: 1 addition & 0 deletions test/doc/spidercoodinator.md
@@ -10,3 +10,4 @@
| V00006 | status.phase is not-ready, expect the cidr of status to be empty | p3 | | done | |
| V00007 | spidercoordinator has the lowest priority | p3 | | done | |
| V00008 | status.phase is not-ready, pods will fail to run | p3 | | done | |
| V00009 | it can get the clusterCIDR from kubeadm-config or the kube-controller-manager pod | p3 | | done | |