DOCS-2075: Operator IP pool management docs #1390

Merged · 8 commits · Apr 5, 2024
calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm.mdx
@@ -46,6 +46,18 @@

If your cluster has Windows nodes and uses custom TLS certificates for log storage, then prior to upgrade, prepare and apply new certificates for [log storage](../../../../operations/comms/log-storage-tls.mdx) that include the required service DNS names.

### Upgrade OwnerReferences

If you do not use OwnerReferences on resources in the projectcalico.org/v3 API group, you can skip this section.

Starting in {{prodname}} v3.19, a change in the way UIDs are generated for projectcalico.org/v3 resources requires that you update any OwnerReferences
that refer to projectcalico.org/v3 resources as an owner. After upgrade, the UID for all projectcalico.org/v3 resources will change, and Kubernetes will
garbage collect any owned resources whose OwnerReferences still refer to the old UIDs.

1. Remove any OwnerReferences from resources in your cluster that have `apiGroup: projectcalico.org/v3`.
1. Perform the upgrade normally.
1. Add new OwnerReferences to your resources referencing the new UID.
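
For illustration only — the ConfigMap, pool name, and UID below are placeholders, not values from these docs — an OwnerReference of this kind might look like the following:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  ownerReferences:
    # Remove this entry before the upgrade; re-add it afterwards using the new UID,
    # which you can read with: kubectl get ippool my-pool -o jsonpath='{.metadata.uid}'
    - apiVersion: projectcalico.org/v3
      kind: IPPool
      name: my-pool
      uid: 0e8a6761-0000-0000-0000-000000000000
```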

### Default Deny

{{prodname}} creates a default-deny policy for the calico-system namespace. If you deploy workloads into the calico-system namespace, you must create a policy that allows the required traffic for your workloads prior to upgrade.
calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator.mdx
@@ -46,6 +46,18 @@
Retaining data is recommended only for users with a valid Elastic license. Trial licenses can be invalidated during
the upgrade.

### Upgrade OwnerReferences

If you do not use OwnerReferences on resources in the projectcalico.org/v3 API group, you can skip this section.

Starting in {{prodname}} v3.19, a change in the way UIDs are generated for projectcalico.org/v3 resources requires that you update any OwnerReferences
that refer to projectcalico.org/v3 resources as an owner. After upgrade, the UID for all projectcalico.org/v3 resources will change, and Kubernetes will
garbage collect any owned resources whose OwnerReferences still refer to the old UIDs.

1. Remove any OwnerReferences from resources in your cluster that have `apiGroup: projectcalico.org/v3`.
1. Perform the upgrade normally.
1. Add new OwnerReferences to your resources referencing the new UID.

### Default Deny

{{prodname}} creates a default-deny policy for the calico-system namespace. If you deploy workloads into the calico-system namespace, you must create a policy that allows the required traffic for your workloads prior to upgrade.
calico-enterprise/getting-started/upgrading/upgrading-enterprise/openshift-upgrade.mdx
@@ -45,6 +45,18 @@
Data retention is recommended only for users that have a valid Elasticsearch license. (Trial licenses can be invalidated
during upgrade).

### Upgrade OwnerReferences

If you do not use OwnerReferences on resources in the projectcalico.org/v3 API group, you can skip this section.

Starting in {{prodname}} v3.19, a change in the way UIDs are generated for projectcalico.org/v3 resources requires that you update any OwnerReferences
that refer to projectcalico.org/v3 resources as an owner. After upgrade, the UID for all projectcalico.org/v3 resources will change, and Kubernetes will
garbage collect any owned resources whose OwnerReferences still refer to the old UIDs.

1. Remove any OwnerReferences from resources in your cluster that have `apiGroup: projectcalico.org/v3`.
1. Perform the upgrade normally.
1. Add new OwnerReferences to your resources referencing the new UID.

### Default Deny

{{prodname}} creates a default-deny policy for the calico-system namespace. If you deploy workloads into the calico-system namespace, you must create a policy that allows the required traffic for your workloads prior to upgrade.
4 changes: 2 additions & 2 deletions calico-enterprise/networking/ipam/initial-ippool.mdx
@@ -42,8 +42,8 @@ resource and configures the default Calico IP pool. Note the following:
## How to

1. Download the custom-resource.yaml file.
1. Edit the [Installation resource](../../reference/installation/api.mdx#operator.tigera.io/v1.Installation).
**Required values**: `cidr:`
1. Edit the [Installation resource](../../reference/installation/api.mdx#operator.tigera.io/v1.Installation).
**Required values**: `cidr:`
**Empty values**: Defaulted

```bash
115 changes: 115 additions & 0 deletions calico-enterprise/networking/ipam/ippools.mdx
@@ -0,0 +1,115 @@
---
description: Create multiple IP pools
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Create multiple IP pools

## Understanding multiple IP pools

By default, when you install {{prodname}}, a single IPv4 pool is created. This IP pool is used for allocating IP addresses to pods and, if needed,
to tunnels within your cluster.

Sometimes you may want to configure additional IP pools. For example:

- The IP address space available for pods in your cluster is not contiguous.
- You want to [assign IP addresses based on cluster topology](assign-ip-addresses-topology.mdx).

## Create multiple IP pools when installing Calico

<Tabs>
<TabItem label="Operator" value="Operator-0">

You can edit the Installation resource within `custom-resources.yaml` to include multiple unique IP pools. The following
example creates two IP pools assigned to different sets of nodes.

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  ipPools:
    - name: pool-zone-1
      cidr: 192.168.0.0/24
      encapsulation: VXLAN
      nodeSelector: "zone == 'zone-1'"
    - name: pool-zone-2
      cidr: 192.168.1.0/24
      encapsulation: VXLAN
      nodeSelector: "zone == 'zone-2'"
```

After installing {{prodname}}, you can confirm the IP pools were created by using the following command:

```bash
kubectl get ippools
```
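
For the two pools defined above, the output should look something like this (the timestamps are illustrative):

```
NAME          CREATED AT
pool-zone-1   2024-03-28T16:14:28Z
pool-zone-2   2024-03-28T16:14:28Z
```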

## Prevent the operator from managing IP pools

In some cases, you may want to disable IP pool management within the operator and instead use calicoctl or kubectl to
create and delete IP pools. To do this, you can edit the **Installation** resource within `custom-resources.yaml` to specify
an empty list of IP pools.

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  ipPools: []
```

With this configuration, the operator will wait for you to create IP pools before installing {{prodname}} components.
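
When you are ready, you can create the pools yourself. For example, here is a sketch using kubectl, modeled on the calicoctl example in the Manifest tab; the pool name, CIDR, and node selector are assumptions for illustration:

```bash
kubectl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool-zone-1
spec:
  cidr: 192.168.0.0/24
  vxlanMode: Always
  natOutgoing: true
  nodeSelector: zone == "zone-1"
EOF
```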

</TabItem>
<TabItem label="Manifest" value="Manifest-1">

When using manifests to install {{prodname}}, you can use calicoctl to manage multiple IP pools. For complete control, you can disable
creation of the default IP pool before doing so.

1. Disable the default IP pool by adding the following environment variable to the calico-node DaemonSet in `calico.yaml`.

```yaml
env:
  - name: NO_DEFAULT_POOLS
    value: "true"
```

1. Then, install `calico.yaml`.

1. Create the desired IP pools. For example, the following commands create two IP pools assigned to different sets of nodes.

```bash
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool-zone-1
spec:
  cidr: 192.168.0.0/24
  vxlanMode: Always
  natOutgoing: true
  nodeSelector: zone == "zone-1"
EOF
```

```bash
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool-zone-2
spec:
  cidr: 192.168.1.0/24
  vxlanMode: Always
  natOutgoing: true
  nodeSelector: zone == "zone-2"
EOF
```
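
You can then confirm that both pools were created:

```bash
calicoctl get ippool -o wide
```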

</TabItem>
</Tabs>
121 changes: 116 additions & 5 deletions calico-enterprise/networking/ipam/migrate-pools.mdx
@@ -5,6 +5,8 @@
# Migrate from one IP pool to another

import DetermineIpam from '@site/calico-enterprise/_includes/content/_determine-ipam.mdx';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Big picture

@@ -28,7 +30,7 @@

**Verify orchestrator support for changing the pod network CIDR**.

Although Kubernetes supports changing the pod network CIDR, not all orchestrators do. For example, OpenShift does not support this feature as described in
Although Kubernetes supports changing the pod network CIDR, not all orchestrators do. For example, OpenShift does not support this feature.

## How to

@@ -70,7 +72,114 @@
In the following example, we created a Kubernetes cluster using **kubeadm**. But the IP pool CIDR we configured (192.168.0.0/16) doesn't match the
Kubernetes cluster CIDR. Let's change the CIDR to **10.0.0.0/16**, which for the purposes of this example falls within the cluster CIDR.

Let’s run `calicoctl get ippool -o wide` to see the IP pool, **default-ipv4-ippool**.
<Tabs>
<TabItem label="Operator" value="Operator-0">

Let’s run `kubectl get ippools` to see the IP pool, `default-ipv4-ippool`.

```
NAME                  CREATED AT
default-ipv4-ippool   2024-03-28T16:14:28Z
```

### Step 1: Add a new IP pool

We add a new `IPPool` with the CIDR range, **10.0.0.0/16**.

Add the following to your `default` installation, below the existing IP pool.

```bash
kubectl edit installation default
```

```yaml
- name: new-ipv4-pool
  cidr: 10.0.0.0/16
  encapsulation: IPIP
```
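
In context, the `ipPools` list in the Installation would then look roughly like this (the existing pool's fields are abbreviated here; step 2 shows the full entry):

```yaml
spec:
  ipPools:
    - name: 192.168.0.0-16 # the existing default pool
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      nodeSelector: all()
    - name: new-ipv4-pool # the pool we just added
      cidr: 10.0.0.0/16
      encapsulation: IPIP
```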

Let’s verify the new IP pool.

```bash
kubectl get ippools
```

```
NAME                  CREATED AT
default-ipv4-ippool   2024-03-28T16:14:28Z
new-ipv4-pool         2024-03-28T18:30:15Z
```

### Step 2: Disable the old IP pool

Edit the `default` installation, and modify the `default-ipv4-ippool` so it no longer selects
any nodes. This prevents IP allocation from the pool.

```bash
kubectl edit installation default
```

```yaml
- name: 192.168.0.0-16
  allowedUses:
    - Workload
    - Tunnel
  blockSize: 26
  cidr: 192.168.0.0/16
  disableBGPExport: false
  encapsulation: VXLANCrossSubnet
  natOutgoing: Enabled
-  nodeSelector: all()
+  nodeSelector: "!all()"
```

Apply the changes.

Remember, disabling a pool only affects new IP allocations; networking for existing pods is not affected.

### Step 3: Delete pods from the old IP pool

Next, we delete all of the existing pods from the old IP pool. (In our example, **coredns** is our only pod; for multiple pods you would trigger a deletion for all pods in the cluster.)

```bash
kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
```
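
If your cluster runs many pods, one way to find those still holding an address from the old 192.168.0.0/16 range is a quick filter like the following (an approach we assume here, not one prescribed by these docs):

```bash
kubectl get pods --all-namespaces -o wide | grep '192\.168\.'
```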

### Step 4: Verify that new pods get an address from the new IP pool

1. Create a test namespace.

```bash
kubectl create ns ippool-test
```

1. Create an nginx pod.

```bash
kubectl -n ippool-test create deployment nginx --image nginx
```

1. Verify that the new pod gets an IP address from the new range.

```bash
kubectl -n ippool-test get pods -l app=nginx -o wide
```

1. Clean up the `ippool-test` namespace.

```bash
kubectl delete ns ippool-test
```

### Step 5: Delete the old IP pool

Now that you've verified that pods are getting IPs from the new range, you can safely delete the old pool. To do this,
remove it from the `default` installation, leaving only the newly created IP pool.
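
For example, after the edit the `ipPools` list would contain only the pool added in step 1:

```yaml
spec:
  ipPools:
    - name: new-ipv4-pool
      cidr: 10.0.0.0/16
      encapsulation: IPIP
```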

</TabItem>
<TabItem label="Manifest" value="Manifest-1">

Let’s run `calicoctl get ippool -o wide` to see the IP pool, `default-ipv4-ippool`.

```
NAME CIDR NAT IPIPMODE VXLANMODE DISABLED
@@ -88,7 +197,7 @@

### Step 1: Add a new IP pool

We add a new **IPPool** with the CIDR range, **10.0.0.0/16**.
We add a new `IPPool` with the CIDR range, **10.0.0.0/16**.

```yaml
apiVersion: projectcalico.org/v3
@@ -105,7 +214,6 @@

```bash
calicoctl get ippool -o wide

```

```
@@ -208,7 +316,7 @@
kubectl -n ippool-test get pods -l app=nginx -o wide
```

1. Clean up the ippool-test namespace.
1. Clean up the `ippool-test` namespace.

```bash
kubectl delete ns ippool-test
@@ -222,6 +330,9 @@
kubectl delete ippool default-ipv4-ippool
```

</TabItem>
</Tabs>

## Additional resources

- [IP pools reference](../../reference/resources/ippool.mdx)
@@ -54,11 +54,6 @@ spec:
- cidr: 198.51.100.0/24
```

:::note

the ipPools array can take at most one IPv4 and one IPv6 CIDR, and only takes effect when installing {{prodname}} for the first
time on a given cluster. To add additional pools, see [the IPPool API](../../../reference/resources/ippool.mdx).

:::

### Use VXLAN