Commit
Apply suggestions from code review
Co-authored-by: Christopher Tauchen <tauchen@gmail.com>
caseydavenport and ctauchen committed Apr 3, 2024
1 parent 250f4b4 commit 8f92e3e
Showing 2 changed files with 14 additions and 16 deletions.
12 changes: 5 additions & 7 deletions calico/networking/ipam/ippools.mdx
@@ -54,7 +54,7 @@
In some cases, you may want to disable IP pool management within the operator and
create and delete IP pools. To do this, you can edit the **Installation** resource with `custom-resources.yaml` to specify
an empty list of IP pools.

-```
+```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
@@ -73,12 +73,10 @@
creation of the default IP pool before doing so.
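The Installation snippet above is cut off by the diff view; a complete manifest with an empty pool list might look like the following sketch (the `calicoNetwork.ipPools` field path is an assumption based on the operator API shown above, not verbatim from the docs):

```yaml
# Sketch only: an Installation that tells the operator to manage no
# IP pools, so pools can be created and deleted manually.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools: []
```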

1. Disable the default IP pool by adding the following environment variable to the calico-node DaemonSet in `calico.yaml`.

-```
-env:
-- name: NO_DEFAULT_POOLS
-  value: "true"
-```
-
+```yaml
+env:
+- name: NO_DEFAULT_POOLS
+  value: "true"
1. Then, install Calico by applying `calico.yaml`.

1. Create the desired IP pools. For example, the following commands create two IP pools assigned to different sets of nodes.
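The commands themselves are collapsed in this diff. A hypothetical pair of pools scoped to different node sets via `nodeSelector` could look like this (these are not the collapsed commands from the doc; the pool names, CIDRs, and the `rack` label are illustrative only):

```yaml
# Hypothetical example: two IPPool resources whose nodeSelector
# fields target different node sets by a "rack" label.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: rack-0-pool
spec:
  cidr: 192.168.0.0/24
  nodeSelector: rack == "0"
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: rack-1-pool
spec:
  cidr: 192.168.1.0/24
  nodeSelector: rack == "1"
```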
18 changes: 9 additions & 9 deletions calico/networking/ipam/migrate-pools.mdx
@@ -75,7 +75,7 @@
Kubernetes cluster CIDR. Let's change the CIDR to **10.0.0.0/16**, which for the
<Tabs>
<TabItem label="Operator" value="Operator-0">

-Let’s run `kubectl get ippools` to see the IP pool, **default-ipv4-ippool**.
+Let’s run `kubectl get ippools` to see the IP pool `default-ipv4-ippool`.

```
NAME CREATED AT
@@ -84,15 +84,15 @@
default-ipv4-ippool 2024-03-28T16:14:28Z
```

### Step 1: Add a new IP pool

-We add a new **IPPool** with the CIDR range, **10.0.0.0/16**.
+We add a new `IPPool` resource with the CIDR range `10.0.0.0/16`.

-Add the following to your `default` Installation, below the existing IP pool.
+Add the following to your `default` installation, below the existing IP pool.

-```
+```bash
kubectl edit installation default
```

-```
+```yaml
- name: new-ipv4-pool
cidr: 10.0.0.0/16
encapsulation: IPIP
@@ -115,7 +115,7 @@
test-pool 2024-03-28T18:30:15Z
Edit the `default` installation, and modify `default-ipv4-ippool` so it no longer selects
any nodes. This prevents IP allocation from the pool.

-```
+```bash
kubectl edit installation default
```

@@ -139,7 +139,7 @@
Remember, disabling a pool only affects new IP allocations; networking for existing pods is not affected.
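A sketch of what the edited pool entry might look like (the CIDR here is illustrative; `"!all()"` is the standard Calico selector that matches no nodes):

```yaml
# Sketch: the old pool entry amended so its nodeSelector matches no
# nodes, stopping new allocations while leaving existing IPs in place.
- name: default-ipv4-ippool
  cidr: 192.168.0.0/16
  encapsulation: IPIP
  nodeSelector: "!all()"
```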

### Step 3: Delete pods from the old IP pool

-Next, we delete all of the existing pods from the old IP pool. (In our example, **coredns** is our only pod; for multiple pods you would trigger a deletion for all pods in the cluster.)
+Next, we delete all of the existing pods from the old IP pool. (In our example, `coredns` is our only pod; for multiple pods you would trigger a deletion for all pods in the cluster.)

```bash
kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
```

@@ -165,7 +165,7 @@
kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
kubectl -n ippool-test get pods -l app=nginx -o wide
```

-1. Clean up the ippool-test namespace.
+1. Clean up the `ippool-test` namespace.

```bash
kubectl delete ns ippool-test
```

@@ -174,7 +174,7 @@
kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
### Step 5: Delete the old IP pool

Now that you've verified that pods are getting IPs from the new range, you can safely delete the old pool. To do this,
-remove it from the default `Installation`, leaving only the newly create IP pool.
+remove it from the default installation, leaving only the newly created IP pool.
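After the removal, the `ipPools` list in the Installation would contain only the new pool, along these lines (a sketch; the surrounding Installation fields are unchanged):

```yaml
# Sketch of the resulting pool list: only the 10.0.0.0/16 pool remains.
ipPools:
- name: new-ipv4-pool
  cidr: 10.0.0.0/16
  encapsulation: IPIP
```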

</TabItem>
<TabItem label="Manifest" value="Manifest-1">
