More feedback applied
caseydavenport committed Apr 4, 2024
1 parent db85678 commit bb4e64e
Showing 3 changed files with 54 additions and 54 deletions.
76 changes: 38 additions & 38 deletions calico-enterprise/networking/ipam/ippools.mdx
@@ -22,10 +22,10 @@ Sometimes you may want to configure additional IP pools. For example:
<Tabs>
<TabItem label="Operator" value="Operator-0">

-You can edit the **Installation** resource within `custom-resources.yaml` to include multiple unique IP pools. The following
+You can edit the Installation resource within `custom-resources.yaml` to include multiple unique IP pools. The following
example creates two IP pools assigned to different sets of nodes.

-```
+```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
@@ -42,19 +42,19 @@ spec:
nodeSelector: "zone == 'zone-2'"
```
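
The diff view collapses the middle of this example. For reference, a complete resource might look like the following sketch; the pool names, CIDRs, and encapsulation settings are assumptions, since only the zone-2 nodeSelector is visible above:

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      # Pool for nodes labeled zone == 'zone-1' (name and CIDR assumed)
      - name: zone-1-ippool
        cidr: 192.168.0.0/24
        encapsulation: VXLAN
        natOutgoing: Enabled
        nodeSelector: "zone == 'zone-1'"
      # Pool for nodes labeled zone == 'zone-2' (name and CIDR assumed)
      - name: zone-2-ippool
        cidr: 192.168.1.0/24
        encapsulation: VXLAN
        natOutgoing: Enabled
        nodeSelector: "zone == 'zone-2'"
```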

-After installing {{prodname}}, you can confirm the IP pools were created using the following command:
+After installing {{prodname}}, you can confirm the IP pools were created by using the following command:

-```
+```bash
kubectl get ippools
```
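
On a cluster with the two pools sketched above, the output would look something like this; pool names and timestamps are illustrative:

```
NAME            CREATED AT
zone-1-ippool   2024-04-04T00:00:00Z
zone-2-ippool   2024-04-04T00:00:00Z
```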

## Prevent the operator from managing IP pools

-In some cases, you may want to disable IP pool management within the operator and instead use **calicoctl** or **kubectl** to
+In some cases, you may want to disable IP pool management within the operator and instead use calicoctl or kubectl to
create and delete IP pools. To do this, you can edit the **Installation** resource within `custom-resources.yaml` to specify
an empty list of IP pools.

-```
+```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
@@ -68,48 +68,48 @@ With this configuration, the operator will wait for you to create IP pools before …
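
The spec is collapsed in the diff view above; a minimal sketch of an Installation that pins the pool list to empty:

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    # An empty list tells the operator not to create or manage any IP pools.
    ipPools: []
```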
</TabItem>
<TabItem label="Manifest" value="Manifest-1">

-When using manifests to install {{prodname}}, you can use **calicoctl** to manage multiple IP pools. For complete control, you can disable
+When using manifests to install {{prodname}}, you can use calicoctl to manage multiple IP pools. For complete control, you can disable
creation of the default IP pool before doing so.

1. Disable the default IP pool by adding the following environment variable to the calico-node DaemonSet in `calico.yaml`.

-```
+```yaml
env:
- name: NO_DEFAULT_POOLS
  value: "true"
```

1. Then, install `calico.yaml`.

1. Create the desired IP pools. For example, the following commands create two IP pools assigned to different sets of nodes. A verification sketch follows these steps.

```bash
calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: pool-zone-1
spec:
cidr: 192.168.0.0/24
vxlanMode: Always
natOutgoing: true
nodeSelector: zone == "zone-1"
EOF
```
```bash
calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: pool-zone-2
spec:
cidr: 192.168.1.0/24
vxlanMode: Always
natOutgoing: true
nodeSelector: zone == "zone-2"
EOF
```
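
Once both pools exist, a quick check confirms they were created with the intended CIDRs and selectors. A sketch of the command and illustrative output:

```bash
calicoctl get ippool -o wide
```

```
NAME          CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED
pool-zone-1   192.168.0.0/24   true   Never      Always      false
pool-zone-2   192.168.1.0/24   true   Never      Always      false
```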
</TabItem>
</Tabs>
26 changes: 13 additions & 13 deletions calico-enterprise/networking/ipam/migrate-pools.mdx
@@ -75,7 +75,7 @@ Kubernetes cluster CIDR. Let's change the CIDR to **10.0.0.0/16**, which for the …
<Tabs>
<TabItem label="Operator" value="Operator-0">

-Let’s run `kubectl get ippools` to see the IP pool, **default-ipv4-ippool**.
+Let’s run `kubectl get ippools` to see the IP pool, `default-ipv4-ippool`.

```
NAME CREATED AT
@@ -84,15 +84,15 @@ default-ipv4-ippool 2024-03-28T16:14:28Z

### Step 1: Add a new IP pool

-We add a new **IPPool** with the CIDR range, **10.0.0.0/16**.
+We add a new `IPPool` with the CIDR range, **10.0.0.0/16**.

-Add the following to your `default` Installation, below the existing IP pool.
+Add the following to your `default` installation, below the existing IP pool.

-```
+```bash
kubectl edit installation default
```

-```
+```yaml
- name: new-ipv4-pool
cidr: 10.0.0.0/16
encapsulation: IPIP
@@ -112,14 +112,14 @@ test-pool 2024-03-28T18:30:15Z
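# (The rest of this hunk is collapsed in the diff view.) After the edit, the
# Installation's ipPools list would look something like this sketch; the
# pre-existing pool keeps its original fields, and only the new entry is
# appended:
#
#   ipPools:
#     - name: default-ipv4-ippool   # the existing pool; name may differ
#       cidr: 192.168.0.0/16
#       ...                         # other fields unchanged
#     - name: new-ipv4-pool
#       cidr: 10.0.0.0/16
#       encapsulation: IPIP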

### Step 2: Disable the old IP pool

-Edit the `default` Installation, and modify the **default-ipv4-ippool** such that it no longer selects
+Edit the `default` installation, and modify the `default-ipv4-ippool` so it no longer selects
any nodes. This prevents IP allocation from the pool.

-```
+```bash
kubectl edit installation default
```

-```
+```yaml
- name: 192.168.0.0-16
allowedUses:
- Workload
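# (The rest of this block is collapsed in the diff view.) The key change,
# as a sketch assuming the approach the surrounding text describes, is to
# give the old pool a node selector that matches no nodes, so no new
# addresses are allocated from it:
#
#   nodeSelector: "!all()"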
@@ -165,7 +165,7 @@ kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
kubectl -n ippool-test get pods -l app=nginx -o wide
```

-1. Clean up the ippool-test namespace.
+1. Clean up the `ippool-test` namespace.

```bash
kubectl delete ns ippool-test
@@ -174,12 +174,12 @@ kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
### Step 5: Delete the old IP pool

Now that you've verified that pods are getting IPs from the new range, you can safely delete the old pool. To do this,
-remove it from the default `Installation`, leaving only the newly created IP pool.
+remove it from the default installation, leaving only the newly created IP pool.
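
After this cleanup, the pool list in the Installation contains only the new pool. A sketch of the end state, assuming the fields shown in Step 1:

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - name: new-ipv4-pool
        cidr: 10.0.0.0/16
        encapsulation: IPIP
```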

</TabItem>
<TabItem label="Manifest" value="Manifest-1">

-Let’s run `calicoctl get ippool -o wide` to see the IP pool, **default-ipv4-ippool**.
+Let’s run `calicoctl get ippool -o wide` to see the IP pool, `default-ipv4-ippool`.

```
NAME CIDR NAT IPIPMODE VXLANMODE DISABLED
@@ -197,7 +197,7 @@ Let’s get started changing this pod to the new IP pool (10.0.0.0/16).

### Step 1: Add a new IP pool

-We add a new **IPPool** with the CIDR range, **10.0.0.0/16**.
+We add a new `IPPool` with the CIDR range, **10.0.0.0/16**.

```yaml
apiVersion: projectcalico.org/v3
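# (The rest of this manifest is collapsed in the diff view.) A sketch of the
# complete IPPool; the name and encapsulation mode are assumptions:
#
#   kind: IPPool
#   metadata:
#     name: new-pool
#   spec:
#     cidr: 10.0.0.0/16
#     ipipMode: Always
#     natOutgoing: true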
@@ -316,7 +316,7 @@ kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
kubectl -n ippool-test get pods -l app=nginx -o wide
```

-1. Clean up the ippool-test namespace.
+1. Clean up the `ippool-test` namespace.

```bash
kubectl delete ns ippool-test
6 changes: 3 additions & 3 deletions calico/networking/ipam/migrate-pools.mdx
@@ -92,7 +92,7 @@ Add the following to your `default` installation, below the existing IP pool.
kubectl edit installation default
```

-```bash
+```yaml
- name: new-ipv4-pool
cidr: 10.0.0.0/16
encapsulation: IPIP
@@ -179,7 +179,7 @@ remove it from the default installation, leaving only the newly created IP pool.
</TabItem>
<TabItem label="Manifest" value="Manifest-1">

-Let’s run `calicoctl get ippool -o wide` to see the IP pool, **default-ipv4-ippool**.
+Let’s run `calicoctl get ippool -o wide` to see the IP pool, `default-ipv4-ippool`.

```
NAME CIDR NAT IPIPMODE VXLANMODE DISABLED
@@ -317,7 +317,7 @@ kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
kubectl -n ippool-test get pods -l app=nginx -o wide
```

-1. Clean up the ippool-test namespace.
+1. Clean up the `ippool-test` namespace.

```bash
kubectl delete ns ippool-test
