ci: update CI Vagrant VM IP addresses #17900

Merged · 1 commit · Nov 17, 2021
2 changes: 1 addition & 1 deletion Documentation/concepts/security/proxy/envoy.rst
@@ -529,7 +529,7 @@ and adding the ''--debug-verbose=flow'' flag.

$ sudo service cilium stop

$ sudo /usr/bin/cilium-agent --debug --ipv4-range 10.11.0.0/16 --kvstore-opt consul.address=192.168.33.11:8500 --kvstore consul -t vxlan --fixed-identity-mapping=128=kv-store --fixed-identity-mapping=129=kube-dns --debug-verbose=flow
$ sudo /usr/bin/cilium-agent --debug --ipv4-range 10.11.0.0/16 --kvstore-opt consul.address=192.168.60.11:8500 --kvstore consul -t vxlan --fixed-identity-mapping=128=kv-store --fixed-identity-mapping=129=kube-dns --debug-verbose=flow
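If the agent cannot reach the kvstore after this address change, one quick sanity check (assuming the stock consul HTTP API is listening on port 8500; the path below is consul's standard leader-status endpoint) is:

.. code-block:: shell-session

   $ curl -s http://192.168.60.11:8500/v1/status/leader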


Step 13: Add Runtime Tests
20 changes: 10 additions & 10 deletions Documentation/gettingstarted/egress-gateway.rst
@@ -88,7 +88,7 @@ cluster, and use it as the destination of the egress traffic.
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-04-04 21:58:57 UTC; 1min 3s ago
[...]
$ curl http://192.168.33.13:80 # Assume 192.168.33.13 is the external IP of the node
$ curl http://192.168.60.13:80 # Assume 192.168.60.13 is the external IP of the node
[...]
<title>Welcome to nginx!</title>
[...]
@@ -106,7 +106,7 @@ the configurations specified in the CiliumEgressNATPolicy.
NAME READY STATUS RESTARTS AGE
pod/mediabot 1/1 Running 0 14s

$ kubectl exec mediabot -- curl http://192.168.33.13:80
$ kubectl exec mediabot -- curl http://192.168.60.13:80
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
[...]

@@ -118,17 +118,17 @@ will contain something like the following:

$ tail /var/log/nginx/access.log
[...]
192.168.33.11 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"
192.168.60.11 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"

In the previous example, the client pod is running on the node ``192.168.33.11``, so the result makes sense.
In the previous example, the client pod is running on the node ``192.168.60.11``, so the result makes sense.
This is the default Kubernetes behavior without egress NAT.
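To double-check which node the client pod was scheduled on (and therefore which node IP to expect in the access log), ``kubectl get pod -o wide`` is enough:

.. code-block:: shell-session

   $ kubectl get pod mediabot -o wide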

Configure Egress IPs
====================

Deploy the following deployment to assign additional egress IP to the gateway node. The node that runs the
pod will have additional IP addresses configured on the external interface (``enp0s8`` as in the example),
and become the egress gateway. In the following example, ``192.168.33.100`` and ``192.168.33.101`` becomes
and become the egress gateway. In the following example, ``192.168.60.100`` and ``192.168.60.101`` becomes
the egress IP which can be consumed by Egress NAT Policy. Please make sure these IP addresses are routable
on the interface they are assigned to, otherwise the return traffic won't be able to route back.
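Once the deployment is running, a quick way to confirm that the addresses were actually added (assuming ``enp0s8`` is the external interface, as in the example) is to list them on the gateway node:

.. code-block:: shell-session

   $ ip address show dev enp0s8 | grep 192.168.60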

@@ -139,8 +139,8 @@ Create Egress NAT Policy

Apply the following Egress NAT Policy, which basically means: when the pod is running in the namespace
``default`` and the pod itself has label ``org: empire`` and ``class: mediabot``, if it's trying to talk to
IP CIDR ``192.168.33.13/32``, then use egress IP ``192.168.33.100``. In this example, it tells Cilium to
forward the packet from client pod to the gateway node with egress IP ``192.168.33.100``, and masquerade
IP CIDR ``192.168.60.13/32``, then use egress IP ``192.168.60.100``. In this example, it tells Cilium to
forward the packet from client pod to the gateway node with egress IP ``192.168.60.100``, and masquerade
with that IP address.

.. literalinclude:: ../../examples/kubernetes-egress-gateway/egress-nat-policy.yaml
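The file referenced above is the canonical policy; as a rough sketch of its shape (the field names follow the ``cilium.io/v2alpha1`` CRD and should be treated as assumptions rather than a definitive spec), it looks roughly like this:

.. code-block:: shell-session

   $ cat <<EOF | kubectl apply -f -
   apiVersion: cilium.io/v2alpha1
   kind: CiliumEgressNATPolicy
   metadata:
     name: egress-sample
   spec:
     egress:
     - podSelector:
         matchLabels:
           org: empire
           class: mediabot
           # the namespace is matched via this special label
           io.kubernetes.pod.namespace: default
     destinationCIDRs:
     - 192.168.60.13/32
     egressSourceIP: 192.168.60.100
   EOF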
@@ -149,17 +149,17 @@ Let's switch back to the client pod and verify it works.

.. code-block:: shell-session

$ kubectl exec mediabot -- curl http://192.168.33.13:80
$ kubectl exec mediabot -- curl http://192.168.60.13:80
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
[...]

Verify access log from nginx node or service of your chose that the request is coming from egress IP now
instead of one of the nodes in Kubernetes cluster. In the nginx's case, you will see logs like the
following shows that the request is coming from ``192.168.33.100`` now, instead of ``192.168.33.11``.
following shows that the request is coming from ``192.168.60.100`` now, instead of ``192.168.60.11``.

.. code-block:: shell-session

$ tail /var/log/nginx/access.log
[...]
192.168.33.100 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"
192.168.60.100 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"

6 changes: 3 additions & 3 deletions Documentation/gettingstarted/encryption-wireguard.rst
@@ -154,7 +154,7 @@ commands can be helpful:
"10.154.1.107/32",
"10.154.1.195/32"
],
"endpoint": "192.168.34.12:51871",
"endpoint": "192.168.61.12:51871",
"last-handshake-time": "2021-05-05T12:31:24.418Z",
"public-key": "RcYfs/GEkcnnv6moK5A1pKnd+YYUue21jO9I08Bv0zo="
}
@@ -179,7 +179,7 @@ commands can be helpful:
"10.154.2.103/32",
"10.154.2.142/32"
],
"endpoint": "192.168.34.11:51871",
"endpoint": "192.168.61.11:51871",
"last-handshake-time": "2021-05-05T12:31:24.631Z",
"public-key": "DrAc2EloK45yqAcjhxerQKwoYUbLDjyrWgt9UXImbEY="
}
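The peer endpoints shown above can also be cross-checked on the node itself with the standard WireGuard tooling (assuming the ``wg`` utility is installed and that the agent names its device ``cilium_wg0``):

.. code-block:: shell-session

   $ sudo wg show cilium_wg0 endpoints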
@@ -228,4 +228,4 @@ The current status of these limitations is tracked in :gh-issue:`15462`.
Legal
=====

"WireGuard" is a registered trademark of Jason A. Donenfeld.
"WireGuard" is a registered trademark of Jason A. Donenfeld.
4 changes: 2 additions & 2 deletions Documentation/gettingstarted/host-firewall.rst
@@ -122,8 +122,8 @@ breakages.
.. code-block:: shell-session

$ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium monitor -t policy-verdict --related-to $HOST_EP_ID
Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 1, ingress, action allow, match L3-Only, 192.168.33.12 -> 192.168.33.11 EchoRequest
Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 6, ingress, action allow, match L3-Only, 192.168.33.12:37278 -> 192.168.33.11:2379 tcp SYN
Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 1, ingress, action allow, match L3-Only, 192.168.60.12 -> 192.168.60.11 EchoRequest
Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 6, ingress, action allow, match L3-Only, 192.168.60.12:37278 -> 192.168.60.11:2379 tcp SYN
Policy verdict log: flow 0x0 local EP ID 1687, remote ID 2, proto 6, ingress, action audit, match none, 10.0.2.2:47500 -> 10.0.2.15:6443 tcp SYN
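The command above assumes ``CILIUM_NAMESPACE``, ``CILIUM_POD_NAME`` and ``HOST_EP_ID`` are already set; one possible way to populate them (assuming the agent pods carry the usual ``k8s-app=cilium`` label and that the host endpoint is the one labelled ``reserved:host``) is:

.. code-block:: shell-session

   $ CILIUM_NAMESPACE=kube-system
   $ CILIUM_POD_NAME=$(kubectl -n "$CILIUM_NAMESPACE" get pods -l k8s-app=cilium \
         -o jsonpath='{.items[0].metadata.name}')
   $ HOST_EP_ID=$(kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD_NAME" -- \
         cilium endpoint list | awk '/reserved:host/ {print $1}')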

For details on how to derive the network policies from the output of ``cilium
2 changes: 1 addition & 1 deletion Documentation/gettingstarted/ipam-cluster-pool.rst
@@ -45,7 +45,7 @@ Validate installation
.. code-block:: shell-session

$ cilium status --all-addresses
KVStore: Ok etcd: 1/1 connected, has-quorum=true: https://192.168.33.11:2379 - 3.3.12 (Leader)
KVStore: Ok etcd: 1/1 connected, has-quorum=true: https://192.168.60.11:2379 - 3.3.12 (Leader)
[...]
IPAM: IPv4: 2/256 allocated,
Allocated addresses:
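Cluster-pool mode itself is selected at install time; a hedged sketch of the corresponding Helm values (option names as used by the 1.10/1.11 charts, so double-check them against your chart version) would be:

.. code-block:: shell-session

   $ helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
       --set ipam.mode=cluster-pool \
       --set ipam.operator.clusterPoolIPv4PodCIDR=10.0.0.0/8 \
       --set ipam.operator.clusterPoolIPv4MaskSize=24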
2 changes: 1 addition & 1 deletion Documentation/gettingstarted/ipam-crd.rst
@@ -61,7 +61,7 @@ Create a CiliumNode CR
.. code-block:: shell-session

$ cilium status --all-addresses
KVStore: Ok etcd: 1/1 connected, has-quorum=true: https://192.168.33.11:2379 - 3.3.12 (Leader)
KVStore: Ok etcd: 1/1 connected, has-quorum=true: https://192.168.60.11:2379 - 3.3.12 (Leader)
[...]
IPAM: IPv4: 2/4 allocated,
Allocated addresses:
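For reference, a minimal ``CiliumNode`` resource of the kind this section creates might look as follows; the node name and pool addresses are placeholders, and the four-address pool matches the ``2/4 allocated`` line above:

.. code-block:: shell-session

   $ cat <<EOF | kubectl apply -f -
   apiVersion: cilium.io/v2
   kind: CiliumNode
   metadata:
     name: worker-1   # must match the Kubernetes node name
   spec:
     ipam:
       pool:
         10.9.0.1: {}
         10.9.0.2: {}
         10.9.0.3: {}
         10.9.0.4: {}
   EOF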
8 changes: 4 additions & 4 deletions Documentation/gettingstarted/local-redirect-policy.rst
@@ -449,7 +449,7 @@ security credentials for pods.
You can verify this by running a curl command to the AWS metadata server from
one of the application pods, and tcpdump command on the same EKS cluster node as the
pod. Following is an example output, where ``192.169.98.118`` is the ip
address of an application pod, and ``192.168.33.99`` is the ip address of the
address of an application pod, and ``192.168.60.99`` is the ip address of the
kiam agent running on the same node as the application pod.

.. code-block:: shell-session
@@ -467,11 +467,11 @@ security credentials for pods.

.. code-block:: shell-session

$ sudo tcpdump -i any -enn "(port 8181) and (host 192.168.33.99 and 192.168.98.118)"
$ sudo tcpdump -i any -enn "(port 8181) and (host 192.168.60.99 and 192.168.98.118)"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
05:16:05.229597 In de:e4:e9:94:b5:9f ethertype IPv4 (0x0800), length 76: 192.168.98.118.47934 > 192.168.33.99.8181: Flags [S], seq 669026791, win 62727, options [mss 8961,sackOK,TS val 2539579886 ecr 0,nop,wscale 7], length 0
05:16:05.229657 Out 56:8f:62:18:6f:85 ethertype IPv4 (0x0800), length 76: 192.168.33.99.8181 > 192.168.98.118.47934: Flags [S.], seq 2355192249, ack 669026792, win 62643, options [mss 8961,sackOK,TS val 4263010641 ecr 2539579886,nop,wscale 7], length 0
05:16:05.229597 In de:e4:e9:94:b5:9f ethertype IPv4 (0x0800), length 76: 192.168.98.118.47934 > 192.168.60.99.8181: Flags [S], seq 669026791, win 62727, options [mss 8961,sackOK,TS val 2539579886 ecr 0,nop,wscale 7], length 0
05:16:05.229657 Out 56:8f:62:18:6f:85 ethertype IPv4 (0x0800), length 76: 192.168.60.99.8181 > 192.168.98.118.47934: Flags [S.], seq 2355192249, ack 669026792, win 62643, options [mss 8961,sackOK,TS val 4263010641 ecr 2539579886,nop,wscale 7], length 0
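The application-side request that produces this traffic is simply a curl against the well-known AWS metadata address; the pod name below is a placeholder:

.. code-block:: shell-session

   $ kubectl exec -ti <app-pod> -- curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/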

Miscellaneous
=============
10 changes: 5 additions & 5 deletions Documentation/operations/troubleshooting.rst
@@ -126,7 +126,7 @@ e.g.:
.. code-block:: shell-session

$ cilium status
KVStore: Ok etcd: 1/1 connected: https://192.168.33.11:2379 - 3.2.7 (Leader)
KVStore: Ok etcd: 1/1 connected: https://192.168.60.11:2379 - 3.2.7 (Leader)
ContainerRuntime: Ok
Kubernetes: Ok OK
Kubernetes APIs: ["core/v1::Endpoint", "extensions/v1beta1::Ingress", "core/v1::Node", "CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service"]
@@ -586,7 +586,7 @@ Understanding etcd status
The etcd status is reported when running ``cilium status``. The following line
represents the status of etcd::

KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=true: https://192.168.33.11:2379 - 3.4.9 (Leader)
KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=true: https://192.168.60.11:2379 - 3.4.9 (Leader)

OK:
The overall status. Either ``OK`` or ``Failure``.
@@ -606,7 +606,7 @@ has-quorum:
consecutive-errors:
Number of consecutive quorum errors. Only printed if errors are present.

https://192.168.33.11:2379 - 3.4.9 (Leader):
https://192.168.60.11:2379 - 3.4.9 (Leader):
List of all etcd endpoints stating the etcd version and whether the
particular endpoint is currently the elected leader. If an etcd endpoint
cannot be reached, the error is shown.
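If an endpoint is reported as unreachable, it can also be probed directly with ``etcdctl``; the certificate paths below are placeholders for whatever your etcd deployment uses:

.. code-block:: shell-session

   $ ETCDCTL_API=3 etcdctl --endpoints=https://192.168.60.11:2379 \
       --cacert=/path/to/ca.crt --cert=/path/to/client.crt --key=/path/to/client.key \
       endpoint status --write-out=table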
@@ -644,7 +644,7 @@ cluster size. The larger the cluster, the longer the `interval
Example of a status with a quorum failure which has not yet reached the
threshold::

KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=2m2.778966915s since last heartbeat update has been received, consecutive-errors=1: https://192.168.33.11:2379 - 3.4.9 (Leader)
KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=2m2.778966915s since last heartbeat update has been received, consecutive-errors=1: https://192.168.60.11:2379 - 3.4.9 (Leader)

Example of a status with the number of quorum failures exceeding the threshold::

@@ -842,7 +842,7 @@ State Propagation
},
endpoints: (map[k8s.ServiceID]*k8s.Endpoints) (len=2) {
(k8s.ServiceID) kube-system/kube-dns: (*k8s.Endpoints)(0xc0000103c0)(10.16.127.105:53/TCP,10.16.127.105:53/UDP,10.16.127.105:9153/TCP),
(k8s.ServiceID) default/kubernetes: (*k8s.Endpoints)(0xc0000103f8)(192.168.33.11:6443/TCP)
(k8s.ServiceID) default/kubernetes: (*k8s.Endpoints)(0xc0000103f8)(192.168.60.11:6443/TCP)
},
externalEndpoints: (map[k8s.ServiceID]k8s.externalEndpoints) {
}
10 changes: 5 additions & 5 deletions Documentation/operations/upgrade.rst
@@ -1379,7 +1379,7 @@ Export the current ConfigMap
etcd-config: |-
---
endpoints:
- https://192.168.33.11:2379
- https://192.168.60.11:2379
#
# In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line
# and create a kubernetes secret by following the tutorial in
@@ -1440,7 +1440,7 @@ new options while keeping the configuration that we wanted:
etcd-config: |-
---
endpoints:
- https://192.168.33.11:2379
- https://192.168.60.11:2379
#
# In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line
# and create a kubernetes secret by following the tutorial in
@@ -1609,13 +1609,13 @@ Example migration

$ kubectl exec -n kube-system cilium-preflight-1234 -- cilium preflight migrate-identity
INFO[0000] Setting up kvstore client
INFO[0000] Connecting to etcd server... config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.33.11:2379]" subsys=kvstore
INFO[0000] Connecting to etcd server... config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" subsys=kvstore
INFO[0000] Setting up kubernetes client
INFO[0000] Establishing connection to apiserver host="https://192.168.33.11:6443" subsys=k8s
INFO[0000] Establishing connection to apiserver host="https://192.168.60.11:6443" subsys=k8s
INFO[0000] Connected to apiserver subsys=k8s
INFO[0000] Got lease ID 29c66c67db8870c8 subsys=kvstore
INFO[0000] Got lock lease ID 29c66c67db8870ca subsys=kvstore
INFO[0000] Successfully verified version of etcd endpoint config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.33.11:2379]" etcdEndpoint="https://192.168.33.11:2379" subsys=kvstore version=3.3.13
INFO[0000] Successfully verified version of etcd endpoint config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" etcdEndpoint="https://192.168.60.11:2379" subsys=kvstore version=3.3.13
INFO[0000] CRD (CustomResourceDefinition) is installed and up-to-date name=CiliumNetworkPolicy/v2 subsys=k8s
INFO[0000] Updating CRD (CustomResourceDefinition)... name=v2.CiliumEndpoint subsys=k8s
INFO[0001] CRD (CustomResourceDefinition) is installed and up-to-date name=v2.CiliumEndpoint subsys=k8s
6 changes: 3 additions & 3 deletions Vagrantfile
@@ -283,9 +283,9 @@ Vagrant.configure(2) do |config|
config.vm.synced_folder cilium_dir, cilium_path, type: "nfs", nfs_udp: false
# Don't forget to enable this ports on your host before starting the VM
# in order to have nfs working
# iptables -I INPUT -p tcp -s 192.168.34.0/24 --dport 111 -j ACCEPT
# iptables -I INPUT -p tcp -s 192.168.34.0/24 --dport 2049 -j ACCEPT
# iptables -I INPUT -p tcp -s 192.168.34.0/24 --dport 20048 -j ACCEPT
# iptables -I INPUT -p tcp -s 192.168.61.0/24 --dport 111 -j ACCEPT
# iptables -I INPUT -p tcp -s 192.168.61.0/24 --dport 2049 -j ACCEPT
# iptables -I INPUT -p tcp -s 192.168.61.0/24 --dport 20048 -j ACCEPT
# if using nftables, in Fedora (with firewalld), use:
# nft -f ./contrib/vagrant/nftables.rules

8 changes: 4 additions & 4 deletions clustermesh-apiserver/tls.rst
@@ -40,7 +40,7 @@ using an externally accessible service IP from your cluster:

::

192.168.36.11 clustermesh-apiserver.cilium.io
192.168.56.11 clustermesh-apiserver.cilium.io
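One way to add that mapping on the VM is a plain ``/etc/hosts`` append; adjust as needed if you manage host entries differently:

.. code-block:: shell-session

   $ echo "192.168.56.11 clustermesh-apiserver.cilium.io" | sudo tee -a /etc/hosts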

Manual instructions using openssl
=================================
@@ -217,7 +217,7 @@ externally accessible service IP from your cluster:

::

192.168.36.11 clustermesh-apiserver.ciliumn.io
192.168.56.11 clustermesh-apiserver.ciliumn.io

Starting Cilium in a Container in a VM
======================================
@@ -228,10 +228,10 @@ $ docker run -d --name cilium --restart always --privileged --cap-add ALL --log-
--volume /home/vagrant/cilium/etcd:/var/lib/cilium/etcd


/usr/bin/cilium-agent --kvstore etcd --kvstore-opt etcd.config=/var/lib/cilium/etcd/config.yaml --ipv4-node 192.168.36.10 --join-cluster
/usr/bin/cilium-agent --kvstore etcd --kvstore-opt etcd.config=/var/lib/cilium/etcd/config.yaml --ipv4-node 192.168.56.10 --join-cluster
sudo mount bpffs -t bpf /sys/fs/bpf

--add-host clustermesh-apiserver.cilium.io:192.168.36.11
--add-host clustermesh-apiserver.cilium.io:192.168.56.11
--network host
--privileged
--cap-add ALL
6 changes: 3 additions & 3 deletions contrib/vagrant/nftables.rules
@@ -1,3 +1,3 @@
insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.34.0/24 tcp dport 20048 ct state { 0x8, 0x40 } accept
insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.34.0/24 tcp dport 2049 ct state { 0x8, 0x40 } accept
insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.34.0/24 tcp dport 111 ct state { 0x8, 0x40 } accept
insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.61.0/24 tcp dport 20048 ct state { 0x8, 0x40 } accept
insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.61.0/24 tcp dport 2049 ct state { 0x8, 0x40 } accept
insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.61.0/24 tcp dport 111 ct state { 0x8, 0x40 } accept
2 changes: 1 addition & 1 deletion contrib/vagrant/scripts/helpers.bash
@@ -31,7 +31,7 @@ if [[ -n "${IPV6_EXT}" ]]; then
# controllers_ips[1] contains the IP without brackets
controllers_ips=( "[${master_ip}]" "${master_ip}" )
else
master_ip=${MASTER_IPV4:-"192.168.33.11"}
master_ip=${MASTER_IPV4:-"192.168.60.11"}
controllers_ips=( "${master_ip}" "${master_ip}" )
fi

10 changes: 5 additions & 5 deletions contrib/vagrant/start.sh
@@ -8,30 +8,30 @@ chmod a+x "$dir/restart.sh"

# Master's IPv4 address. Workers' IPv4 address will have their IP incremented by
# 1. The netmask used will be /24
export 'MASTER_IPV4'=${MASTER_IPV4:-"192.168.33.11"}
export 'MASTER_IPV4'=${MASTER_IPV4:-"192.168.60.11"}
# NFS address is only set if NFS option is active. This will create a new
# network interface for each VM with starting on this IP. This IP will be
# available to reach from the host.
export 'MASTER_IPV4_NFS'=${MASTER_IPV4_NFS:-"192.168.34.11"}
export 'MASTER_IPV4_NFS'=${MASTER_IPV4_NFS:-"192.168.61.11"}
# Enable IPv4 mode. It's enabled by default since it's required for several
# runtime tests.
export 'IPV4'=${IPV4:-1}
# Exposed IPv6 node CIDR, only set if IPV4 is disabled. Each node will be setup
# with a IPv6 network available from the host with $IPV6_PUBLIC_CIDR +
# 6to4($MASTER_IPV4). For IPv4 "192.168.33.11" we will have for example:
# 6to4($MASTER_IPV4). For IPv4 "192.168.60.11" we will have for example:
# master : FD00::B/16
# worker 1: FD00::C/16
# The netmask used will be /16
export 'IPV6_PUBLIC_CIDR'=${IPV4+"FD00::"}
# Internal IPv6 node CIDR, always set up by default. Each node will be setup
# with a IPv6 network available from the host with IPV6_INTERNAL_CIDR +
# 6to4($MASTER_IPV4). For IPv4 "192.168.33.11" we will have for example:
# 6to4($MASTER_IPV4). For IPv4 "192.168.60.11" we will have for example:
# master : FD01::B/16
# worker 1: FD01::C/16
# The netmask used will be /16
export 'IPV6_INTERNAL_CIDR'=${IPV4+"FD01::"}
# Cilium IPv6 node CIDR. Each node will be setup with IPv6 network of
# $CILIUM_IPV6_NODE_CIDR + 6to4($MASTER_IPV4). For IPv4 "192.168.33.11" we will
# $CILIUM_IPV6_NODE_CIDR + 6to4($MASTER_IPV4). For IPv4 "192.168.60.11" we will
# have for example:
# master : FD02::0:0:0/96
# worker 1: FD02::1:0:0/96
@@ -39,7 +39,7 @@ spec:
privileged: true
env:
- name: EGRESS_IPS
value: "192.168.33.100/24 192.168.33.101/24"
value: "192.168.60.100/24 192.168.60.101/24"
args:
- "for i in $EGRESS_IPS; do ip address add $i dev enp0s8; done; sleep 10000000"
lifecycle:
Expand Down