This is a prototype deployment of an I2P test network using i2pd on k3s. To properly reseed the routers, Calico with static IPs is used as the CNI.
- A GNU/Linux system for our k3s cluster
- 4 cores and 8 GB RAM or more are recommended (depending on how many routers you want to deploy)
- The system needs internet connectivity for the initial setup and for Kubernetes to pull the i2pd container images
- I do not recommend exposing this node directly to the internet!
- For further information see the k3s requirements: https://docs.k3s.io/installation/requirements
- Command line tools
  - Install `kubectl` (https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
  - Install `helm` (https://helm.sh/docs/intro/install/)
  - Install `calicoctl` (https://docs.tigera.io/calico/latest/operations/calicoctl/install)
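If these tools are not installed yet, one way to get them on a Linux amd64 host looks roughly like this (a sketch based on the linked docs; prefer the official instructions for current versions and checksums):

# kubectl (latest stable release)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# helm (official install script)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh && ./get_helm.sh
# calicoctl (standalone binary, version should match the calico install below)
curl -L https://github.com/projectcalico/calico/releases/download/v3.27.0/calicoctl-linux-amd64 -o calicoctl
chmod +x calicoctl && sudo mv calicoctl /usr/local/bin/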
For an up-to-date version, check the Calico docs: https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart
# k3s install without default flannel
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=10.4.0.0/16 --disable-network-policy --disable=traefik" sh -
# get kube config for kubectl
mkdir -p ~/.kube
sudo k3s kubectl config view --raw | tee ~/.kube/config
chmod 600 ~/.kube/config
export KUBECONFIG=~/.kube/config
echo "export KUBECONFIG=~/.kube/config" >> .bashrc
# show k3s nodes
# you should see one k3s node with the status "Ready"
kubectl get nodes -o wide
# install calico
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
# wait until all pods are ready
watch kubectl get pods --all-namespaces
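Once the operator has rolled everything out, you can sanity-check the Calico installation:

# all components should report AVAILABLE=True
kubectl get tigerastatus
# calico pods should be Running
kubectl get pods -n calico-system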
To assign static IPs to our routers we need to create a Calico `IPPool`. In this case we use a `nodeSelector` that matches nothing by default, so the IPs have to be assigned manually via a pod annotation (see setup.sh).
Config:
# test-pool.yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: test-pool
spec:
  cidr: 10.8.0.0/16
  natOutgoing: true
  disabled: false
  nodeSelector: "!all()"
Apply the config:
# apply new pool
calicoctl apply -f test-pool.yaml
# get pools
calicoctl get ippools
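With the pool in place, a pod can request a specific address from it via Calico's `cni.projectcalico.org/ipAddrs` annotation, which has to be set at pod creation time. A minimal standalone test could look like this (pod name and IP are just examples):

# create a throwaway pod with a fixed IP from test-pool
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ip-test
  annotations:
    cni.projectcalico.org/ipAddrs: '["10.8.0.10"]'
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
EOF
# the IP column should show 10.8.0.10
kubectl get pod ip-test -o wide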
The setup consists of three steps:
- First we deploy our i2pd routers without reseed information via `helm install`. During the install we have to set a pod annotation for Calico to assign a static IP address to each i2pd pod.
- Once all pods have been started and are ready, we copy the newly generated router.info files from each of the pods, zip them and save them as `seed.zip` in the local directory.
- After the zip file has been generated we kill all containers and upgrade the deployment via `helm upgrade`. The `seed.zip` is automatically mounted into all pods via a ConfigMap.
See setup.sh
cd helm/i2pd-chart
./setup.sh
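For orientation, the seed-collection step (the second bullet above) boils down to something like the sketch below. This is not the actual setup.sh; the pod label, the data path inside the container and the file naming are assumptions:

# collect each router's router.info and bundle them as seed.zip
mkdir -p seed
for pod in $(kubectl get pods -l app.kubernetes.io/name=i2pd -o name | cut -d/ -f2); do
  # assumed data directory of the purplei2p/i2pd image
  kubectl exec "$pod" -- cat /home/i2pd/data/router.info > "seed/routerInfo-$pod.dat"
done
(cd seed && zip ../seed.zip ./*.dat)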
The i2pd.conf is embedded in the helm values:
...
config: |
  log = stdout
  loglevel = debug
...
See the helm values.yaml here
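To run the routers with a different i2pd.conf you can override that key in your own values file, for example (this assumes the `config` key sits at the top level as shown; the release name and chart path are illustrative):

# my-values.yaml overrides the embedded i2pd.conf
cat > my-values.yaml <<'EOF'
config: |
  log = stdout
  loglevel = info
EOF
helm upgrade --install i2pd . -f my-values.yaml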
The relative image name is configured inside the helm values. K3s resolves relative image names against Docker Hub by default.
...
image:
  repository: purplei2p/i2pd
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: latest
...
See the helm values.yaml here
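If you want to pull the image from somewhere other than Docker Hub, use a fully qualified repository name (the registry host below is only an example):

# example: pull the i2pd image from a private/mirror registry
helm upgrade --install i2pd . \
  --set image.repository=registry.example.com/purplei2p/i2pd \
  --set image.tag=latest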
You can do a simple `tcpdump` on the k3s node.
k3s-node$ sudo tcpdump -nnni any net "10.8.0.0/16" -w traffic.pcap
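The capture can then be inspected directly on the node or copied off and opened in Wireshark:

# quick look at the first packets of the capture
tcpdump -nn -r traffic.pcap | head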
You can change the network latency and packet loss in the values.yaml. The `tc` command is run in every pod before the i2pd container is started.
trafficControl:
  enabled: true
  image:
    # see https://github.com/h-phil/alpine-iproute2
    repository: hphil/alpine-iproute2
    tag: latest
  init: |
    #!/bin/sh
    set -ex
    # delay of 40+-20ms (normal distribution) per pod
    # 0.1% loss with higher successive probability (packet burst losses)
    tc qdisc add dev eth0 root netem delay 40ms 20ms distribution normal loss 0.1% 25%
You can disable this by setting `enabled` to `false`.
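For example, to emulate a noticeably worse link you could replace the init script with something like this (the values are illustrative):

#!/bin/sh
set -ex
# 100ms +/- 30ms normally distributed delay and 1% packet loss
tc qdisc add dev eth0 root netem delay 100ms 30ms distribution normal loss 1%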
Some other test networks that use a similar concept but didn't work for me: