Organizations often use IP allowlisting to secure access to their cloud applications: many SaaS platforms accept traffic only from pre-registered source IPs and deny everything else. To communicate with these services, your traffic must originate from one of those registered addresses.
With Calico Egress Gateways for AWS, you can dictate the public source IP for Kubernetes workload traffic bound for the internet. This is achieved by associating AWS Elastic IP addresses with Kubernetes namespaces and pods. Calico manages this in the background, ensuring that egress traffic from these workloads always uses one of the known public IPs.
Calico Egress Gateways guarantee that traffic from your Kubernetes environment (such as an Amazon Elastic Kubernetes Service cluster) to external platforms consistently originates from known IP addresses, which is essential when working with services that enforce IP allowlisting.
This guide will show you how to integrate Calico's Custom Resources with AWS Elastic IP addresses using an Infrastructure as Code (IaC) pattern. This integration will automate public IP allocation for Kubernetes workloads connecting to approved SaaS platforms.
We'll use Terraform to deploy this reference architecture automatically, then walk through the deployment and demonstrate how to use Egress Gateways with Calico on AWS.
First, ensure that you have installed the following tools locally.
Make sure you have completed the prerequisites and then clone the Terraform blueprint:
git clone git@github.com:tigera-solutions/calico-egress-gateway-public-ip-anchoring.git
Switch to the aws subdirectory:
cd aws
Optional: Edit the variables.tf file to customize the configuration.
Initialize and apply the Terraform configurations:
terraform init
terraform apply
Update your kubeconfig with the EKS cluster credentials as indicated in the Terraform output:
aws eks --region us-east-1 update-kubeconfig --name demo --alias demo
Check the status of Calico in your EKS cluster:
kubectl get tigerastatus
Join your EKS cluster to Calico Cloud as illustrated:
join-eks-to-calico-cloud.mp4
Check the cluster status:
kubectl get tigerastatus
Enable egress IP support, AWS secondary IP support, and set the flow logs flush interval:
kubectl patch felixconfiguration default --type='merge' -p '{
"spec": {
"egressIPSupport": "EnabledPerNamespaceOrPerPod",
"awsSecondaryIPSupport": "Enabled",
"dnsLogsFlushInterval": "15s",
"l7LogsFlushInterval": "15s",
"flowLogsFlushInterval": "15s",
"flowLogsFileAggregationKindForAllowed": 1
}
}'
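To confirm the patch took effect, the same fields can be read back. The jq filter below is demonstrated against a sample JSON document so it can be checked offline; against a live cluster, feed it the output of `kubectl get felixconfiguration default -o json` instead.

```shell
# Read back the two egress-related Felix settings from a FelixConfiguration
# JSON document. Shown here with a sample document; on a live cluster, pipe
# `kubectl get felixconfiguration default -o json` into the same function.
check_egress_settings() {
  jq -r '.spec.egressIPSupport + " / " + .spec.awsSecondaryIPSupport'
}
echo '{"spec":{"egressIPSupport":"EnabledPerNamespaceOrPerPod","awsSecondaryIPSupport":"Enabled"}}' \
  | check_egress_settings
# prints: EnabledPerNamespaceOrPerPod / Enabled
```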
Change to the k8s directory:
cd ..
cd k8s
Examine the variables.tf file for Calico Egress Gateways settings.
Use Terraform to create six egress gateways, ensuring that each gateway has only one replica:
terraform init
terraform apply --auto-approve \
--var egress_gateway_replica_count="1" \
--var egress_gateway_count="6"
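As a quick sanity check after the apply completes, the expected number of gateway pods follows directly from the two variables; a minimal sketch, assuming one Deployment per gateway:

```shell
# Total egress-gateway pods = gateway count x replicas per gateway;
# with the values passed to terraform apply above:
egress_gateway_count=6
egress_gateway_replica_count=1
echo "expecting $((egress_gateway_count * egress_gateway_replica_count)) egress gateway pods"
# Compare against the live cluster, e.g. with:
#   kubectl get egressgateways --all-namespaces
```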
Return to the project root and apply the manifests:
cd ..
kubectl apply -f manifests
Test the configuration of each Egress Gateway:
for egw in {1..3}; do
echo -n "Elastic IPs for EGW-${egw}: "
kubectl get egressgateway egw-${egw} -o json | jq -r '.spec.aws.elasticIPs[]'
kubectl annotate pod netshoot --overwrite egress.projectcalico.org/selector="egress-gateway == 'egw-${egw}'"
echo -n "Public IP for netshoot pod via EGW-${egw}: "
kubectl exec -it netshoot -- curl ifconfig.me
echo
echo "----------"
echo
done
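The loop relies on the `egress.projectcalico.org/selector` pod annotation, whose value is a Calico selector expression, so the quoting matters. A small helper (hypothetical, for illustration only) makes the shape of that value explicit:

```shell
# Build the selector value the loop above passes to kubectl annotate;
# note the single quotes around the gateway name inside the expression.
selector_for() { printf "egress-gateway == '%s'\n" "$1"; }
selector_for egw-1
# prints: egress-gateway == 'egw-1'
# kubectl annotate pod netshoot --overwrite \
#   "egress.projectcalico.org/selector=$(selector_for egw-1)"
```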
We connect the netshoot pod to each egress gateway in turn. After each switch, we query ifconfig.me, a service that simply echoes back the public IP address your request arrived from. This confirms that we control the public source IP of outgoing traffic from a specific pod or namespace.
Use the Calico Cloud Dynamic Service Graph to monitor the traffic originating from your workloads as it passes through the egress gateways and travels over the internet to reach ifconfig.me.
source-based-routing.mp4
Apply the Egress Gateway policy, then annotate the default namespace to use it:
kubectl apply -f manifests/egw-policy.yaml
kubectl annotate ns default egress.projectcalico.org/egressGatewayPolicy="egress-gateway-policy"
Check the egw-policy.yaml file to understand how traffic is directed.
Test egress traffic from the netshoot pod:
kubectl exec -it netshoot -- ping -c 2 8.8.8.8
kubectl exec -it netshoot -- ping -c 2 4.2.2.2
kubectl exec -it netshoot -- ping -c 2 8.8.4.4
Traffic sent to 8.8.8.8 is routed through egw-5, traffic to 4.2.2.2 goes through egw-6, and traffic to any other destination defaults to egw-4. This demonstrates policy-based control over the public source IP of Kubernetes egress traffic from specific namespaces and pods.
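The routing described above can be summarized as a simple destination-to-gateway lookup. The shell function below is illustrative only; the actual steering is performed by the EgressGatewayPolicy applied from manifests/egw-policy.yaml, not by this script:

```shell
# Destination-to-gateway mapping from the policy, restated as a lookup table.
gateway_for() {
  case "$1" in
    8.8.8.8) echo "egw-5" ;;
    4.2.2.2) echo "egw-6" ;;
    *)       echo "egw-4" ;;  # default gateway for all other destinations
  esac
}
gateway_for 8.8.4.4
# prints: egw-4
```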
Use the Calico Cloud Dynamic Service Graph to observe and track the traffic flow from your workloads as it is routed through the egress gateways according to the defined policy.
policy-based-routing.mp4
Reset the configurations and repeat the process if needed:
kubectl annotate ns default egress.projectcalico.org/egressGatewayPolicy-
kubectl annotate pod netshoot egress.projectcalico.org/selector-
To tear down the resources created in this example, first remove the Calico Helm release from the Terraform state so that terraform destroy does not attempt to uninstall it, then destroy:
cd aws
terraform state rm helm_release.calico
terraform destroy --auto-approve