This repo is an example of how to use the above-mentioned stack to retrieve the client source IP in an application. The IP is surfaced as an X-Forwarded-For header, which is commonly used by applications.
The core products/features used here are:
- Envoy filters and Istio (TSM)
- proxy-protocol
- AVI load balancer
- AKO
- TKGs
- Flux Helm controller(TMC Provided)
- OPA Gatekeeper mutations(TMC provided)
The following are assumed to be in place before starting:
- TKGs deployed
- TKGs workload cluster deployed
- Flux Helm controller installed in the cluster
- AVI installed and working with TKGs
- TSM subscription
- Cluster managed by or attached to TMC
We need to create a new AVI application profile so that proxy-protocol can be enabled on our virtual service.
- Navigate to Template > Profiles.
- Create a new profile named `L4-app-proxy-protocol`.
- For Type, select L4.
- Click Enable PROXY Protocol.
- Select version 1.
- Save the profile.
When using TKGs, load balancing is already provided by AVI/AKO. However, for the functionality we need here, we must override the supervisor-provided AKO and use an in-cluster AKO. The short explanation is that in TKGs the AKO pod runs on the supervisor cluster, and layer 4 services of type LoadBalancer are handled by that AKO instance. The challenge is that this install of AKO cannot handle certain features, such as the AKO CRDs. For this reason we need to deploy AKO in-cluster using Helm; by doing this we offload the servicing of load balancers to the in-cluster AKO and get access to all features. Until AKO 1.12 you will need to make sure that any load balancers created set `spec.loadBalancerClass: ako.vmware.com/avi-lb`; this ensures no duplicate LBs are created. A sketch of such a service is shown below.
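As a minimal sketch, a Service of type LoadBalancer that should be handled by the in-cluster AKO rather than the supervisor AKO might look like this; the name, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app              # illustrative name
spec:
  type: LoadBalancer
  # Hands this LB to the in-cluster AKO instead of the supervisor-provided AKO
  loadBalancerClass: ako.vmware.com/avi-lb
  selector:
    app: my-app             # illustrative selector
  ports:
    - port: 80
      targetPort: 8080      # illustrative backend port
```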
For this install we are using a vSphere 8U2 environment with NSX-T and AVI. The same setup can be done without NSX-T, but the Helm values file will be different.
- Update the values in `helm-ako/helmrelease.yml` (an illustrative excerpt is shown after this list):
  - `nsxtT1LR` - the T1 associated with your supervisor namespace where the cluster is provisioned. This is automatically created by TKGs.
  - `vipNetworkList[0].networkName` - the ingress network. It is automatically created by TKGs.
  - `vipNetworkList[0].cidr` - the CIDR for the above network. These can be found in the AVI network list in the console.
  - `ControllerSettings.serviceEngineGroupName` - the auto-created service engine group name in the NSX-T cloud.
  - `AKOSettings.clusterName` - the name of the cluster.
  - `ControllerSettings.cloudName` - the name of the NSX-T cloud.
  - `ControllerSettings.controllerHost` - the IP or FQDN of the AVI controller.
  - `avicredentials` - your AVI credentials.
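A minimal sketch of the HelmRelease, assuming the standard AKO chart layout (where `nsxtT1LR` and `vipNetworkList` live under `NetworkSettings`); all values shown are placeholders and the actual file in this repo may be structured differently:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ako
  namespace: avi-system
spec:
  interval: 10m
  chart:
    spec:
      chart: ako
      sourceRef:
        kind: HelmRepository
        name: ako                                 # assumes a HelmRepository pointing at the AKO chart
  values:
    AKOSettings:
      clusterName: my-workload-cluster            # name of the cluster
    NetworkSettings:
      nsxtT1LR: /infra/tier-1s/my-namespace-t1    # T1 for the supervisor namespace
      vipNetworkList:
        - networkName: my-ingress-network         # ingress network created by TKGs
          cidr: 10.20.30.0/24                     # CIDR of that network
    ControllerSettings:
      serviceEngineGroupName: my-seg              # auto-created SE group in the NSX-T cloud
      cloudName: my-nsxt-cloud                    # NSX-T cloud name
      controllerHost: avi.example.com             # IP or FQDN of the AVI controller
    avicredentials:
      username: admin
      password: changeme
```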
- In the context of the workload cluster, run the following:
kubectl apply -f helm-ako/helmrelease.yml
- Validate the AKO pod is running and that there are no errors.
In this case we are using TSM to provide Istio in our clusters. Istio has functionality that allows us to inspect the proxy-protocol TCP header and inject the client IP into HTTP headers. If this is not in place, the app itself needs to know how to do this. Luckily this can be done with an EnvoyFilter in Istio.
The L4Rule is a CRD provided by AKO. It is needed to tell our Istio ingress gateway's load balancer to use the new application profile we created with proxy-protocol enabled.
kubectl apply -f helm-ako/l4rule.yml
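A minimal sketch of what `helm-ako/l4rule.yml` could contain, assuming the ingress gateway lives in `istio-system`; the rule name and namespace are illustrative and the actual file may differ:

```yaml
apiVersion: ako.vmware.com/v1alpha2
kind: L4Rule
metadata:
  name: l4-proxy-protocol        # illustrative name, referenced later by the add-l4rule mutation
  namespace: istio-system        # must live in the same namespace as the Service it is attached to
spec:
  # Points the virtual service at the AVI application profile created earlier
  applicationProfileRef: L4-app-proxy-protocol
```

The rule only takes effect once a Service of type LoadBalancer references it via the `ako.vmware.com/l4rule` annotation, which is what the Gatekeeper mutation below adds to the ingress gateway.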
We need to use some mutations to add settings to the Istio install provided by TSM. We are using the Gatekeeper mutation policies provided through TMC to do this. You can find the policies here.
Here is what the mutations are doing:
- `add-lb-class-istio` - adds the above-mentioned `loadBalancerClass` setting to the Istio ingress gateway service. This allows our in-cluster AKO to manage the LB.
- `add-l4rule` - adds the annotation to the Istio ingress gateway load balancer so that it uses our new app profile. It does this by referencing the L4Rule CRD created previously.
- `add-gateway-config` - adds an Istio gateway config to the ingress gateway to trust multiple upstream proxies, in this case AVI.
kubectl apply -f mutate/istio-mutations.yml
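As an illustration of the pattern, a Gatekeeper `Assign` mutation that sets the `loadBalancerClass` on the ingress gateway service might look roughly like this (the service name and namespace are assumptions; see the actual policies linked above):

```yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: add-lb-class-istio
spec:
  applyTo:
    - groups: [""]
      versions: ["v1"]
      kinds: ["Service"]
  match:
    scope: Namespaced
    kinds:
      - apiGroups: [""]
        kinds: ["Service"]
    namespaces: ["istio-system"]       # assumed namespace of the TSM-provided gateway
    name: istio-ingressgateway         # assumed name of the ingress gateway service
  location: "spec.loadBalancerClass"
  parameters:
    assign:
      value: ako.vmware.com/avi-lb
```

The `add-l4rule` mutation follows the same pattern but adds the `ako.vmware.com/l4rule` annotation to the same service (Gatekeeper's `AssignMetadata` kind is typically used for annotations).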
You can either use TMC or TSM directly to onboard the cluster. The docs can be found here for TMC and here for TSM. When onboarding, set the namespace inclusion to "Cluster admin owned".
At this point you should see Istio components in the cluster. You can validate that all of the mutations were successful by looking at the ingress gateway service object and pods. In AVI you should see that the virtual service is using our custom profile.
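After the mutations apply, the ingress gateway service should carry both settings. A sketch of the relevant fields, with the names assumed as above (the gateway config mutation lands on the gateway pods rather than the Service):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway                     # assumed gateway service name
  namespace: istio-system
  annotations:
    ako.vmware.com/l4rule: l4-proxy-protocol     # added by add-l4rule (must match the L4Rule name)
spec:
  type: LoadBalancer
  loadBalancerClass: ako.vmware.com/avi-lb       # added by add-lb-class-istio
```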
After the cluster has been onboarded, create a namespace in the cluster that will be used by TSM.
kubectl apply -f httpbin/namespace.yml
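A minimal sketch of what `httpbin/namespace.yml` might contain (the namespace name is an assumption based on the app being deployed):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: httpbin   # assumed namespace name
```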
Finally, add this namespace to a global namespace (GNS) in TSM. Docs on creating the GNS can be found here.
In order to enable proxy-protocol in Istio we need an EnvoyFilter, since TSM deploys Istio 1.18.5. In 1.20 this can be done with a gateway annotation instead. You can see what we are doing in the Istio docs here.
kubectl apply -f envoy/proxy-proto-filter.yml
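For reference, the pattern from the Istio docs adds the proxy-protocol listener filter to the ingress gateway; `envoy/proxy-proto-filter.yml` should look roughly like this (the gateway labels and namespace are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: proxy-protocol
  namespace: istio-system          # assumed namespace of the ingress gateway
spec:
  workloadSelector:
    labels:
      istio: ingressgateway        # assumed label on the TSM-provided gateway
  configPatches:
    - applyTo: LISTENER
      patch:
        operation: MERGE
        value:
          listener_filters:
            # Parses the PROXY protocol header sent by AVI before TLS inspection
            - name: envoy.filters.listener.proxy_protocol
            - name: envoy.filters.listener.tls_inspector
```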
Deploying the httpbin app will allow you to see the headers passed through. You will want to update the FQDN in `httpbin/istio.yml` to match your environment and DNS domains; an illustrative excerpt is shown below.
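Roughly, `httpbin/istio.yml` contains an Istio Gateway and VirtualService along the lines of the standard httpbin sample; the FQDN, port numbers, and names here are placeholders:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: httpbin                 # assumed namespace
spec:
  selector:
    istio: ingressgateway            # assumed gateway label
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "httpbin.example.com"      # replace with your FQDN
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin
  namespace: httpbin
spec:
  hosts:
    - "httpbin.example.com"          # replace with your FQDN
  gateways:
    - httpbin-gateway
  http:
    - route:
        - destination:
            host: httpbin            # the httpbin Service from httpbin/deploy.yml
            port:
              number: 8000           # assumed service port
```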
kubectl apply -f httpbin/deploy.yml
kubectl apply -f httpbin/istio.yml
curl <your-vs-fqdn>/get?show_env=true
The output should have something like this, and the IP should be the client IP instead of the service engine IP.
"X-Forwarded-For": "10.16.119.109",