TCP tunneling with Skupper
- Overview
- Prerequisites
- Step 1: Set up the demo
- Step 2: Deploy the Virtual Application Network
- Step 3: Access the public service remotely
- What just happened?
- Cleaning up
- Next steps
This is a simple demonstration of TCP communication tunneled through a Skupper network from a private to a public cluster and back again. During development of this demonstration, the private cluster was running locally, while the public cluster was on AWS.
We will set up a Skupper network between the two clusters, start a TCP echo server on the public cluster, then communicate with it from the private cluster and receive its replies. At no time is any port opened on the machine running the private cluster.
- The kubectl command-line tool, version 1.15 or later (installation guide)
- The skupper command-line tool, the latest version (installation guide)
- Two Kubernetes clusters, from any providers you choose. (In this example, the clusters are called 'public' and 'private'.)
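The kubectl version prerequisite can be checked programmatically. Here is a minimal Python sketch; the `kubectl_client_version` helper is our own, and it assumes the JSON shape printed by `kubectl version --client -o json` (a `clientVersion` object with `major` and `minor` fields):

```python
import json
import subprocess

def kubectl_client_version(version_json):
    """Parse (major, minor) out of `kubectl version --client -o json` output."""
    info = json.loads(version_json)["clientVersion"]
    # Some builds report minor versions like "15+"; strip the suffix.
    return int(info["major"]), int(info["minor"].rstrip("+"))

# Usage (requires kubectl on your PATH):
#   out = subprocess.run(["kubectl", "version", "--client", "-o", "json"],
#                        capture_output=True, text=True).stdout
#   major, minor = kubectl_client_version(out)
#   assert (major, minor) >= (1, 15), "kubectl 1.15 or later is required"
```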
- On your local machine, clone the example repo:
cd   # Go to your ${HOME} directory.
git clone https://github.com/skupperproject/skupper-example-tcp-echo
- Still on your local machine, open two separate terminal sessions: one for the 'public' and one for the 'private' cluster.
- Set Up the Public Cluster
1. Go to your 'public' session terminal window.
2. Make your public kubeconfig file.
export KUBECONFIG=/tmp/public-kubeconfig
3. Log in to this public cluster.
4. Create the 'public' namespace and move into it.
kubectl create namespace public
kubectl config set-context --current --namespace public
5. Deploy the tcp-echo service.
kubectl apply -f ${HOME}/skupper-example-tcp-echo/public-deployment.yaml
6. Start Skupper, and expose the service.
skupper init
skupper expose --port 9090 deployment tcp-go-echo
7. Create the token that lets other sites connect to this one.
skupper token create /tmp/public-secret.yaml
- Set Up the Private Cluster
1. Go to your 'private' session terminal window.
2. Make a kubeconfig file for this cluster.
export KUBECONFIG=/tmp/private-kubeconfig
3. Log in to the private cluster.
4. Create the 'private' namespace and move into it.
kubectl create namespace private
kubectl config set-context --current --namespace private
5. Start Skupper and link to the public cluster.
skupper init
skupper link create /tmp/public-secret.yaml
6. Check the link status.
skupper link status
7. You should see a message like this:
Link link1 is active
8. If the link status is not yet active, wait a few seconds and try again.
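The wait-and-retry in step 8 can be scripted. Below is a minimal Python sketch; the `wait_for` helper is our own (not part of Skupper), and it assumes the skupper CLI is on your PATH and that `skupper link status` prints a line containing "is active" once the link is up, as shown above.

```python
import subprocess
import time

def wait_for(check, attempts=10, delay=3):
    """Poll check() until it returns True; give up after `attempts` tries."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

def link_active():
    """True once `skupper link status` reports the link is active."""
    result = subprocess.run(["skupper", "link", "status"],
                            capture_output=True, text=True)
    return "is active" in result.stdout

# Usage, once `skupper link create` has run on this cluster:
#   if wait_for(link_active):
#       print("Link is active")
```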
- Go to your private cluster terminal window.
- Find the tcp-go-echo service.
kubectl get svc
- Among the results, you should see something like this:
tcp-go-echo 172.21.33.62 9090/TCP
- Forward your local machine's port 9090 to the tcp-go-echo service:
kubectl port-forward service/tcp-go-echo 9090:9090
- Start a third terminal session, and switch to it.
- Start a telnet session to the forwarded port:
telnet 0.0.0.0 9090
- Type some text, hit enter, and see it returned to you in all caps.
hello, Skupper
tcp-go-echo-7ddbc7756c-wxgcq : HELLO, SKUPPER
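The telnet exchange above can also be reproduced in code. The following Python sketch stands in for both ends on your local machine: a toy server that mimics the reply format seen in the transcript (hostname, then the uppercased input), and a client playing the role of the telnet session. The server here is a local stand-in for illustration, not the actual tcp-go-echo image.

```python
import socket
import threading

# The tutorial forwards local port 9090; here we let the OS pick a free
# port so the sketch runs anywhere without colliding with that forward.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    """Answer one client the way tcp-go-echo does: '<hostname> : <UPPERCASED INPUT>'."""
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024).decode()
        conn.sendall(f"{socket.gethostname()} : {data.upper()}".encode())

t = threading.Thread(target=serve_once)
t.start()

# The client side plays the role of the telnet session.
with socket.socket() as client:
    client.connect(("127.0.0.1", port))
    client.sendall(b"hello, Skupper")
    reply = client.recv(1024).decode()
t.join()
srv.close()
print(reply)
```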
Both of these clusters are, in fact, public, but this would work the same way if one of them were actually private.
The TCP echo server was deployed and running on our 'public' cluster. The use of Skupper on that cluster allowed us to generate a connection token, which we then used to securely connect to that cluster from the private one. Please note that, since the connection was initiated by the Skupper instance on the private cluster, no ports on the private cluster were ever opened!
Because we told Skupper to expose the TCP Echo service, the two Skupper instances communicated with each other and the private instance learned about the TCP Echo service. The Skupper instance on the private cluster then made a forwarder to that service available on its cluster.
We then forwarded port 9090 on our local machine to the same port on the private cluster. When we started a telnet session to that port, the two instances of Skupper handled all communication, allowing us to transparently access a service running on a public cluster from the security of our private cluster.
And all Skupper traffic between the two clusters was secured with mutual TLS.
Delete the pod and the virtual application network that were created in the demonstration.
- In the terminal for the public cluster, get the pod id, delete the pod, and delete Skupper:
kubectl get pods
kubectl delete pod tcp-go-echo-<TCP-GO-ECHO-POD-ID>
skupper delete
- In the terminal for the private cluster, delete Skupper:
skupper delete