Get a Kubernetes LoadBalancer where you never thought it was possible.
In cloud-based Kubernetes solutions, Services can be exposed as type "LoadBalancer" and your cloud provider will provision a LoadBalancer and start routing traffic; in other words, you get ingress to your service.
inlets-operator brings that same experience to your local Kubernetes or k3s cluster (k3s/k3d/minikube/microk8s/Docker Desktop/KinD). The operator automates the creation of an inlets exit-node on public cloud, and runs the client as a Pod inside your cluster. Your Kubernetes
Service will be updated with the public IP of the exit-node and you can start receiving incoming traffic immediately.
Who is this for?
This solution is for users who want to gain incoming network access (ingress) to their private Kubernetes clusters running on their laptops, VMs, within a Docker container, on-premises, or behind NAT. The cost of the LoadBalancer with an IaaS like DigitalOcean is around 5 USD / mo, which is 10 USD cheaper than an AWS ELB or GCP LoadBalancer.
Whilst 5 USD is cheaper than a "Cloud Load Balancer", this tool is for users who cannot get incoming connections due to their network configuration, not for saving money vs. public cloud.
Status and backlog
This version of the inlets-operator is an early proof-of-concept, but it builds upon inlets, which is stable and widely used.
- Provision VMs/exit-nodes on public cloud
- Provision to Packet.com
- Provision to DigitalOcean
- Provision to Scaleway
- Automatically update Service type LoadBalancer with a public IP
- Tunnel L7
- In-cluster Role, Dockerfile and YAML files
- Raspberry Pi / armhf build and YAML file
- ARM64 (Graviton/Odroid/Packet.com) Dockerfile/build and K8s YAML files
- Ignore Services with the `dev.inlets.manage=false` annotation
- Garbage collect hosts when Service or CRD is deleted
- CI with Travis (use openfaas-incubator/openfaas-operator as a sample)
- `inlets-pro` for TCP traffic
- `wss://` for control-port using self-signed certs or LetsEncrypt and nip.io
- Move control-port and `/tunnel` endpoint to high port i.e. `31111` and make it configurable in the helm chart
- Provision to AWS EC2
- Provision to GCP
- Provision to Civo
Inlets tunnels HTTP traffic at L7, so the inlets-operator can be used to tunnel HTTP traffic. A new project I'm working on called inlets-pro tunnels any TCP traffic at L4 i.e. Mongo, Redis, NATS, SSH, TLS, whatever you like.
Inlets is listed on the Cloud Native Landscape as a Service Proxy
- inlets - open-source L7 HTTP tunnel and reverse proxy
- inlets-pro - L4 TCP load-balancer
- inlets-operator - deep integration for inlets in Kubernetes, expose Service type LoadBalancer
- inletsctl - CLI tool to provision exit-nodes for use with inlets or inlets-pro
`inlets` is made available free of charge, but you can support its ongoing development through GitHub Sponsors
This video demo shows a single-node VM running on k3s on Packet.com, and the inlets exit node also being provisioned on Packet's infrastructure.
See an alternative video showing my cluster running with KinD on my Mac and the exit node being provisioned on DigitalOcean:
Running in-cluster, using DigitalOcean for the exit node
Note: this example is now multi-arch, so it's valid for all supported architectures (x86_64, armhf and ARM64).
You can also run the operator in-cluster. A ClusterRole is used, since Services can be created in any namespace and may need a tunnel.
```sh
# Create a secret to store the access token
kubectl create secret generic inlets-access-key \
  --from-literal inlets-access-key="$(cat ~/Downloads/do-access-token)"

kubectl apply -f ./artifacts/crd.yaml

# Apply the operator deployment and RBAC role
kubectl apply -f ./artifacts/operator-rbac.yaml
kubectl apply -f ./artifacts/operator.yaml
```
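After applying the manifests, you can check that the operator came up. This is a sketch: the deployment name `inlets-operator` is an assumption based on the manifests above, so adjust the name and namespace to match your `operator.yaml`.

```sh
# Wait for the operator Deployment to become ready
# (deployment name is an assumption; check your operator.yaml)
kubectl rollout status deploy/inlets-operator

# Tail the operator's logs to watch exit-nodes being provisioned
kubectl logs deploy/inlets-operator --follow
```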
Running on a Raspberry Pi
Use the same commands as described in the section above.
There used to be a separate deployment file, `operator-armhf.yaml`. Since version `0.2.7`, Docker images are built for multiple architectures under the same tag, which means there is now just one deployment file, `operator.yaml`, that can be used on all supported architectures.
Get a LoadBalancer provided by inlets
```sh
kubectl run nginx-1 --image=nginx --port=80 --restart=Always
kubectl run nginx-2 --image=nginx --port=80 --restart=Always

kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
kubectl expose deployment nginx-2 --port=80 --type=LoadBalancer

kubectl get svc

kubectl get tunnel/nginx-1-tunnel -o yaml

kubectl logs deploy/nginx-1-tunnel-client
```
Check the IP of the LoadBalancer and then access it via the Internet.
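As a sketch, assuming the `nginx-1` Service from above and that the exit-node has finished provisioning, the public IP can be read straight out of the Service status and tested with curl:

```sh
# Extract the public IP that the operator wrote back to the Service
IP=$(kubectl get svc nginx-1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Fetch the nginx welcome page over the tunnel from the Internet
curl -s "http://$IP/"
```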
Example with OpenFaaS: make sure you give the port a name of `http`, otherwise a default of `80` will be used incorrectly.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway
  namespace: openfaas
  labels:
    app: gateway
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31112
  selector:
    app: gateway
  type: LoadBalancer
```
By default, the operator will create a tunnel for every Service of type LoadBalancer.
To ignore a Service such as `traefik`, type in:

```sh
kubectl annotate svc/traefik -n kube-system dev.inlets.manage=false
```
You can also set the operator to ignore Services by default and only manage them when the annotation is set to true. To do this, run the operator with the appropriate flag.
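When running in that mode, a Service could be opted in explicitly, as sketched below. The annotation key comes from the `traefik` example above; the value `true` is an assumption, mirroring the `false` used to opt out.

```sh
# Opt a Service in to tunnel management explicitly
# (the value "true" is assumed as the counterpart of "false" above)
kubectl annotate svc/nginx-1 dev.inlets.manage=true
```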
The operator deployment is in the `kube-system` namespace.

```sh
kubectl logs deploy/inlets-operator -n kube-system -f
```
| Provider | Price per month | Price per hour | OS image | CPU | Memory | Boot time |
| --- | --- | --- | --- | --- | --- | --- |
| Digital Ocean | $5 | ~$0.0068 | Ubuntu 16.04 | 1 | 512MB | ~20-30s |
Contributions are welcome, see the CONTRIBUTING.md guide.
Similar projects / products and alternatives
- metallb - open source LoadBalancer for private Kubernetes clusters, no tunnelling.
- inlets - inlets provides an L7 HTTP tunnel for applications through the use of an exit node, it is used by the inlets operator
- inlets pro - L4 TCP tunnel, which can tunnel any TCP traffic and is on the roadmap for the inlets-operator
- Cloudflare Argo - paid SaaS product from Cloudflare for Cloudflare customers and domains - K8s integration available through Ingress
- ngrok - a popular tunnelling tool, restarts every 7 hours, limits connections per minute, paid SaaS product with no K8s integration available