Cluster Proxy

What is Cluster Proxy?

Cluster Proxy is a pluggable OCM addon, built on the extensibility provided by addon-framework, that automates the installation of apiserver-network-proxy on both the hub cluster and the managed clusters. The network proxy establishes reverse proxy tunnels from the managed clusters to the hub cluster, so that clients on the hub network can access services in the managed clusters' networks even when all the clusters are isolated in different VPCs.

Cluster Proxy consists of two components:

  • Addon-Manager: Manages the installation of the proxy servers (i.e. the proxy ingress) in the hub cluster.

  • Addon-Agent: Manages the installation of the proxy agents for each managed cluster.

The overall architecture is shown below:

[Architecture diagram]

Getting started

Prerequisite

  • OCM registration (>= 0.5.0)

Steps

Installing via Helm Chart

  1. Add the helm repo:
$ helm repo add ocm https://openclustermanagement.blob.core.windows.net/releases/
$ helm repo update
$ helm search repo ocm/cluster-proxy
NAME                       	CHART VERSION	APP VERSION	DESCRIPTION                   
ocm/cluster-proxy          	<..>       	    1.0.0      	A Helm chart for Cluster-Proxy
  2. Install the helm chart:
$ helm install \
    -n open-cluster-management-addon --create-namespace \
    cluster-proxy ocm/cluster-proxy 
$ kubectl -n open-cluster-management-cluster-proxy get pod
NAME                                           READY   STATUS        RESTARTS   AGE
cluster-proxy-5d8db7ddf4-265tm                 1/1     Running       0          12s
cluster-proxy-addon-manager-778f6d679f-9pndv   1/1     Running       0          33s
...
  3. The addon will be installed automatically to your registered clusters; verify the installation:
$ kubectl get managedclusteraddon -A | grep cluster-proxy
NAMESPACE         NAME                     AVAILABLE   DEGRADED   PROGRESSING
<your cluster>    cluster-proxy            True                   

Usage

By default, the proxy servers run in gRPC mode, so proxy clients are expected to dial through the tunnels using the konnectivity-client. Konnectivity is the technique underlying Kubernetes' egress-selector feature; an example of a konnectivity client is available in the apiserver-network-proxy repository.

In code, proxying to a managed cluster is simply a matter of overriding the dialer of the original Kubernetes client config object, e.g.:

  // instantiate a gRPC proxy dialer (konnectivity is
  // "sigs.k8s.io/apiserver-network-proxy/konnectivity-client/pkg/client")
  tunnel, err := konnectivity.CreateSingleUseGrpcTunnel(
      context.TODO(),
      <proxy service>,
      grpc.WithTransportCredentials(grpccredentials.NewTLS(proxyTLSCfg)),
  )
  if err != nil {
      return err
  }
  cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
  if err != nil {
      return err
  }
  // The managed cluster's name; the proxy resolves it to the
  // cluster's apiserver.
  cfg.Host = clusterName
  // Override the default TCP dialer with the tunnel's dialer.
  cfg.Dial = tunnel.DialContext

Performance

Below is the result of network bandwidth benchmarking via goben, with and without Cluster-Proxy (i.e. apiserver-network-proxy). Proxying through the tunnel costs roughly half the throughput, so it is recommended to avoid transferring data-intensive traffic over the proxy.

Bandwidth   Direct     over Cluster-Proxy
Read        902 Mbps   461 Mbps
Write       889 Mbps   428 Mbps
