
Support access not only to kube-apiserver but also to other services in the managed cluster #53

Closed
xuezhaojun opened this issue Jan 19, 2022 · 25 comments

@xuezhaojun
Member

Is it possible for cluster-proxy to support the above feature?

Currently, the proxy-agent directs all traffic to the kube-apiserver through this ExternalName-type Service:

func newClusterService(namespace, name string) *corev1.Service {
	const nativeKubernetesInClusterService = "kubernetes.default.svc.cluster.local"
	return &corev1.Service{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "v1",
			Kind:       "Service",
		},
		ObjectMeta: metav1.ObjectMeta{
			Namespace: namespace,
			Name:      name,
		},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: nativeKubernetesInClusterService,
		},
	}
}

But if we do the following:

  1. Add the target service's hostname to the proxy-agent's "agent-identifiers" flag, which makes sure requests go to the desired managed cluster through the tunnel.
  2. Add another ExternalName Service pointing to the target service.

Then another target service on the managed cluster could be accessed by users on the hub as well, as the sketch below illustrates.
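For step 2, the extra Service might look like the following (a minimal sketch; the service name "search", the agent-addon namespace, and the target address are illustrative assumptions, not taken from the current code):

apiVersion: v1
kind: Service
metadata:
  name: search                                    # illustrative; would need to match an agent identifier
  namespace: open-cluster-management-agent-addon  # assumed agent install namespace
spec:
  type: ExternalName
  externalName: search.default.svc.cluster.local  # the real target service on the managed cluster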

@showeimer

Hi @yue994488! Would you be able to evaluate the technical feasibility of this enhancement request, its use case, and how much work it may take?

@xuezhaojun
Member Author

xuezhaojun commented Mar 1, 2022

I have a draft idea about this: we can add a "map"-structured services field to ManagedProxyConfiguration:

apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyConfiguration
metadata:
  name: cluster-proxy
spec:
  services: # Here!! This is what we need to add.
    search: search.namespace.svc
    ui: ui.namespace.svc
  authentication:
    dump:
      secrets: {}
    signer:
      type: SelfSigned
  proxyServer:
    image: quay.io/open-cluster-management.io/cluster-proxy:latest
    ...

The key represents the service we want to access, and the value is the respective service address.

The agent will generate several ExternalName Services based on this map structure.

And also register the proxy-agent with more agent-identifiers:

  • cluster1-search
  • cluster1-ui

And users can use cluster1-search as the request hostname to access the search service on the agent (managed cluster), as in the sketch below.

  • For example: https://cluster1-search/user/xuezhao would send the request to the search service on cluster1 and look up user xuezhao's information.
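For illustration, the extra identifiers might be wired into the proxy-agent's container flags like this (a hypothetical Deployment fragment; the host= format follows the konnectivity agent's --agent-identifiers flag convention, everything else is assumed):

containers:
- name: proxy-agent
  args:
  - --agent-identifiers=host=cluster1&host=cluster1-search&host=cluster1-ui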

Is this making sense? Please share more advice, thanks! @qiujian16 @yue9944882

@qiujian16
Member

FYI @yue9944882, this is the case that we discussed today.

@qiujian16
Member

@xuezhaojun I do not think this should be a cluster-scoped configuration; I wonder whether it should be a per-cluster or per-clusterset configuration. Also, it is not quite formal to use cluster1-search as the hostname; it could probably be search.default.cluster.cluster1 instead.

@xuezhaojun
Member Author

Agree

@yue9944882
Member

In the current implementation, we have an implicit hostname resolver from <cluster name> to "kubernetes.default" in each cluster, which works by applying an ExternalName-type Service. It makes sense to allow more extended resolvers in our addon. How about we add the following extensions to the configuration:

apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyConfiguration
metadata:
  name: cluster-proxy
spec:
  serviceResolvers:
  - hostnameTemplate: "$cluster"
    namespace: default
    name: kubernetes
  - hostnameTemplate: "mySvc.$cluster"
    namespace: default
    name: my-svc
  ...
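For context, the first entry corresponds to today's implicit resolver: the agent already renders it into an ExternalName Service on the managed cluster, as in newClusterService quoted at the top of this issue. Roughly (the namespace here is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: cluster1                                  # the managed cluster's name
  namespace: open-cluster-management-agent-addon  # assumed agent install namespace
spec:
  type: ExternalName
  externalName: kubernetes.default.svc.cluster.local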

@xuezhaojun
Member Author

In the current implementation, we have an implicit hostname resolver from <cluster name> to "kubernetes.default" ... it makes sense to allow more extended resolvers in our addon. (the serviceResolvers proposal quoted from the previous comment)

Should I implement a prototype based on this design?

@yue9944882
Member

@xuezhaojun that would be really helpful, thanks!

@xuezhaojun
Member Author

/assign @xuezhaojun

@xuezhaojun
Member Author

xuezhaojun commented Mar 9, 2022

@xuezhaojun I do not think this should be a cluster-scoped configuration; I wonder whether it should be a per-cluster or per-clusterset configuration. Also, it is not quite formal to use cluster1-search as the hostname; it could probably be search.default.cluster.cluster1 instead.

@qiujian16 I'm not sure whether it should be a "per cluster" configuration.

Say we have 2 managed clusters, clusterA and clusterB; only clusterA has serviceA, and we want to access serviceA.
If the configuration is per-cluster, the externalName-serviceA should be deployed only on clusterA.
If the configuration is cluster-scoped, the externalName-serviceA would be deployed on both clusterA and clusterB.

But it does no harm if clusterB has an externalName-serviceA on it, and a per-cluster configuration adds some complexity to the design.

@qiujian16
Member

The issue is who should be able to set which services are accessible on the spoke. Let's say a user can only access clusterA and wants to expose a service on the proxy; he/she cannot do this, because it requires permission to update the cluster-scoped configuration.

@xuezhaojun
Member Author

The issue is who should be able to set which services are accessible on the spoke. Let's say a user can only access clusterA and wants to expose a service on the proxy; he/she cannot do this, because it requires permission to update the cluster-scoped configuration.

I see. That means we are assuming that a user who can access clusterA also has the permission to "enable" other services on clusterA to be accessible via cluster-proxy.

@xuezhaojun
Member Author

xuezhaojun commented Mar 9, 2022

In other words, it also makes sense for the configuration to be cluster-scoped, because the proxy-server itself is cluster-scoped. Even if a user only sets serviceA on clusterA, it is still a change visible to every user: although other users cannot pass the auth check, they can still send requests to this new service host.
So if a user of clusterA wants to access serviceA, he needs to ask for admin permission to do so.

@xuezhaojun
Member Author

Another issue: a user who wants to access serviceA on clusterA may not need access to the kube-apiserver on clusterA.

@xuezhaojun
Member Author

xuezhaojun commented Mar 9, 2022

@xuezhaojun I do not think this should be a cluster-scoped configuration; I wonder whether it should be a per-cluster or per-clusterset configuration. Also, it is not quite formal to use cluster1-search as the hostname; it could probably be search.default.cluster.cluster1 instead.

I found another issue: if the hostname were search.default.cluster.cluster1, then this host would have to correspond to an ExternalName Service in cluster1.

But we cannot have dots in a Service name, so we cannot set the hostname to search.default.cluster.cluster1.

@qiujian16
Member

Another issue: a user who wants to access serviceA on clusterA may not need access to the kube-apiserver on clusterA.

I think it is fine. We do not have fine-grained RBAC for different services on the spoke; a user is supposed to be the admin of clusterA.

@xuezhaojun
Member Author

Another issue: a user who wants to access serviceA on clusterA may not need access to the kube-apiserver on clusterA.

I got an idea to deal with this (a sketch follows the list):

  1. Create a namespace called "cluster1" on the managed cluster "cluster1".
  2. Create an ExternalName Service "search" (for example) in the cluster1 namespace, whose externalName field points to search.default.svc (say the search server is deployed in the default namespace).
  3. Now the user can use the hostname search.cluster1 to access this service.
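A minimal sketch of these steps, applied on the managed cluster cluster1 (assuming, as above, that the real search service lives in the default namespace):

apiVersion: v1
kind: Namespace
metadata:
  name: cluster1          # named after the managed cluster
---
apiVersion: v1
kind: Service
metadata:
  name: search
  namespace: cluster1
spec:
  type: ExternalName
  externalName: search.default.svc  # the real search service

With these applied, search.cluster1 resolves inside cluster1 through the cluster DNS search path (search.cluster1 -> search.cluster1.svc.cluster.local -> search.default.svc).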

@qiujian16
Member

What if you have two services with the same name but in different namespaces on the spoke?

@xuezhaojun
Member Author

What if you have two services with the same name but in different namespaces on the spoke?

That would require a map with customized names, such as search1 for the search in namespace1 and search2 for the search in namespace2.

@xuezhaojun
Member Author

xuezhaojun commented Mar 25, 2022

Currently, we are using an ExternalName Service to map the cluster name to kubernetes.default.svc.

But in this mode, we cannot forward a hostname like serviceA.namespaceA.clusterA to the target service we want.

Recently, I tried using CoreDNS in place of the ExternalName Service to do the hostname mapping.

To do this, we need to run a CoreDNS server as a sidecar in the same pod as the proxy-agent: #89

Two defects of this approach are (a sketch of the sidecar's rewrite rule follows the list):

  1. We need to run as root to expose DNS on port 53, which could cause security issues on some platforms.
  2. We need to provide another CoreDNS image.
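A rough sketch of the rewrite rule such a sidecar could carry (assumptions: CoreDNS's rewrite and forward plugins; the ConfigMap name, the regex, and the clusterA suffix are illustrative, and the real wiring lives in #89):

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-agent-coredns   # illustrative name
data:
  Corefile: |
    # Listening on :53 is what forces the privileged-port issue in defect 1.
    .:53 {
        # serviceA.namespaceA.clusterA -> serviceA.namespaceA.svc.cluster.local
        rewrite name regex (.*)\.(.*)\.clusterA {1}.{2}.svc.cluster.local
        forward . /etc/resolv.conf
    }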

@xuezhaojun
Member Author

xuezhaojun commented Mar 31, 2022

Here is another approach @qiujian16

1

First, we add a spec field in ManagedProxyConfiguration to specify the services we want to access:

apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyConfiguration
metadata:
  name: cluster-proxy
spec:
  authentication:
    dump:
      secrets: {}
    signer:
      type: SelfSigned
  services: # ------- !!! This is what we add ---------
  - name: search
    namespace: default
    cluster: cluster1 # represent the managed cluster's name
  - name: search
    namespace: search
    cluster: cluster1
  - name: foo
    namespace: default
    cluster: cluster2
  proxyServer:
  ...

2

The controller will create a new kind of resource named clusterproxyentrypoint, owned by the ManagedProxyConfiguration, in the namespace of every managed cluster specified in the services field.

For example, if we run:

kubectl get clusterproxyentrypoint -n cluster1

we get:

NAME                      AGE   SERVICE   NAMESPACE   MANAGEDCLUSTER   URL
search.search.cluster1    1h    search    search      cluster1         https://<uid>/
search.default.cluster1   1h    search    default     cluster1         https://<uid>/

if we run:

kubectl get clusterproxyentrypoint -n cluster2

we get:

NAME                   AGE   SERVICE   NAMESPACE   MANAGEDCLUSTER   URL
foo.default.cluster2   1h    foo       default     cluster2         https://<uid>/
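For concreteness, a single entry might look roughly like the following (a purely hypothetical manifest; the group/version and field names are assumptions inferred from the columns above):

apiVersion: proxy.open-cluster-management.io/v1alpha1  # assumed group/version
kind: ClusterProxyEntrypoint
metadata:
  name: foo.default.cluster2
  namespace: cluster2
spec:
  service: foo
  namespace: default
  managedCluster: cluster2
status:
  url: https://<uid>/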

3

The client could use code like the following to access a target service:

import (
	"context"
	"net"
	"time"

	"google.golang.org/grpc"
	grpccredentials "google.golang.org/grpc/credentials"
	"google.golang.org/grpc/keepalive"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	konnectivity "sigs.k8s.io/apiserver-network-proxy/konnectivity-client/pkg/client"
)

// Assumes proxyServerHost, proxyServerPort, tlsCfg, cfg (a *rest.Config for
// the hub), and hubKubeclient are prepared by the caller.
dialerTunnel, err := konnectivity.CreateSingleUseGrpcTunnel(
	context.TODO(),
	net.JoinHostPort(proxyServerHost, proxyServerPort),
	grpc.WithTransportCredentials(grpccredentials.NewTLS(tlsCfg)),
	grpc.WithKeepaliveParams(keepalive.ClientParameters{
		Time: time.Second * 5,
	}),
)
if err != nil {
	panic(err)
}

// ----- !!! Here is what we add !!! -----
// There would be a pkg for clients to get the host conveniently.
hostOfService, err := clusterProxyUtils.GetHost(hubKubeclient, "search", "default", "cluster1")
if err != nil {
	panic(err)
}

cfg.Host = hostOfService
// TODO: flexible client-side TLS server name validation
cfg.TLSClientConfig.Insecure = true
cfg.TLSClientConfig.CAData = nil
cfg.TLSClientConfig.CAFile = ""
cfg.Dial = dialerTunnel.DialContext
client := kubernetes.NewForConfigOrDie(cfg)

ns, err := client.CoreV1().
	Namespaces().
	Get(context.TODO(), "default", metav1.GetOptions{})
if err != nil {
	panic(err)
}

PS

And if needed, we can add another proxy layer in front of the proxy-server to map https://service.namespace.managedcluster to https://<uid>.

@xuezhaojun
Member Author

Recently, I tried using CoreDNS in place of the ExternalName Service to do the hostname mapping ... (the CoreDNS sidecar comment quoted from above)

Maybe https://coredns.io/explugins/lighthouse/ can help with the DNS part.

@xuezhaojun
Member Author

Here is another approach ... (the services / clusterproxyentrypoint proposal quoted from the previous comment)

Update: now I'm going to implement a prototype based on the aforementioned design. It may take a few days.

@xuezhaojun
Member Author

The feature is done. Issue closed.
/close

@openshift-ci

openshift-ci bot commented Oct 24, 2022

@xuezhaojun: Closing this issue.

In response to this:

The feature is done. Issue closed.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot closed this as completed Oct 24, 2022
xuezhaojun added a commit to xuezhaojun/cluster-proxy that referenced this issue Apr 23, 2023
… "PreferredDuringScheduling"}'`. (open-cluster-management-io#53)

Signed-off-by: xuezhaojun <zxue@redhat.com>