Support access not only kube-apiserver but also other services in the managed cluster #53
Hi @yue994488! Would you be able to evaluate the technical feasibility of this enhancement request, the use case, and how much work it may be?

---
I have a draft idea about this: we can add a "map" struct:

```yaml
apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyConfiguration
metadata:
  name: cluster-proxy
spec:
  services: # Here!! This is what we need to add.
    search: search.namespace.svc
    ui: ui.namespace.svc
  authentication:
    dump:
      secrets: {}
    signer:
      type: SelfSigned
  proxyServer:
    image: quay.io/open-cluster-managment.io/cluster-proxy:latest
  ...
```

The key represents the service we want to access, and the value is the respective service address. The agent will generate several ExternalName Services based on this map structure, and also register the proxy-agent with more agent-identifiers.

And the user can use …

Is this making sense? Please share more advice, thanks! @qiujian16 @yue9944882
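For illustration, here is a sketch of the kind of ExternalName Service the agent might generate for the `search` entry above; the metadata name and namespace here are assumptions, not part of the proposal:

```yaml
# Hypothetical ExternalName Service generated by the agent for the
# "search" entry; the name and namespace are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: search
  namespace: open-cluster-management-cluster-proxy
spec:
  type: ExternalName
  externalName: search.namespace.svc
```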
---

FYI @yue9944882 this is the case that we discussed today.

---
@xuezhaojun I do not think it could be a cluster-scoped configuration. I wonder whether this should be a per-cluster or per-clusterset configuration. Also, it is not quite formal to use …

---
Agree

---
In the current implementation, we have an implicit hostname resolver from …

Should I implement a prototype based on this design?

---

@xuezhaojun that would be really helpful, thanks!

---
/assign @xuezhaojun |
---

@qiujian16 I'm not sure whether it should be a "per cluster" configuration. Say we have 2 managed clusters, clusterA and clusterB; only clusterA has serviceA, and we want to access serviceA. But it does no harm if clusterB has an ExternalName serviceA on it. And per-cluster configuration adds some complexity to the design.

---
The issue is who should be able to set which service is accessible on the spoke. Say a user who can only access clusterA wants to set a service on the proxy; he/she cannot, because doing so needs the permission to update this cluster-scoped configuration.

---
I see, that means we are assuming a user who can access clusterA also has the permission to "enable" other services on clusterA to be accessible via cluster-proxy.

---
In other words, it also makes sense if it is cluster-scoped, because the proxy-server itself is cluster-scoped. Even if a user only sets serviceA on clusterA, this is still a change visible to every user: although they cannot pass the auth check, they can still send requests to this new service host.

---
And another issue is that a user who wants to access serviceA on clusterA may not need to access the kube-apiserver on clusterA.

---
I found another issue with the hostname: we can't have a dot in a Service name, so we can't set the hostname to …
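For background on why the dot is a problem: Kubernetes Service names must be valid RFC 1035 DNS labels (lowercase alphanumerics and hyphens, starting with a letter), which excludes dots. A minimal stdlib sketch of that constraint:

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1035 matches an RFC 1035 DNS label: starts with a lowercase letter,
// then lowercase alphanumerics or '-', ending with an alphanumeric.
var dns1035 = regexp.MustCompile(`^[a-z]([-a-z0-9]*[a-z0-9])?$`)

// isDNS1035Label reports whether name could be used as a Service name.
func isDNS1035Label(name string) bool {
	return len(name) <= 63 && dns1035.MatchString(name)
}

func main() {
	fmt.Println(isDNS1035Label("search"))         // plain label: valid
	fmt.Println(isDNS1035Label("search.default")) // contains a dot: invalid
}
```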
---

I think it is fine. We do not have fine-grained RBAC for different services on the spoke; a user is supposed to be the admin of clusterA.
---

I got an idea to deal with this.

---
What if you have two services with the same name but in different namespaces on the spoke?
---

That would require a map with customized names, such as …
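A hedged sketch of what such a customized-name map could look like (the keys here are illustrative, not a proposed naming scheme):

```yaml
# Illustrative only: customized keys disambiguate two services that share
# the name "search" but live in different namespaces on the spoke.
services:
  search-ns1: search.ns1.svc
  search-ns2: search.ns2.svc
```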
---
Currently, we are using the ExternalName to map … But with this mode, we can not forward a hostname like … Recently, I tried to use … To do this, we need to run a … Two defects of this approach are: …

---
Here is another approach @qiujian16

1. First we add spec fields in `ManagedProxyConfiguration`:

```yaml
apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyConfiguration
metadata:
  name: cluster-proxy
spec:
  authentication:
    dump:
      secrets: {}
    signer:
      type: SelfSigned
  services: # ------- !!! This is what we add ---------
  - name: search
    namespace: default
    cluster: cluster1 # represent the managed cluster's name
  - name: search
    namespace: search
    cluster: cluster1
  - name: foo
    namespace: default
    cluster: cluster2
  proxyServer:
  ...
```

2. The controller will create a new kind of resources (`ClusterProxyEntrypoint`) owned by the `ManagedProxyConfiguration`. For example, if we run the command:

```shell
kubectl get clusterproxyentrypoint -n cluster1
```

we get:

```
NAME                     AGE  SERVICE  NAMESPACE  MANAGEDCLUSTER  URL
search.search.cluster1   1h   search   search     cluster1        https://<uid>/
search.default.cluster1  1h   search   default    cluster1        https://<uid>/
```

If we run:

```shell
kubectl get clusterproxyentrypoint -n cluster2
```

we get:

```
NAME                  AGE  SERVICE  NAMESPACE  MANAGEDCLUSTER  URL
foo.default.cluster2  1h   foo      default    cluster2        https://<uid>/
```

3. The client could use code like the following to access a target service:

```go
dialerTunnel, err := konnectivity.CreateSingleUseGrpcTunnel(
	context.TODO(),
	net.JoinHostPort(proxyServerHost, proxyServerPort),
	grpc.WithTransportCredentials(grpccredentials.NewTLS(tlsCfg)),
	grpc.WithKeepaliveParams(keepalive.ClientParameters{
		Time: time.Second * 5,
	}),
)
if err != nil {
	panic(err)
}

// ----- !!! Here is what we add !!! -----
// There would be a pkg for the client to get the host conveniently.
hostOfService, err := clusterProxyUtils.GetHost(hubKubeclient, "search", "default", "cluster1")
if err != nil {
	panic(err)
}

cfg.Host = hostOfService
// TODO: flexible client-side tls server name validation
cfg.TLSClientConfig.Insecure = true
cfg.TLSClientConfig.CAData = nil
cfg.TLSClientConfig.CAFile = ""
cfg.Dial = dialerTunnel.DialContext

client := kubernetes.NewForConfigOrDie(cfg)
ns, err := client.CoreV1().
	Namespaces().
	Get(context.TODO(), "default", metav1.GetOptions{})
if err != nil {
	panic(err)
}
fmt.Printf("got namespace %s\n", ns.Name)
```

PS: And if needed, we can add another proxy layer before …
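To make step 2 concrete, the new resource might look roughly like the following; every field name here is an assumption inferred from the `kubectl` output above, not a settled API:

```yaml
# Hypothetical ClusterProxyEntrypoint created by the controller for the
# "search" service in the "default" namespace of cluster1.
apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ClusterProxyEntrypoint
metadata:
  name: search.default.cluster1
  namespace: cluster1 # the managed cluster's namespace on the hub
spec:
  service: search
  serviceNamespace: default
  managedCluster: cluster1
status:
  url: https://<uid>/
```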
---

Maybe https://coredns.io/explugins/lighthouse/ can help with the DNS part.

---
Update: now I'm going to implement a prototype based on the aforementioned design. It may take a few days.

---
The feature is done. Issue closed.

---
@xuezhaojun: Closing this issue. In response to this:

> The feature is done. Issue closed.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

---
… "PreferredDuringScheduling"}'`. (open-cluster-management-io#53) Signed-off-by: xuezhaojun <zxue@redhat.com>
Is it possible for cluster-proxy to support the above feature?

Currently, proxy-agent would direct all traffic into kube-apiserver through this ExternalName type service:

cluster-proxy/pkg/proxyagent/agent/agent.go (lines 447 to 463 in c83c9c2)

But if we do the following: …

Therefore, another target service on the managed cluster could be accessed by the user on the hub as well.