
find namespaces incorrectly swapped client context and incorrectly using pinniped proxy #5755

Closed
dlaloue-vmware opened this issue Nov 29, 2022 · 3 comments · Fixed by #5940
Assignees: absoludity
Labels: component/apis-server (Issue related to kubeapps api-server), kind/bug (An issue that reports a defect in an existing feature)

Comments

@dlaloue-vmware
Collaborator

Describe the bug
In a multicluster environment, the UI needs to be able to find the namespaces that a logged-in user has access to for a given cluster. This is done via the context switcher in the header.
The backend method that fetches the namespaces does so by first using the logged-in user's permissions; if the user does not have permission to list namespaces, the code should retry with the provided service account token (a sketch of the intended flow follows the list below).

There are several issues with the current code:

  • the code has incorrectly swapped the two client configs, leading to checks with the wrong permissions
  • when using the service account token, the code should not go through pinniped-proxy (pinniped will not recognize the service account token)
  • when there is an issue fetching namespaces, the user is logged out of the UI and cannot log back in until the session has timed out (due to SSO)
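
For context, here is a minimal sketch of the intended flow, written against plain client-go. The names and signatures are illustrative only, not the actual Kubeapps code: try the logged-in user's client first, and only fall back to a client built from the configured service account token, which must talk to the cluster directly rather than via pinniped-proxy.

package sketch

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    k8serrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// listNamespaces is a hypothetical helper: it first tries the user's own
// client (which may be routed through pinniped-proxy), and only if that is
// forbidden does it fall back to the service account client, which should
// talk to the cluster API server directly, since pinniped cannot exchange a
// plain service account token.
func listNamespaces(ctx context.Context, userClient, serviceAccountClient kubernetes.Interface) (*corev1.NamespaceList, error) {
    namespaces, err := userClient.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
    if err == nil {
        return namespaces, nil
    }
    if !k8serrors.IsForbidden(err) {
        return nil, err
    }
    // The user lacks permission: retry with the configured service account.
    return serviceAccountClient.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
}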

To Reproduce
To reproduce these issues, the logged-in user should not have the cluster-admin role, but should only be a namespace admin.

Desktop (please complete the following information):

  • kubeapps 12.1.1 / 2.6.1
@dlaloue-vmware dlaloue-vmware added the kind/bug An issue that reports a defect in an existing feature label Nov 29, 2022
@absoludity absoludity self-assigned this Jan 23, 2023
@absoludity
Contributor

We have a user who seems to be hitting this issue, so I'm prioritising it :)

@absoludity
Contributor

I can reproduce this locally, even with an admin user. Steps to reproduce:

  1. Set up a two-cluster dev environment with pinniped
  2. Log in successfully as kubeapps-operator@example.com
  3. Switch to the second cluster and be immediately logged out (even though the user has cluster-admin)

Checking the network requests in the browser tab shows

grpc-message | Authorization required to list the Namespaces 'all' due to 'Unauthorized'

followed by the sign-out action (which we trigger on an unauthorized response).


@ppbaena ppbaena added the component/pinniped-proxy Issue related to kubeapps integration with pinniped-proxy label Jan 24, 2023
@absoludity absoludity added component/apis-server Issue related to kubeapps api-server and removed component/pinniped-proxy Issue related to kubeapps integration with pinniped-proxy labels Jan 25, 2023
@absoludity
Contributor

I've been working through the code in this section of functionality (since it changed significantly 3 months ago) to understand where the issues lie. From what I've understood so far, the main issue is that the configured service account for a cluster (which is required for users who do not have perms to list namespaces on an additional cluster) is never used. Example below.

The backup check for namespaces is correct for single cluster but incorrect for multicluster

Kubeapps initially checks if it can use the user's credential (an id_token) to obtain a list of namespaces (I'll come back to this). If this fails (because the user does not have permission to list namespaces), Kubeapps should try listing the namespaces using a service account. Now, if the selected cluster is the one on which Kubeapps is installed, we just use the pod's own service account, which is set up to have access - great, this works as configured - but if it is a different cluster, we should be using the service account from the clusters configuration (when provided).

But in reality, the current code only ever uses the pod service account in this backup case. This is even mentioned in the comment in the code:

// The user doesn't have permissions to list namespaces, then use the current pod's service account
inClusterClient, err := inClusterClientGetter()
if err != nil {
    return nil, status.Errorf(codes.Internal, "unable to get the in-cluster k8s client: '%v'", err)
}
namespaces, err = inClusterClient.CoreV1().Namespaces().List(backgroundCtx, metav1.ListOptions{})
if err != nil && k8sErrors.IsForbidden(err) {
    // Not even the configured kubeapps-apis service account has permission
    return nil, err
}

So, when switching to a different cluster, the list of namespaces obtained before filtering for the user is the list of namespaces from the cluster on which Kubeapps is installed. This was extremely difficult to determine (for me) because in our dev environment we have the same namespace kubeapps-user-namespace on both clusters. To reproduce the issue you can either print out the namespaces in a debug message to verify, or, without changing the code, set up two different namespaces on the two clusters:

kubectl --kubeconfig=$HOME/.kube/kind-config-kubeapps-for-pinniped create namespace kubeapps-cluster-default
namespace/kubeapps-cluster-default created

kubectl --kubeconfig=$HOME/.kube/kind-config-kubeapps-for-pinniped-additional create namespace kubeapps-cluster-additional
namespace/kubeapps-cluster-additional created

Create a rolebinding for kubeapps-user in the new namespace on each cluster:

kubectl create rolebinding kubeapps-user --clusterrole=edit --user=kubeapps-user@example.com --namespace=kubeapps-cluster-default --kubeconfig=$HOME/.kube/kind-config-kubeapps-for-pinniped                                         
rolebinding.rbac.authorization.k8s.io/kubeapps-user created

kubectl create rolebinding kubeapps-user --clusterrole=edit --user=kubeapps-user@example.com --namespace=kubeapps-cluster-additional --kubeconfig=$HOME/.kube/kind-config-kubeapps-for-pinniped-additional
rolebinding.rbac.authorization.k8s.io/kubeapps-user created

Verify that kubeapps-user can access secrets in the new namespace on each cluster:

kubectl auth can-i --namespace=kubeapps-cluster-default get secrets --as=kubeapps-user@example.com --kubeconfig=$HOME/.kube/kind-config-kubeapps-for-pinniped
yes

kubectl auth can-i --namespace=kubeapps-cluster-additional get secrets --as=kubeapps-user@example.com --kubeconfig=$HOME/.kube/kind-config-kubeapps-for-pinniped-additional
yes

Check in the Kubeapps UI and notice that the namespace for the default cluster is displayed, but when switching to the additional cluster, the namespace for that cluster is not.

The initial request using the user's credentials

As mentioned, the initial request for the list of namespaces on a cluster should first be attempted using the user's credentials. In the code, though, it is the clusterTypedClientGetter that is used:

typedClient, err := clusterTypedClientGetter()
if err != nil {
    return nil, status.Errorf(codes.Internal, "unable to get the k8s client: '%v'", err)
}
backgroundCtx := context.Background()
// Try to list namespaces first with the user token
namespaces, err := typedClient.CoreV1().Namespaces().List(backgroundCtx, metav1.ListOptions{})

which is generated from the s.clusterServiceAccountClientGetter property, whose docstring says:

// clusterServiceAccountClientGetter gets a client getter with service account for additional clusters
clusterServiceAccountClientGetter clientgetter.ClientProviderInterface

So at first I expected this not to work, as it is configured to go via pinniped-proxy (as the OP mentioned), but as it turns out, the client is created with the request context, which includes the user token, so the request still gets through and works (either succeeding if the user has permissions, or resulting in the expected 401 if not). It is just confusing that the cluster service account is configured to be used here for the user request that goes through pinniped, even though it is not used.
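
To illustrate why it still works, here is a simplified sketch (not the actual clientgetter code; the helper name is hypothetical) of a client built from the incoming request's bearer token. Whatever getter the call site names, the token that ends up on the wire is the user's id_token, so pinniped-proxy can still exchange it.

package sketch

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// newClientForToken builds a typed client whose requests carry the given
// bearer token. The host could be the cluster API server or the
// pinniped-proxy endpoint, depending on the cluster configuration.
func newClientForToken(host, caFile, bearerToken string) (kubernetes.Interface, error) {
    config := &rest.Config{
        Host:        host,
        BearerToken: bearerToken, // e.g. the user's id_token taken from the request context
        TLSClientConfig: rest.TLSClientConfig{
            CAFile: caFile,
        },
    }
    return kubernetes.NewForConfig(config)
}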

Proposal

So, with that in mind, I intend to update the code so that:

  1. The initial user request for the namespace listing uses the user client, not the cluster-service-account client,
  2. The backup request for the namespace listing uses the pod service account if the target is the cluster on which the pod is running (no change), and uses the configured cluster service account otherwise (change; a rough sketch is included at the end of this comment).

If I have time, I'll additionally try to simplify the code here (there are a lot of functions that create functions being passed around, which can be hard to reason about, IMO), and possibly add a multicluster CI test for this (one that doesn't use only the kubeapps-user-namespace, since that namespace exists on both clusters and so hides the issue).
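
A rough sketch of the proposed selection of the backup client (the function names are illustrative, not the actual Kubeapps code): after the user-token attempt fails with a Forbidden error, pick the service-account client according to which cluster is targeted.

package sketch

import "k8s.io/client-go/kubernetes"

// serviceAccountClientFor is a hypothetical helper that chooses which service
// account client to use for the backup namespace listing.
func serviceAccountClientFor(
    targetCluster, kubeappsCluster string,
    inClusterClientGetter func() (kubernetes.Interface, error),
    clusterServiceAccountClientGetter func(cluster string) (kubernetes.Interface, error),
) (kubernetes.Interface, error) {
    if targetCluster == kubeappsCluster {
        // The pod's own service account suffices for the cluster on which
        // Kubeapps is installed (no change to current behaviour).
        return inClusterClientGetter()
    }
    // For any additional cluster, use the service account token from the
    // clusters configuration, talking to that cluster directly rather than
    // via pinniped-proxy (the change).
    return clusterServiceAccountClientGetter(targetCluster)
}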

@ppbaena ppbaena added this to the Technical debt milestone Feb 1, 2023
absoludity added a commit that referenced this issue Feb 1, 2023
…es. (#5940)


### Description of the change


The main change here is to ensure that the request for namespaces uses
the *user* client first, then the service account client. For a detailed
analysis of the issue it is fixing, see my [comment on
5755](#5755 (comment))

While there, I have tried to clarify the code a little, so that it is
clearer when the user client is being used (it now matches the comments
again).

I also fixed an RBAC issue in the dev environment (so that
`kubeapps-user@example.com` is permitted to access, in addition to
`oidc:kubeapps-user@example.com`, since the former is what pinniped
uses).

### Benefits

Back to expected behavior when authenticated as a non-privileged user
with or without a service account token configured in Kubeapps' clusters
configuration.

### Possible drawbacks


### Applicable issues


- fixes #5755 

### Additional information


I've tested this pretty thoroughly locally with multiple clusters using
both unprivileged and privileged users (with additional log lines
showing exactly what token is being used when), but only in the
namespaces call-site.

---------

Signed-off-by: Michael Nelson <minelson@vmware.com>