
Unauthorized response in GetPackageRepositorySummaries #5999

Closed
mecampbellsoup opened this issue Feb 15, 2023 · 12 comments · Fixed by #6016
Assignees
Labels
component/apis-server Issue related to kubeapps api-server component/auth Issue related to kubeapps authentication (AuthN/AuthZ/RBAC/OIDC) kind/bug An issue that reports a defect in an existing feature

Comments


mecampbellsoup commented Feb 15, 2023

Invalid GetPackageRepositorySummaries response from the plugin helm.packages: rpc error: code = Unauthenticated desc = Authorization required to get the AppRepository 'all' due to 'Unauthorized'

We're trying to upgrade to kubeapps 2.6.3 (latest release) and I've encountered a strange bug when trying to load the Catalog.

Seemingly due to an Unauthorized GRPC response, the dashboard redirects back to / and I end up right where I started (i.e. main dashboard page viewing applications).

  1. Load kubeapps
  2. Click Catalog
  3. Redirected back to dashboard


Is there some new configuration now that we need to do to permit our users to view package repositories?

We have two AppRepositories configured in this cluster (by the way, what is the difference from the "PackageRepository" concept?):

(⎈ default:kubeapps)mcampbell-1 :: coreweave/k8s-services/dev ‹master*› » k neat get apprepository
apiVersion: v1
items:
- apiVersion: kubeapps.com/v1alpha1
  kind: AppRepository
  metadata:
    annotations:
      meta.helm.sh/release-name: kubeapps
      meta.helm.sh/release-namespace: kubeapps
    labels:
      app.kubernetes.io/instance: kubeapps
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: kubeapps
      helm.sh/chart: kubeapps-12.2.1-dev0
    name: ant-repo
    namespace: kubeapps
  spec:
    syncJobPodTemplate:
      spec:
        securityContext:
          runAsUser: 1001
    type: helm
    url: http://charts.tenant-rizzo.ord1.ingress.coreweave.cloud/
- apiVersion: kubeapps.com/v1alpha1
  kind: AppRepository
  metadata:
    annotations:
      meta.helm.sh/release-name: kubeapps
      meta.helm.sh/release-namespace: kubeapps
    labels:
      app.kubernetes.io/instance: kubeapps
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: kubeapps
      helm.sh/chart: kubeapps-12.2.1-dev0
    name: coreweave
    namespace: kubeapps
  spec:
    syncJobPodTemplate:
      spec:
        securityContext:
          runAsUser: 1001
    type: helm
    url: http://helm.corp.ingress.ord1.coreweave.com/
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
@mecampbellsoup

For more context: we have an auth proxy that sits in front of kubeapps and our k8s apiserver. Here is our cluster configuration:

kubeapps:
  clusters:
    - name: default
      apiServiceURL: https://cloud-app-kubernetes-ingress.cloud/k8s
      insecure: false
      certificateAuthorityData: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvekNDQWVlZ0F3SUJBZ0lSQVA0VkppbWxoallaUnhkRjkyTC9MbVV3RFFZSktvWklodmNOQVFFTEJRQXcKR1RFWE1CVUdBMVVFQXhNT1kyOXlaWGRsWVhabExuUmxjM1F3SGhjTk1qTXdNakF5TVRVek1ERTRXaGNOTXpNdwpNakF5TURNek1ERTRXakFaTVJjd0ZRWURWUVFERXc1amIzSmxkMlZoZG1VdWRHVnpkRENDQVNJd0RRWUpLb1pJCmh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTGl5U3lYN0V1dWI0ZytiTjFxUzUrT1RnclNhT0xaeEFwdGMKNWhkUXpYRWgvOFQ2YkNMMTdEb2lKVkExeGlZdXJFeDdHRnpFVWZFWWgvbDAyRjV4V01LMDRjUU9OQ1htRlNiNQpJMFhXTXpTWWpOaFpVdHZCOW5VMWlwZkUwSnI0Q2g4MEhwOUlmN2RLQXd1dHlMR2oxWGk0b2lHVmV0OFJ0akQyCjdBdERWS1JRa1BOSzVTSElyZjNxQzFwMEZTa1VwZUJ5bXJFSGNCZEF0ZTZnOXlsZDh3cU8zR2RGZjZWSW5BSHcKL0szcUtRK1VVUVY0dXRqWklKZ0JzTit4Sy9zanN0OUVlYloxWjZOTGx5YXIyT2p3bE1DdFVrUkhFZXFkRFp0bQo3cmw4ajZ0dmQzL1NQY0NRT2NQeWwwSDVxa0J4L3hUK0I3b0dpZmVrbm96QUxzay85V0VDQXdFQUFhTkNNRUF3CkRnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRkIveE5LTHkKaVVOSThIaGtJN0doZ2c4alI5QzJNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUFoMFFOQ0ZuSVhVZ04vT2RXUApseGFScWk1cjU4VitxK245eTZZRHV3QmZYTUgreFFOdHphR3pTdm9TQ0paMTd2YVNheWxsTkdEV0hRUTJMU0tZClQ2ME1ZTE1aZnJHQVc5ZjVBampmek5ySURCdU4xKzZ1NzZ0T3BDcVBCeEEzMTB4LzhxRFEyeWJqL0p3N0w0WGwKSjR1OXlkeldXYjJZM3ZEOTVqTi9qNCt6dytIcUx5aGs2Mjg2MEJXOCtNakErMkNYVFJYdERUUGRRcTR2aGtrdQoyWlZ2L2pWUFFBeXQ0NW1XYkZQMVdqNlhEZlpvdzE5STRqTjM4ZkQ5L1I4TVFKMk1pZjNRb05oT2MzRlU1TnhnCmp6R0Q5dDN3US9kVHVCZy9jWHRPUEh1RlJZcGM3bXVlT1JkajMweWdrSTJwM0wwcm95LzZvekVzUkpzWU4vWW8KZnFlagotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==

And here are logs from kubeapps-internal-kubeappsapis deployment from boot through the redirect described above:

(⎈ default:kubeapps)mcampbell-1 :: github/coreweave/kubeapps ‹v2.6.3*› » k logs -f deployments/kubeapps-internal-kubeappsapis
Found 2 pods, using pod/kubeapps-internal-kubeappsapis-9ffd56d6d-xrm85
I0215 16:08:56.062716       1 root.go:36] "kubeapps-apis has been configured with serverOptions" serverOptions={Port:50051 PluginDirs:[/plugins/helm-packages /plugins/resources] ClustersConfigPath:/config/clusters.conf PluginConfigPath:/config/kubeapps-apis/plugins.conf PinnipedProxyURL:http://kubeapps-internal-pinniped-proxy.kubeapps:3333 PinnipedProxyCACert: GlobalHelmReposNamespace:kubeapps UnsafeLocalDevKubeconfig:false QPS:50 Burst:100}
I0215 16:08:56.550892       1 server.go:101] +helm NewServer(globalPackagingCluster: [-], globalPackagingNamespace: [kubeapps], pluginConfigPath: [/config/kubeapps-apis/plugins.conf]
I0215 16:08:56.551234       1 server.go:112] +helm using custom config: [{{3 3 3} 300 }]
I0215 16:08:56.551276       1 server.go:124] +helm NewServer effective globalPackagingNamespace: [kubeapps]
I0215 16:08:56.552162       1 plugins.go:152] "Successfully registered plugin" pluginPath="/plugins/helm-packages/helm-packages-v1alpha1-plugin.so"
I0215 16:08:57.148016       1 server.go:119] +resources using custom config: [{{x-consumer-permissions r:ns-([a-z0-9-]+):base}}]
I0215 16:08:57.149001       1 plugins.go:152] "Successfully registered plugin" pluginPath="/plugins/resources/resources-v1alpha1-plugin.so"
I0215 16:08:57.149186       1 packages.go:49] Registered name:"helm.packages" version:"v1alpha1" for core.packaging.v1alpha1 packages aggregation.
I0215 16:08:57.149540       1 repositories.go:50] Registered name:"helm.packages" version:"v1alpha1" for core.packaging.v1alpha1 repositories aggregation.
I0215 16:08:57.150016       1 server.go:163] Starting server on :50051






I0215 16:10:07.143201       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
I0215 16:10:07.178993       1 server.go:62] PermissionDenied 35.839623ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
I0215 16:10:07.197921       1 namespaces.go:115] "+resources CanI" cluster="default" namespace="" group="" resource="namespaces" verb="create"
I0215 16:10:07.204237       1 server.go:62] OK 6.327054ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CanI
I0215 16:10:07.242919       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.244018       1 packages.go:58] "+core GetAvailablePackageSummaries" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.244125       1 server.go:198] "+helm GetAvailablePackageSummaries" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.248297       1 server.go:62] OK 5.401979ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
I0215 16:10:07.272115       1 server.go:198] "+helm GetAvailablePackageSummaries" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.272207       1 server.go:62] OK 28.1984ms /kubeappsapis.core.packages.v1alpha1.PackagesService/GetAvailablePackageSummaries
I0215 16:10:07.348043       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
I0215 16:10:07.352662       1 server.go:62] PermissionDenied 4.655141ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
I0215 16:10:07.411343       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.417590       1 server.go:62] OK 6.271536ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
I0215 16:10:07.474212       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
I0215 16:10:07.476595       1 server.go:62] PermissionDenied 2.409901ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
I0215 16:10:07.528739       1 packages.go:58] "+core GetAvailablePackageSummaries" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.528742       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.528840       1 server.go:198] "+helm GetAvailablePackageSummaries" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.529728       1 repositories.go:116] "+core GetPackageRepositorySummaries" cluster="" namespace=""
I0215 16:10:07.529770       1 server.go:1134] +helm GetPackageRepositorySummaries [context:{}]
I0215 16:10:07.530027       1 repositories_resources.go:30] +helm getPkgRepositoryResource [&{0xc000582a80  {kubeapps.com v1alpha1 apprepositories}}]
I0215 16:10:07.531303       1 server.go:62] Unauthenticated 1.594709ms /kubeappsapis.core.packages.v1alpha1.RepositoriesService/GetPackageRepositorySummaries
I0215 16:10:07.543083       1 server.go:198] "+helm GetAvailablePackageSummaries" cluster="default" namespace="tenant-dev-24fdb0-one"
I0215 16:10:07.543149       1 server.go:62] OK 14.425292ms /kubeappsapis.core.packages.v1alpha1.PackagesService/GetAvailablePackageSummaries
I0215 16:10:07.543350       1 server.go:62] OK 14.630681ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
I0215 16:10:09.112174       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
I0215 16:10:09.118021       1 server.go:62] PermissionDenied 5.875455ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists






I0215 16:10:21.623449       1 namespaces.go:82] "+resources GetNamespaceNames " cluster="default"
I0215 16:10:21.623628       1 server.go:62] OK 207.575µs /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/GetNamespaceNames
I0215 16:10:21.623713       1 namespaces.go:115] "+resources CanI" cluster="default" namespace="" group="" resource="namespaces" verb="create"
I0215 16:10:21.630057       1 server.go:62] OK 6.353578ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CanI

@ppbaena ppbaena added the kind/bug An issue that reports a defect in an existing feature label Feb 16, 2023

mecampbellsoup commented Feb 16, 2023

I noticed that the request is using a context of cluster="" namespace="".

I hard-coded those values to cluster="default" namespace="kubeapps" (which is our existing apprepo/kubeapps namespace name) and these requests started working as expected.

diff --git a/cmd/kubeapps-apis/plugins/helm/packages/v1alpha1/repositories.go b/cmd/kubeapps-apis/plugins/helm/packages/v1alpha1/repositories.go
index 5782e3907..004927670 100644
--- a/cmd/kubeapps-apis/plugins/helm/packages/v1alpha1/repositories.go
+++ b/cmd/kubeapps-apis/plugins/helm/packages/v1alpha1/repositories.go
@@ -563,12 +563,12 @@ func (s *Server) repoSummaries(ctx context.Context, cluster string, namespace st

 // GetPkgRepositories returns the list of package repositories for the given cluster and namespace
 func (s *Server) GetPkgRepositories(ctx context.Context, cluster, namespace string) ([]*apprepov1alpha1.AppRepository, error) {
-       resource, err := s.getPkgRepositoryResource(ctx, cluster, namespace)
+       resource, err := s.getPkgRepositoryResource(ctx, "default", "kubeapps")
And here is the resulting code, with some debug logging added:

// GetPkgRepositories returns the list of package repositories for the given cluster and namespace
func (s *Server) GetPkgRepositories(ctx context.Context, cluster, namespace string) ([]*apprepov1alpha1.AppRepository, error) {
	//resource, err := s.getPkgRepositoryResource(ctx, cluster, namespace)
	resource, err := s.getPkgRepositoryResource(ctx, "default", "kubeapps")
	if err != nil {
		log.Infof("+[ERROR] helm GetPkgRepositories [%v]", err)
		return nil, err
	}
	log.InfoS("+[MATT] s.getPkgRepositoryResource", "resource", resource, "ctx", ctx, "cluster", "default", "namespace", "kubeapps")

	unstructured, err := resource.List(ctx, metav1.ListOptions{})
	log.InfoS("+[MATT] GetPkgRepositories unstructured", "unstructured", unstructured, "err", err)

Do you know why the frontend would be sending a gRPC request with that context cluster="" namespace=""?

Upon hardcoding I can view the catalog as expected, but cannot actually deploy a chart:



ppbaena commented Feb 17, 2023

@absoludity could you take a look at this issue to properly triage it? Thanks!

@absoludity

> We have two AppRepositories configured in this cluster (by the way, what is the difference from the "PackageRepository" concept?):

It's just terminology: we were adding support for other types of repositories (Carvel, Flux) and needed common terminology in the code, so followed the precedent that was used in other VMware packaging projects.

I haven't been able to reproduce this yet, but I do have one question: in your clusters config, why are you setting apiServiceURL, given that this is the cluster on which Kubeapps itself is installed? By default it should use the in-cluster address and the cert available on the pod, which is what happens if you don't specify it.

I'll keep trying to repro...

@mecampbellsoup

> We have two AppRepositories configured in this cluster (by the way, what is the difference from the "PackageRepository" concept?):

> It's just terminology: we were adding support for other types of repositories (Carvel, Flux) and needed common terminology in the code, so followed the precedent that was used in other VMware packaging projects.

Gotcha, thanks for clarifying that.

> I haven't been able to reproduce this yet, but I do have one question: in your clusters config, why are you setting apiServiceURL, given that this is the cluster on which Kubeapps itself is installed?

We take this hybrid approach (where, as you point out, we really are using the default cluster but point to a different hostname via apiServiceURL) because we want our API gateway to authenticate and authorize all Kubeapps requests to the k8s apiserver. So, in our kubeapps Helm chart, we've added an Ingress (and a Service and Endpoint, but I'll spare those details) so that requests from the kubeapps dashboard to the kubeapps backend(s) pass through our gateway and get our auth-proxy headers X-Consumer-Username and X-Consumer-Permissions set.

> I'll keep trying to repro...

Thanks @absoludity appreciate the efforts very much 🙏

Let me know if I can share anything else!


absoludity commented Feb 20, 2023

> We take this hybrid approach (where, as you point out, we really are using the default cluster but point to a different hostname via apiServiceURL) because we want our API gateway to authenticate and authorize all Kubeapps requests to the k8s apiserver.

Right, thanks for the explanation. So yes, I expect this is what's causing the difference in behavior. If you look at:

const supportedCluster = cluster === kubeappsCluster;
useEffect(() => {
  if (
    !namespace ||
    !supportedCluster ||
    [helmGlobalNamespace, carvelGlobalNamespace].includes(namespace)
  ) {
    // All Namespaces. Global namespace or other cluster, show global repos only
    dispatch(actions.repos.fetchRepoSummaries(""));
    return () => {};
  }

the code decides there that if the current cluster is not the kubeappsCluster, it'll send the request without the namespace. Furthermore, if an empty namespace is passed to fetchRepoSummaries, it will send the kubeappsCluster as the cluster (which in your case Kubeapps currently determines to be unset):

const repos = await PackageRepositoriesService.getPackageRepositorySummaries({
  cluster: namespace ? currentCluster : kubeappsCluster,
  namespace: namespace,
});

So why is Kubeapps unable to determine the cluster on which Kubeapps is installed here when there is only one cluster defined in your config? Because it's actually possible to install Kubeapps on one cluster (say, a management cluster) while configuring the clusters so that users can only install to a different target cluster... and that is what Kubeapps has determined here.
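To make the fallback concrete, here is a small sketch in Go (the actual dashboard logic is the TypeScript shown above; the function and parameter names here are illustrative, not Kubeapps API):

```go
package main

import "fmt"

// requestContext mirrors the fallback described above: when the current
// cluster is not the Kubeapps cluster, or the namespace is empty or a
// global namespace, the namespace is dropped and the request is sent
// against the configured kubeappsCluster -- which can itself be empty if
// Kubeapps could not infer which cluster it is installed on.
func requestContext(currentCluster, kubeappsCluster, namespace string, globalNamespaces []string) (string, string) {
	supported := currentCluster == kubeappsCluster
	global := false
	for _, g := range globalNamespaces {
		if namespace == g {
			global = true
		}
	}
	if namespace == "" || !supported || global {
		namespace = "" // show global repos only
	}
	if namespace == "" {
		return kubeappsCluster, namespace
	}
	return currentCluster, namespace
}

func main() {
	// Kubeapps could not determine its own cluster, so kubeappsCluster is "".
	c, ns := requestContext("default", "", "tenant-dev-24fdb0-one", []string{"kubeapps"})
	fmt.Printf("cluster=%q namespace=%q\n", c, ns) // prints: cluster="" namespace=""
}
```

This reproduces the empty context (cluster="" namespace="") visible in the kubeapps-apis logs above.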

Which means the fix should be trivial: you should be able to simply set isKubeappsCluster to explicitly let Kubeapps know that this cluster is the one on which Kubeapps is installed:

## - isKubeappsCluster is an optional parameter that allows defining the cluster in which Kubeapps is installed;
## this param is useful when every cluster is using an apiServiceURL (e.g., when using the Pinniped Impersonation Proxy)
## as the chart cannot infer the cluster on which Kubeapps is installed in that case.

Try setting that to true for your cluster and reload in your browser window and you should find that the request is sent with the correct context. Let me know if not.

@mecampbellsoup

I just tried it quickly, and it seems that after setting isKubeappsCluster: true I can't get past the loading page...


Specifically, now CheckNamespaceExists fails (and the page redirects in a loop):

[kubeappsapis] I0220 05:34:55.329917       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
[kubeappsapis] I0220 05:34:55.331818       1 server.go:62] Unauthenticated 1.929287ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
[kubeappsapis] I0220 05:34:56.676626       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
[kubeappsapis] I0220 05:34:56.678004       1 server.go:62] Unauthenticated 1.417298ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
[kubeappsapis] I0220 05:34:59.051849       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
[kubeappsapis] I0220 05:34:59.054558       1 server.go:62] Unauthenticated 2.74887ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
[kubeappsapis] I0220 05:35:00.425865       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
[kubeappsapis] I0220 05:35:00.428021       1 server.go:62] Unauthenticated 2.178464ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
[kubeappsapis] I0220 05:35:01.316712       1 namespaces.go:27] "+resources CheckNamespaceExists" cluster="default" namespace="default"
[kubeappsapis] I0220 05:35:01.318700       1 server.go:62] Unauthenticated 2.006769ms /kubeappsapis.plugins.resources.v1alpha1.ResourcesService/CheckNamespaceExists
  clusters:
    - name: default
      isKubeappsCluster: true
      apiServiceURL: https://cloud-app-kubernetes-ingress.cloud/k8s
      insecure: false
      certificateAuthorityData: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvekNDQWVlZ0F3SUJBZ0lSQVA0VkppbWxoallaUnhkRjkyTC9MbVV3RFFZSktvWklodmNOQVFFTEJRQXcKR1RFWE1CVUdBMVVFQXhNT1kyOXlaWGRsWVhabExuUmxjM1F3SGhjTk1qTXdNakF5TVRVek1ERTRXaGNOTXpNdwpNakF5TURNek1ERTRXakFaTVJjd0ZRWURWUVFERXc1amIzSmxkMlZoZG1VdWRHVnpkRENDQVNJd0RRWUpLb1pJCmh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTGl5U3lYN0V1dWI0ZytiTjFxUzUrT1RnclNhT0xaeEFwdGMKNWhkUXpYRWgvOFQ2YkNMMTdEb2lKVkExeGlZdXJFeDdHRnpFVWZFWWgvbDAyRjV4V01LMDRjUU9OQ1htRlNiNQpJMFhXTXpTWWpOaFpVdHZCOW5VMWlwZkUwSnI0Q2g4MEhwOUlmN2RLQXd1dHlMR2oxWGk0b2lHVmV0OFJ0akQyCjdBdERWS1JRa1BOSzVTSElyZjNxQzFwMEZTa1VwZUJ5bXJFSGNCZEF0ZTZnOXlsZDh3cU8zR2RGZjZWSW5BSHcKL0szcUtRK1VVUVY0dXRqWklKZ0JzTit4Sy9zanN0OUVlYloxWjZOTGx5YXIyT2p3bE1DdFVrUkhFZXFkRFp0bQo3cmw4ajZ0dmQzL1NQY0NRT2NQeWwwSDVxa0J4L3hUK0I3b0dpZmVrbm96QUxzay85V0VDQXdFQUFhTkNNRUF3CkRnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRkIveE5LTHkKaVVOSThIaGtJN0doZ2c4alI5QzJNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUFoMFFOQ0ZuSVhVZ04vT2RXUApseGFScWk1cjU4VitxK245eTZZRHV3QmZYTUgreFFOdHphR3pTdm9TQ0paMTd2YVNheWxsTkdEV0hRUTJMU0tZClQ2ME1ZTE1aZnJHQVc5ZjVBampmek5ySURCdU4xKzZ1NzZ0T3BDcVBCeEEzMTB4LzhxRFEyeWJqL0p3N0w0WGwKSjR1OXlkeldXYjJZM3ZEOTVqTi9qNCt6dytIcUx5aGs2Mjg2MEJXOCtNakErMkNYVFJYdERUUGRRcTR2aGtrdQoyWlZ2L2pWUFFBeXQ0NW1XYkZQMVdqNlhEZlpvdzE5STRqTjM4ZkQ5L1I4TVFKMk1pZjNRb05oT2MzRlU1TnhnCmp6R0Q5dDN3US9kVHVCZy9jWHRPUEh1RlJZcGM3bXVlT1JkajMweWdrSTJwM0wwcm95LzZvekVzUkpzWU4vWW8KZnFlagotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==

@absoludity

OK, which brings me to another question: what rules is your auth-proxy applying? I'm not sure, but I suspect the gRPC-web request from the dashboard is not arriving at the kubeapps-apis service with the creds (I was wondering about this in the earlier setup). I'll try to take a look tomorrow to see why a gRPC-web auth header could be stripped out, but it's not clear without knowing your auth-proxy.

Or another possible (and better) way to find out: try setting up without your auth-proxy while still having an APIServiceURL set (maybe using the internal address). This will enable us to isolate whether it is a bug or an issue with the auth-proxy.

@mecampbellsoup

So it looks like CheckNamespaceExists is sent from the dashboard at domain apps.coreweave.test in my example, then passes through our API gateway, is then proxied to kubeappsapis, which makes a call to the k8s API.

The behavior at our gateway (HAProxy) looks correct: it's getting the sessionid cookie out of the request header and successfully authenticating that request (which in our case means adding proxy headers X-Consumer-Username and X-Consumer-Permissions).

For the kubeappsapis -> k8s request to succeed, the kubeappsapis backend would need to include those proxied request headers, as is done with the namespace filtering feature, if you recall that one.

I think this is the issue but am not 100%...

@mecampbellsoup

> OK, which brings me to another question: what rules is your auth-proxy applying? I'm not sure, but I suspect the gRPC-web request from the dashboard is not arriving at the kubeapps-apis service with the creds.

It reads the sessionid key-value out of the request cookie and adds the two proxy headers I described above. My suspicion is that the request is arriving with the credentials, but those headers aren't then included in the requests from the kubeapps-apis service to k8s...

@absoludity

Yep, that would not only make sense but is very probable, as we don't have any e2e tests for your particular auth setup. Prior to our new API, the dashboard made those requests directly to the k8s API (in your case, via your proxy), so k8s would see those headers.

OK, let me put a PR together and you can try the image. Thanks for the details - much easier now :)

@ppbaena ppbaena added component/auth Issue related to kubeapps authentication (AuthN/AuthZ/RBAC/OIDC) component/apis-server Issue related to kubeapps api-server labels Feb 21, 2023
@ppbaena ppbaena added this to the Technical debt milestone Feb 21, 2023
@absoludity

Something like #6012 . Haven't tested it in real life yet though.

absoludity added a commit that referenced this issue Feb 23, 2023
…6016)


### Description of the change

After some testing of the previous PR, we found the issue is actually
that kubeapps assumes the cluster on which Kubeapps is installed will
never have an APIServiceURL set in the configuration (since it can be
accessed via the in-cluster configuration at
https://kubernetes.default).

As it turns out, some users need to set the APIServiceURL of the cluster
on which Kubeapps is installed because they use a proxy in front of the
API server for authentication purposes, so it's important that Kubeapps
also use this.
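Under the stated assumptions, the behavior change can be sketched as follows (the type and function names are illustrative, not the actual Kubeapps code):

```go
package main

import "fmt"

// clusterConfig is a minimal stand-in for an entry in Kubeapps' clusters
// configuration; field names here are illustrative.
type clusterConfig struct {
	Name              string
	APIServiceURL     string
	IsKubeappsCluster bool
}

// apiServerURL sketches the fix described above: previously the Kubeapps
// cluster was always addressed via the in-cluster default; with the fix,
// a configured APIServiceURL takes precedence even for the Kubeapps
// cluster, so requests pass through the user's auth proxy.
func apiServerURL(c clusterConfig) string {
	if c.APIServiceURL != "" {
		return c.APIServiceURL
	}
	return "https://kubernetes.default" // in-cluster default
}

func main() {
	noURL := clusterConfig{Name: "default", IsKubeappsCluster: true}
	withProxy := clusterConfig{
		Name:              "default",
		APIServiceURL:     "https://cloud-app-kubernetes-ingress.cloud/k8s",
		IsKubeappsCluster: true,
	}
	fmt.Println(apiServerURL(noURL))     // prints: https://kubernetes.default
	fmt.Println(apiServerURL(withProxy)) // prints the configured proxy URL
}
```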

### Applicable issues


- fixes #5999 


Signed-off-by: Michael Nelson <minelson@vmware.com>