
Repos API/Helm - Support global private repos #5129

Open
antgamdia opened this issue Jul 26, 2022 · 2 comments · Fixed by #5140 or #5183
Labels
component/apis-server Issue related to kubeapps api-server component/apprepository Issue related to kubeapps apprepository kind/bug An issue that reports a defect in an existing feature

Comments

@antgamdia
Contributor

Summary
Unlike the former implementation, the new implementation does not copy the namespaced repos' secrets to the global namespace.

Description
I've been working on polishing up the UI leftovers, but when I finished fixing the e2e tests, I noticed they weren't working with the new API.
After some debugging, I noticed the error thrown by the job (asset-syncer) is something like: Error: secret default-apprepo-my-repo-88078 not found
Note that the secret names for namespaced repos follow the pattern apprepo-%s. However, there is another function that adds the namespace as a prefix, yielding %s-apprepo-%s.
This function is referenced here... pointing to an actual issue (one that we, perhaps mistakenly, closed).
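The two naming schemes described above can be sketched as follows. This is a minimal illustration, not the actual Kubeapps helpers: the function names are assumptions, and the repo name `my-repo-88078` is assumed from the e2e error message (the numeric suffix presumably comes from the test's generated repo name).

```go
package main

import "fmt"

// secretNameForRepo sketches the apprepo-%s pattern used for a repo's
// secret inside its own namespace. (Hypothetical name; the real helper
// in Kubeapps may differ.)
func secretNameForRepo(repoName string) string {
	return fmt.Sprintf("apprepo-%s", repoName)
}

// globalSecretName sketches the %s-apprepo-%s pattern, which prefixes
// the source namespace so copies don't collide in the global namespace.
func globalSecretName(namespace, repoName string) string {
	return fmt.Sprintf("%s-apprepo-%s", namespace, repoName)
}

func main() {
	// The sync job looks up the prefixed name, matching the error above.
	fmt.Println(secretNameForRepo("my-repo-88078"))           // apprepo-my-repo-88078
	fmt.Println(globalSecretName("default", "my-repo-88078")) // default-apprepo-my-repo-88078
}
```

The mismatch is then clear: the secret exists under the first name in the repo's namespace, while the job in the global namespace asks for the second.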

To the best of my knowledge, the asset-syncer is spinning up the sync jobs in the global namespace, and therefore it requires every namespaced repo's secret to be copied to that location. Otherwise, private repos won't sync.

So, if my assumptions are right, we could either:

  1. Adapt the current Helm plugin's code to also copy the secret as we were already doing in the kubeops-based API.
  2. Adapt the asset-syncer's code to create the jobs in each repo's namespace (i.e., fixing #1647, "Move app repo sync to run in repo namespace (and cluster)").
  3. Continue to work on the apprepo controller full revamp (no more postgres, etc...) that Greg was exploring.

Given the current bandwidth and priorities, I'd simply go with 1) and reopen #1647.
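Option 1 (copying the secret into the global namespace) could be sketched roughly as below. This is a simplified stand-in, not the actual Kubeapps code: a minimal struct is used in place of the real k8s.io/api/core/v1 Secret type, and the function name is hypothetical.

```go
package main

import "fmt"

// Secret is a minimal stand-in for corev1.Secret, holding only the
// fields relevant to the copy.
type Secret struct {
	Name            string
	Namespace       string
	ResourceVersion string
	Data            map[string][]byte
}

// copyToGlobalNamespace clones a per-namespace repo secret into the
// namespace where the sync jobs run, prefixing the name with the source
// namespace (the %s-apprepo-%s scheme) and clearing server-managed
// metadata so the object can be created fresh.
func copyToGlobalNamespace(src Secret, globalNS string) Secret {
	return Secret{
		Name:            fmt.Sprintf("%s-%s", src.Namespace, src.Name),
		Namespace:       globalNS,
		ResourceVersion: "", // must be empty when creating a new object
		Data:            src.Data,
	}
}

func main() {
	src := Secret{
		Name:      "apprepo-my-repo",
		Namespace: "default",
		Data:      map[string][]byte{"authorizationHeader": []byte("Bearer ...")},
	}
	dst := copyToGlobalNamespace(src, "kubeapps")
	fmt.Println(dst.Name, dst.Namespace) // default-apprepo-my-repo kubeapps
}
```

In the real plugin, the create call against the cluster would follow; the key point of the sketch is deriving the prefixed name and resetting server-managed fields before creation.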

Acceptance criteria

Additional context

As @absoludity commented, we could give the controller cluster-wide access to read secrets. Although it's not great that our controller would have that access, I think it's the most sensible option right now to support private repos while they're indexed by the controller explicitly. That's also an option for us, and would be better than replicating the ugly copying of the secret.
Perhaps that can even be refined these days so that our controller only has access to the specific secret object that it needs. Or we could only enable that RBAC when support for private repos is enabled, so that it's not there by default.
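The refinement mentioned above (granting access to only the specific secret) can be expressed with Kubernetes RBAC via `resourceNames`. A hypothetical Role (all names here are illustrative, not Kubeapps' actual manifests); note that `resourceNames` restricts `get` but cannot restrict `list`/`watch`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: apprepo-secret-reader   # hypothetical name
  namespace: default            # the repo's namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["apprepo-my-repo"]  # only this secret
  verbs: ["get"]
```

A corresponding RoleBinding to the controller's service account would complete the setup, and could be created only when private repo support is enabled.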

@antgamdia antgamdia added component/apprepository Issue related to kubeapps apprepository component/apis-server Issue related to kubeapps api-server labels Jul 26, 2022
@ppbaena ppbaena added the kind/bug An issue that reports a defect in an existing feature label Jul 26, 2022
@antgamdia
Contributor Author

Reopened as per #5115 (comment)

@dlaloue-vmware
Collaborator

I am reopening this issue.
The code has been changed to copy the secret to the namespace managed by Kubeapps, but the copy is made under the logged-in user's credentials. A user who has admin privileges only in the namespace where the new repository is created is unlikely to have secret permissions in the kubeapps namespace, so the copy fails.
The code that does the copy must switch to the pod's context (its own service account) in order to perform the copy.

Projects
Archived in project
5 participants
@ppbaena @antgamdia @castelblanque @dlaloue-vmware and others