
Bug: Operator mounts BSL credential secret into velero pod breaking multiple credential support #452

Closed
1 task done
dymurray opened this issue Nov 12, 2021 · 4 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@dymurray
Member

Contact Details

No response

Describe the bug

There is a use case where a user wants to use two different secrets to provide credentials for the BackupStorageLocation (BSL) and the VolumeSnapshotLocation (VSL). When this happens, the secret that gets mounted into the velero pod must be the VSL secret.

In our API, if a user specifies a secret in the BSL spec, then we assume this is the single set of credentials to be used with Velero, and we automatically mount this secret into the velero pod. With this approach, we will never be able to support the use case above.
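To illustrate the current behavior, a BSL definition along these lines causes the referenced secret to be mounted into the velero pod automatically. This is a sketch based on the upstream Velero `BackupStorageLocation` API; the secret and bucket names are placeholders, not values from this issue:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
spec:
  provider: aws
  objectStorage:
    bucket: my-backup-bucket   # placeholder
  # Referencing a secret here is what the operator currently treats as
  # "the" credentials, and it mounts this secret into the velero pod.
  credential:
    name: bsl-credentials      # placeholder secret name
    key: cloud
```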

I propose adding a new config field that explicitly sets the credentials to be used for the VSL (or, alternatively, that simply designates the secret to be mounted into the pod itself). That way, if a user specifies a secret for the BSL, we can still mount a different secret into the Velero pod until the VSL API supports specifying credentials as well.
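A hypothetical sketch of what such a field could look like on the operator CR. The field name `podCredential` and its placement are inventions for illustration only; nothing below exists in the operator API as of this issue:

```yaml
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: example-dpa
spec:
  configuration:
    velero:
      # Hypothetical field: the secret to mount into the velero pod
      # (i.e. the credentials the VSL would use), independent of any
      # secret referenced by a BSL.
      podCredential:
        name: vsl-credentials  # placeholder secret name
        key: cloud
  backupLocations:
    - velero:
        provider: aws
        credential:
          name: bsl-credentials  # BSL keeps its own, separate secret
          key: cloud
```

With a separation like this, the BSL secret would only be consumed by the BSL, and the pod-level secret would cover the VSL use case until the VSL API grows its own credential field.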

What happened?

A bug happened!

OADP Version

0.4.x (Beta)

OpenShift Version

4.9

Velero pod logs

No response

Restic pod logs

No response

Operator pod logs

No response

  • This issue is new
@dymurray dymurray added the kind/bug Categorizes issue or PR as related to a bug. label Nov 12, 2021
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 10, 2022
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 12, 2022
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this as completed Apr 11, 2022
@openshift-ci

openshift-ci bot commented Apr 11, 2022

@openshift-bot: Closing this issue.

In response to the /close command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
