
Install in an isolated namespace #166

Closed
retr0h opened this issue May 16, 2019 · 16 comments · Fixed by #208

retr0h commented May 16, 2019

I'm attempting to install the sealed-secrets helm chart into a namespace where my user is "admin" at the role level, not the cluster level. I was hoping to install sealed-secrets in this namespace so that users don't need access to other namespaces/secrets.

Am I being a numbskull here?

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: {{ item.namespace }}-manager
  namespace: {{ item.namespace }}
rules:
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - "*"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: {{ item.namespace }}-binding
  namespace: {{ item.namespace }}
subjects:
{% for item in item.users | default([]) %}
- kind: User
  name: {{ item }}
  apiGroup: rbac.authorization.k8s.io
{% endfor -%}
roleRef:
  kind: Role
  name: {{ item.namespace }}-manager
  apiGroup: rbac.authorization.k8s.io

retr0h commented May 17, 2019

I guess this just isn't the way to go about it. I am running the service in kube-system which is the intended way to do this.

The user I have has the access above, but also needs access to `services/proxy` via a cluster role.

kubeseal <mysecret.json >mysealedsecret.json
panic: Error fetching certificate: services "http:sealed-secrets-controller:" is forbidden: User "xxx" cannot get resource "services/proxy" in API group "" in the namespace "kube-system"

Any suggestion on cluster role bindings?

@retr0h retr0h closed this as completed May 19, 2019

retr0h commented May 19, 2019

Added `services/proxy` to the list of resources the read-only user can access.
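For reference, a minimal sketch of such a Role (the name, namespace, and `resourceNames` are assumptions; `http:sealed-secrets-controller:` matches the form in the error message above, and the namespace should be wherever your controller runs):

```yaml
# Hedged sketch: grants just enough to fetch the controller certificate
# through the API-server proxy. Role name, namespace, and service names
# are assumptions; adjust them to your deployment.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sealed-secrets-cert-reader
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["sealed-secrets-controller", "http:sealed-secrets-controller:"]
  verbs: ["get"]
```

Binding this Role (rather than a wildcard rule) to the read-only users limits what the extra grant opens up in kube-system.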


mdraijer commented Jul 26, 2019

Exactly my point as well. Our users have access only to their own namespace(s) and cannot use kubeseal without the extra RoleBinding in the kube-system namespace (or at least: the namespace where the controller is running).
This information should at least be in the README.md (I didn't find it there).
Also I do not know what else might be opened up in kube-system for regular users with that role binding: is it safe enough?

Can this issue be reopened to answer my questions?


mkmik commented Jul 26, 2019

Yes

@mkmik mkmik reopened this Jul 26, 2019
@mkmik mkmik self-assigned this Jul 29, 2019

mkmik commented Jul 29, 2019

@mdraijer just to make sure I understand the problem you're facing:

You have the rights to install the sealed-secret-controller in kube-system or another namespace
but the users of your clusters are unprivileged and kubeseal fails to fetch the controller certificate.

If yes, you have two options:

  1. use offline certificates
  2. configure RBAC so your unprivileged users can access the service via the k8s proxy.

I'll improve the README and find a way to simplify all of this if possible, but in the meantime I hope this can unblock you:


use offline certificates

kubeseal doesn't really need to talk to the cluster. All it does is encrypt your secrets with a public key, so that only the controller running in the cluster can decrypt them.

At least one user has privileged access

As long as you can provide that key to your users (and they can trust it's the right key), then they can generate sealed-secrets without any access to the cluster.

$ kubeseal --cert /tmp/sealed-cert.pem <secret.yaml >sealed-secret.json

FWIW, I have exactly this situation at work: some of our clusters are configured this way, and we share the certificate file with our team via a secure mechanism (in our case, a Keybase team signed filesystem).
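One way to help users check they received the right key is to compare certificate fingerprints against a value published over a trusted channel (such as the secure mechanism mentioned above). This is only a sketch: the self-signed certificate generated in the first step is a stand-in for the real `/tmp/sealed-cert.pem` you would fetch with `kubeseal --fetch-cert`.

```shell
# Stand-in only: generate a throwaway self-signed cert so the example is
# self-contained. In practice, use the certificate file you were given.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=sealed-secrets-demo" \
  -keyout /tmp/demo-key.pem -out /tmp/sealed-cert.pem 2>/dev/null

# Print a SHA-256 fingerprint; users compare this against the fingerprint
# the cluster admin published over a trusted channel.
openssl x509 -in /tmp/sealed-cert.pem -noout -fingerprint -sha256
```

If the fingerprints match, the user can safely pass the file to `kubeseal --cert`.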

In order to get the certificate you need to do this once from an account that has the rights:

$ kubeseal --fetch-cert >/tmp/sealed-cert.pem

or if you installed it in a dedicated namespace (possibly with a custom controller name, as happens when you use the helm chart):

$ kubeseal --controller-namespace=the-namespace-where-you-installed-it --controller-name=name-of-your-controller --fetch-cert >/tmp/sealed-cert.pem

You have access to logs

$ kubectl -n kube-system logs -l name=sealed-secrets-controller
....
-----BEGIN CERTIFICATE-----
MIIErTCCApWgAwI
...
$ kubectl -n the-namespace-where-you-installed-it ...

You don't have privileged access at hand and there are no strict network policies

As long as you can run some code in the cluster, you can try to directly curl the service endpoint:

$ kubectl run curl --generator=run-pod/v1 --image=everpeace/curl-jq
pod/curl created
$ kubectl exec -ti curl curl http://sealed-secrets-controller.kube-system.svc.cluster.local:8080/v1/cert.pem
-----BEGIN CERTIFICATE-----
MIIErTCCApWgAwI
$ kubectl delete pod curl


olliebun commented Jul 29, 2019

Food for thought: a variant on offline certificates is to add an Ingress to the namespace that runs the sealed-secrets controller, exposing the /v1/cert.pem endpoint.

Rather than mess around with role bindings or distributing the certificate to our developers directly, we've simply documented the URL for the certificate for each cluster.
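A sketch of what that could look like (the hostname and any per-controller annotations are assumptions that will vary per cluster; since `/v1/cert.pem` serves only the public certificate, the endpoint can safely be unauthenticated):

```yaml
# Hedged sketch, using the extensions/v1beta1 Ingress API current at the
# time of this thread. Host and namespace are assumptions.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sealed-secrets-cert
  namespace: kube-system
spec:
  rules:
  - host: sealed-secrets.example.com
    http:
      paths:
      - path: /v1/cert.pem
        backend:
          serviceName: sealed-secrets-controller
          servicePort: 8080
```

Users then fetch the certificate with plain HTTP tooling, e.g. `curl https://sealed-secrets.example.com/v1/cert.pem`.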


mkmik commented Jul 29, 2019

@ceralena that's a reasonable approach.

I'm struggling to figure out the best option that would work out of the box. Ingresses unfortunately don't seem to fit the bill: it's hard to provide instructions for setting up a TLS ingress that would work out of the box everywhere (people have different ingress controllers, different load balancers).

Perhaps there is just no one-size-fits-all solution.
Perhaps we should improve and document the way to deal with the two main scenarios:

a) a person who can talk to the cluster with kubectl and push sealed secret resources there. The kubectl tool should obtain the certificate using the least amount of privileges possible.

b) a person who cannot talk to the cluster (e.g. all interactions with the cluster are done by a GitOps style interaction mediated by a CI tool which has actual access to the cluster).

Both scenarios have to be secure, i.e. it should be very hard for an attacker to trick the user into encrypting the secret belonging to the attacker.

It's easy for the sealed secret controller to post the cert somewhere; what's harder is for the user to establish that it's the right certificate to use.

@mdraijer

Thanks @mkmik, that helps. We will distribute the public key to the users.

@mdraijer

Related to the original question in this issue: install in an isolated namespace, i.e. some other namespace than kube-system:
I can't get the sealed-secrets-controller pod running in another namespace. It is always hanging when it tries to find a master key (which is not there: after every attempt I clean the system of everything sealed-secrets related).
Tried with kubectl install, with helm install, with v0.7.0, with v0.8.1, in kube-system and in other namespace. Everything works except the other namespace (both v0.7.0 and v0.8.1, of course only with helm install).
Can you give any pointers as to what I'm doing wrong, or where I can look for clues?


mkmik commented Jul 30, 2019

@mdraijer Hmm, it should work. Could you show me the exact commands you issued to install it with kubectl in another namespace?


mdraijer commented Jul 30, 2019

I did not use kubectl, because that would mean editing the yaml first (hard coded namespace).
With helm the command was: helm install --namespace $NAMESPACE --name sealed-secrets stable/sealed-secrets
Logging of the pod:

2019/07/30 08:32:44 Starting sealed-secrets controller version: v0.8.1
2019/07/30 08:32:44 Searching for existing private keys

I thought it could be the resource quota: the limits were initially exactly the resources requested by the pod. But I have now made them 10 times as high:

spec:
  hard:
    limits.cpu: '1'
    limits.memory: 5Gi
    requests.cpu: 500m
    requests.memory: 1280Mi
    requests.storage: 100Mi


mkmik commented Jul 30, 2019

I'm not familiar with the helm chart and it's been released independently so I'd prefer to not add a variable here.

Kubectl has a builtin feature that allows you to apply config overlays called kustomize. It's a bit better than helm in that it works on any k8s config file.

$ wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.8.1/controller.yaml
$ kubectl create ns mysealed
$ cat >kustomization.yaml <<EOF
namespace: mysealed
resources:
  - controller.yaml
EOF
$ kubectl apply -k .

(you can clean up with `kubectl delete -k .`, and see diffs between the local config and the cluster with `kubectl diff -k .`)

It would really help if you could try this (possibly after making sure there is no other sealed-secrets instance left over).

@mdraijer

For now we have settled with an installation in kube-system, but thanks for your help.


mkmik commented Jul 30, 2019

@mdraijer thanks anyway; if you have some extra time, I'd really appreciate it if you could reproduce the issue without the helm chart; I'd love to know if there is a bug.


mkmik commented Jul 30, 2019

I'll keep this issue open because I think the certificate-fetching issue can be improved with some tweaks to RBAC.

mkmik pushed a commit that referenced this issue Jul 31, 2019
This allows kubeseal to fetch the certificate public key (and perform other actions, such as using the /verify and /rotate endpoints) even if the caller doesn't otherwise have the rights to access the kube-system namespace (or any other namespace where the sealed-secrets controller might have been deployed), as it often happens that users are not granted such broad permissions on production clusters.

We historically suggested users just distribute the certificate out of band and use the `--cert` flag.
However, with the advent of master key rotation, this is becoming increasingly cumbersome, especially since
it's critical that users end up using the right certificate (i.e. the certificate has to be authenticated).
Master key rotation also requires users to periodically rotate the secrets, which requires access to the /rotate endpoint.

This change includes a fine-grained RBAC rule that allows access to the sealed-secrets controller HTTP API to any authenticated user in the cluster.
Users are still free to disable this feature by applying an override during deployment, but our default RBAC config should include it.

The controller currently exposes the following endpoints:

- `/healthz`
- `/v1/verify`
- `/v1/rotate`
- `/v1/cert.pem`

The controller must not expose any secrets via the HTTP endpoints in any case: while RBAC would prevent
end-users from accessing the service via the proxy, nothing prevents any unprivileged workload in the cluster
from reaching it unless admins have explicitly configured a strict network policy rule set.

Closes #166
mkmik pushed a commit that referenced this issue Jul 31, 2019
bors bot added a commit that referenced this issue Jul 31, 2019
208: Allow access to sealed secret services/proxy to any authenticated user r=mkmik a=mkmik

Rel #137

Co-authored-by: Marko Mikulicic <mkm@bitnami.com>
@bors bors bot closed this as completed in #208 Jul 31, 2019
@mkmik mkmik added this to the v0.8.2 milestone Jul 31, 2019

mkmik commented Jul 31, 2019

Since #208 the sealed-secrets controller is accessible even to users who have no access to the namespace it's deployed into, so I think this issue can be considered closed.

(Keep in mind that until we release helm charts directly from this project, there is no guarantee that config changes like this one will be reflected in the helm chart soon. Please consider using kustomize.)
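With the #208 RBAC change in place, any authenticated user should be able to fetch the certificate through the API-server service proxy. A sketch (the namespace and service name here are the defaults; adjust them if yours differ):

```
$ kubectl get --raw \
  /api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem
```

This uses only the user's existing API-server credentials, so no certificate distribution is needed.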
