This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

Hydra #1022

Closed · wants to merge 22 commits

Conversation

kminehart (Author)

No description provided.

@k8s-ci-robot added the cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) and needs-ok-to-test (indicates a PR that requires an org member to verify it is safe to test) labels on May 8, 2017
@k8s-ci-robot (Contributor)

Hi @kminehart. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with @k8s-bot ok to test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
type: ClusterIP

This should be dynamically set

Author

The service type is now configurable via values.yaml.

app: {{ template "fullname" . }}
ports:
- name: service
port: 4444

This should also be dynamically set

Author

I've made the service port configurable in values.yaml.
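
A minimal sketch of what those two changes could look like (the service.type and service.port key names here are assumptions for illustration, not taken verbatim from this PR):

# values.yaml (assumed key names)
service:
  type: ClusterIP
  port: 4444

# templates/service.yaml
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: service
      port: {{ .Values.service.port }}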

- name: DATABASE_URL
value: postgres://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@{{ template "postgresql.fullname" . }}:5432/{{ .Values.postgresql.postgresDatabase }}?sslmode=disable
- name: HTTPS_ALLOW_TERMINATION_FROM
value: 0.0.0.0/0
@pbarker · May 10, 2017

This should also come from values.yaml.

Author

Are you referring to the HTTPS_ALLOW_TERMINATION_FROM environment variable?

If you're referring to the postgres URL, this was one thing I was pretty unsure about. The requirements.yaml file seems to imply that a new postgres server will spin up for hydra regardless of whether there's one there already. Not sure how that works.


Yeah, I was just referring to the HTTPS_ALLOW_TERMINATION_FROM. We are currently setting that dynamically at runtime. Thanks!

Author

Should I exclude that environment variable then?

@pbarker · May 12, 2017

I would just have the value set in values.yaml with a default of 0.0.0.0/0
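
A sketch of that suggestion, reusing the httpsAllowTerminationFrom key that appears later in this PR's values.yaml diff:

# values.yaml
httpsAllowTerminationFrom: 0.0.0.0/0

# templates/deployment.yaml
- name: HTTPS_ALLOW_TERMINATION_FROM
  value: {{ .Values.httpsAllowTerminationFrom | quote }}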

Author

Ah, gotcha. Will do.

Author

Sorry this took me so long. I'll push it now.

- name: hydra
image: {{ .Values.image }}:{{ .Values.imageTag }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
command: ["hydra", "host", "--dangerous-auto-logon"]

Remove --dangerous-auto-logon and run the SQL migrations beforehand: /go/bin/hydra migrate sql $DATABASE_URL

Author

How do you propose we do the sql migrations beforehand?

We can run 2 commands by doing

/bin/sh -c '/go/bin/hydra migrate sql $DATABASE_URL && /go/bin/hydra host'

but I'm not a huge fan of that solution.

A better solution in my opinion would be init containers.

https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

so I'll try to set that up.
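
A minimal sketch of what that could look like (a pod-spec fragment; assumes the initContainers field available in Kubernetes 1.6+ and reuses the chart's existing image values):

# templates/deployment.yaml (sketch)
initContainers:
  - name: hydra-migrate
    image: {{ .Values.image }}:{{ .Values.imageTag }}
    # Kubernetes expands $(DATABASE_URL) from the env entry below
    command: ["hydra", "migrate", "sql", "$(DATABASE_URL)"]
    env:
      - name: DATABASE_URL
        value: postgres://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@{{ template "postgresql.fullname" . }}:5432/{{ .Values.postgresql.postgresDatabase }}?sslmode=disable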


Either would work. Personally I'm a fan of the first solution; init containers can be tricky to debug.

@kminehart (Author)

Thanks for reviewing. I'll get those changes in asap @grillz

@pbarker commented May 11, 2017

Awesome, thanks for doing this @kminehart, beat me to it 😉

@kminehart (Author)

Anything else you see, @grillz?

name: {{ template "fullname" . }}-secret
key: system.secret
- name: DATABASE_URL
value: postgres://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@{{ template "postgresql.fullname" . }}:5432/{{ .Values.postgresql.postgresDatabase }}?sslmode=disable
Member

Use secrets for passwords. Also, is there any other way to specify the password rather than in the URL?

@pbarker · Jun 1, 2017

@unguiculus I'm not sure using secrets for passwords should be a requirement; secrets in kubernetes are still not secure and won't be for some time.

@meyerzinn · Jun 1, 2017

@grillz if K8s is properly configured (TLS, don't let just anybody access the REST API) I don't see why a secret is any less secure than the deployment file.

If an attacker gains access to the k8s API, the secret is compromised either way, no?


@20zinnm For one, this is a major compliance issue (kubernetes/kubernetes#12742), not to mention the partitioning problem they are attempting to solve at the moment, which is not going to happen in the near future. Secrets just really aren't secret in k8s, and that's no secret in the community. Sorry, no pun intended 😉

Author

Regardless, we should still put the password in a secret. Especially now that RBAC is a thing, you can give someone access to deployments but not secrets, and they won't be able to grab the database password.


You can, it's just not actually secret, and RBAC is very limited in its ability to secure secrets at the moment. You can join sig-auth every other Wednesday, where this is a hot topic.


Could we get around this by throwing these credentials into the environment from a secret? I.e. values.yaml --> secret --> environment?
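
A sketch of that wiring (the secret name and key below are illustrative, not from this PR):

# templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "fullname" . }}-postgres
type: Opaque
data:
  postgres-password: {{ .Values.postgresql.postgresPassword | b64enc | quote }}

# templates/deployment.yaml
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "fullname" . }}-postgres
      key: postgres-password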


That doesn't help much; our current setup is just using helm install --set my.secret=foo and pushing those to env vars. If we really want to use secrets, it's not the end of the world; they will be secure at some point here in the next year, I imagine 😄


Let's go ahead and set ourselves up for success :). I'll add a commit to kminehart/charts#1 with it as soon as I test it.


I guess... kubernetes is a funny exercise in branding: call something a secret even if it isn't at all, and people will use it as such ¯\_(ツ)_/¯

metadata:
name: {{ template "fullname" . }}
labels:
app: {{ template "fullname" . }}
Member

Use {{ template "name" . }} for the app label. Apply everywhere.

replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ template "fullname" . }}
Member

Update selector for the label change:

app: {{ template "name" . }}
release: "{{ .Release.Name }}"

Apply everywhere.


# Prerequisites

* Kubernetes 1.4+ with Beta APIs enabled
Member

Any reason you still want to support 1.4?

Author

No particular reason. 1.4 + Beta APIs is the minimum requirement, though. Honestly, I'd prefer to just remove this line anyway.

@@ -0,0 +1,69 @@
apiVersion: extensions/v1beta1
kind: Deployment
Member

Wouldn't it make more sense to use a StatefulSet since you are using persistent storage?


@unguiculus StatefulSet is still beta; not sure we should be pushing it yet.


If anything, @kminehart, I would consider removing postgres entirely from here; most people still aren't putting DBs on k8s for a variety of reasons. The best method I have seen so far is the operator method, but I still think Hydra should just deploy agnostic of postgres, or use the postgres chart as a dependency.


@grillz Postgres chart as a dependency sounds good. It's probably best to have Hydra's chart set up its own database one way or the other because if left to their own devices, inexperienced system admins will likely configure Hydra to use the same database as the application, which is a major security hole. Maintaining database isolation is very important here, so it should be emphasized one way or another.


@20zinnm Yeah, I agree with that, but at the same time most of the k8s deployments we're seeing right now are utilizing RDS or some hosted datastore, so I would hate to marry it to either. Kind of a rock and a hard place.

@kminehart (Author) · Jun 2, 2017

1. We've seen before that beta Kubernetes resources are fairly stable; I have no problem with making Hydra a StatefulSet. I don't fully understand what the benefit of using StatefulSets would be in this case, though.

If I understand correctly, the only thing you really need persistent storage for is the SSL certificates and CA files created by Hydra, right?

2. As for Postgres, it is currently a dependency, as @20zinnm said. I followed Gitlab's example, which may not have been the best idea considering Gitlab has the "omnibus" methodology of installing their software, but they also have a hell of a lot more than just Postgres as requirements.

There's a solution to this problem in Helm already.

By default, we will install the bundled postgres. In requirements.yaml, I'll add a condition: postgres.bundled, and if that value is true in values.yaml, then the bundled postgres will be installed.

This way, users who prefer RDS can opt in to using it, and those of us who don't care or who prefer keeping everything in Kubernetes can keep it the way it is.

It makes sense to me to make external database connections opt-in because they're always going to require configuration anyway. Keeping it bundled means the application will work out of the box without as much configuration.

https://github.com/kubernetes/helm/blob/master/docs/charts.md#tags-and-condition-fields-in-requirementsyaml

Bear with me, I'm still fairly new to Helm and Hydra.
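
A sketch of the condition proposed above (the dependency version and repository are illustrative):

# requirements.yaml
dependencies:
  - name: postgresql
    version: 0.8.3   # illustrative
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: postgres.bundled

# values.yaml
postgres:
  bundled: true   # set to false to bring your own database (e.g. RDS)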


Yeah, I think that should work. Hydra should not be a StatefulSet; only postgres would be, but that should be a separate chart, as mentioned.

@meyerzinn

So is there a plan to get this in soon? Very excited! @kminehart

@kminehart (Author) commented Jun 2, 2017

@20zinnm I'm working on the requested changes right now. Expect the requested changes to be applied within the next few days.

@meyerzinn

Thanks so much @kminehart

######################################################################
{{- end }}

Hydra should not be facing the internet itself, so you can access hydra via.


Part of Hydra's authorization flow requires a redirect to Hydra. It definitely does need to be publicly accessible.

Overview of the OAuth2 Authorization Code flow
Basically, the user is initially redirected by the client to Hydra, which then redirects to the consent app with a signed challenge token. The consent app does necessary authentication and consent before redirecting the user back to Hydra (passing a signed challenge response). Hydra validates the response and generates the tokens before redirecting the user back to the client with the proper tokens.

This is incredibly simplified, but the key thing is that the user does need to be able to access Hydra from outside of the kubernetes cluster.
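
For example, one way to expose Hydra publicly from the chart would be an Ingress (a sketch; the host and the ingress template itself are illustrative, not part of this PR):

# templates/ingress.yaml (illustrative)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
spec:
  rules:
    - host: hydra.example.com   # illustrative host
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ template "fullname" . }}
              servicePort: 4444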

Author

Gotcha. Sorry about that! Part of the reason for me making this helm chart was so that I could set up Hydra on our cluster and give it a test run :P


No worries. Let me know if I can help :) I too need to set up Hydra. Figured I'd go big or go home.

@meyerzinn commented Jun 3, 2017

@kminehart I made the requested changes and submitted a PR to your fork. If you merge that, the only thing left to decide on is the database credentials (secrets vs values).

@kminehart (Author)

Thanks @20zinnm.

I'll make the final changes within the next hour or so.

@meyerzinn

I took it upon myself to make a helm chart based on your work that incorporates idiomatic Kubernetes structures, modeled on the chart for GitLab. I have confirmed that it works locally with minikube.

hydra.zip

(That's not the packaged version but rather a ZIP of the directory; you must unzip it and then package it.)

@kminehart (Author)

@20zinnm I like that a lot more. This was sort of a quick solution for work that turned into a pull request, so the quality of the config layout and the readme is not ideal.

Do you want to create a separate PR and we can just close this one?

@@ -37,6 +40,13 @@ config:
idTokenLifespan: "1h"
authorizeCodeLifespan: "10m"
httpsAllowTerminationFrom: 0.0.0.0/0
postgresql:


Hydra supports multiple backing databases. This locks you in with PostgreSQL.

I would do "databaseURL" instead.
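
A sketch of that suggestion (the templated default below is an assumption for illustration):

# values.yaml
databaseURL: ""   # e.g. postgres://user:password@host:5432/hydra?sslmode=disable

# templates/deployment.yaml
- name: DATABASE_URL
  value: {{ .Values.databaseURL | quote }}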

@meyerzinn

I'd be happy to. You will be listed as a contributor for your work.

@meyerzinn mentioned this pull request Jun 5, 2017
@kminehart closed this Jun 5, 2017
@aeneasr commented Jul 18, 2019

In case this was missed, we now have official helm charts: https://k8s.ory.sh/helm/
