Kubernetes Mode #1319
Comments
I generally like the idea. One centerpiece will be figuring out how we can store access/refresh tokens in a scalable way; I'm not sure CRDs are the right place for that. Another idea I had was to import specific data from disk on boot. I'm not sure if that's possible, but it would allow mounting a CRD (or whatever) on disk, and Hydra would load it on boot. This would not work with watching, though. The problem with Hydra really is that clients are usually created by 3rd parties without access to the k8s cluster, so I'm not sure how much optimizing we should do towards k8s in that regard. The same goes for consent/login flows, which are always browser-initiated. The only thing that's really super static is usually the JWKs. If we're talking Oathkeeper or Keto, there it really makes sense, because rules etc. are rather static, and there a watcher for CRDs is really smart imo.
I agree with that. CRDs are particularly good for objects / resources which humans have to modify / create manually. There's a pretty good section here about that: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#should-i-add-a-custom-resource-to-my-kubernetes-cluster
I actually didn't think about that use-case, but it definitely makes sense, and I imagine that would be more common than the one I'm about to discuss. In a previous scenario I've run into, we had a fairly large list of authorized clients that were all internally maintained, but changed infrequently. CRDs would have been ideal for us then. It definitely makes a lot of sense for Oathkeeper and Keto. I built something similar for Ladon a while ago which would watch for …
Yeah, that makes sense; Hydra is definitely being used in places where only 1st-party clients are deployed. Maybe we could watch CRDs (could this be a small library?) and simply upsert into the existing datastore on changes/inserts?
Makes sense to me. We'd also have to remember to watch for deleted resources. It could be a small(ish) library, but it will have a big dependency on https://github.com/kubernetes/client-go.
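To make the "watch and upsert, remembering deletes" idea concrete, here is a minimal sketch. The `OAuthClient`, `Event`, and `Store` types are invented for illustration (a real implementation would receive events from a client-go informer and write to Hydra's actual datastore); the point is the dispatch: upsert on add/modify, remove on delete.

```go
package main

import "fmt"

// OAuthClient is a stand-in for a client record; not Hydra's real type.
type OAuthClient struct {
	ID     string
	Secret string
}

// EventType mirrors the add/modify/delete events a watcher would emit.
type EventType int

const (
	Added EventType = iota
	Modified
	Deleted
)

// Event is a simplified watch event for one client resource.
type Event struct {
	Type   EventType
	Client OAuthClient
}

// Store stands in for the existing datastore being upserted into.
type Store map[string]OAuthClient

// Apply upserts on Added/Modified and removes on Deleted, so the
// datastore keeps tracking the CRDs even when resources are deleted.
func (s Store) Apply(e Event) {
	switch e.Type {
	case Added, Modified:
		s[e.Client.ID] = e.Client
	case Deleted:
		delete(s, e.Client.ID)
	}
}

func main() {
	store := Store{}
	store.Apply(Event{Type: Added, Client: OAuthClient{ID: "app-1", Secret: "s1"}})
	store.Apply(Event{Type: Modified, Client: OAuthClient{ID: "app-1", Secret: "s2"}})
	store.Apply(Event{Type: Deleted, Client: OAuthClient{ID: "app-1"}})
	fmt.Println(len(store)) // prints 0: the delete was mirrored into the store
}
```

In a real library the `Apply` step would sit inside an informer's event handlers, which is where the client-go dependency comes in.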
Ah yeah, that definitely looks like a complex dependency; maybe there is a lightweight version of that for CRDs?
My gut says no, probably not. If you have time, you might want to take a look at the sample controller: https://github.com/kubernetes/sample-controller A pattern I've frequently seen is a separate controller server. A …
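The separate-controller pattern mentioned above can be sketched as a reconcile loop: the controller reads the desired set of clients from the CRDs and diffs it against what Hydra actually has. `HydraAPI` below is a hypothetical interface, not Hydra's real admin SDK, and `fakeAPI` is a test double; only the diff-and-converge logic is the point.

```go
package main

import "fmt"

// HydraAPI is a hypothetical abstraction over Hydra's admin API;
// a real controller would call the HTTP endpoints instead.
type HydraAPI interface {
	List() []string
	Create(id string)
	Delete(id string)
}

// Reconcile converges the actual client set toward the desired IDs:
// missing clients are created, unexpected ones are deleted.
func Reconcile(api HydraAPI, desired []string) {
	want := map[string]bool{}
	for _, id := range desired {
		want[id] = true
	}
	have := map[string]bool{}
	for _, id := range api.List() {
		have[id] = true
	}
	for id := range want {
		if !have[id] {
			api.Create(id)
		}
	}
	for id := range have {
		if !want[id] {
			api.Delete(id)
		}
	}
}

// fakeAPI is an in-memory stand-in so the loop can run without a cluster.
type fakeAPI struct{ clients map[string]bool }

func (f *fakeAPI) List() []string {
	out := []string{}
	for id := range f.clients {
		out = append(out, id)
	}
	return out
}
func (f *fakeAPI) Create(id string) { f.clients[id] = true }
func (f *fakeAPI) Delete(id string) { delete(f.clients, id) }

func main() {
	api := &fakeAPI{clients: map[string]bool{"stale": true}}
	Reconcile(api, []string{"app-1", "app-2"})
	fmt.Println(len(api.clients)) // prints 2: "stale" removed, both apps created
}
```

Running this on every CRD change (and periodically) keeps Hydra itself free of any Kubernetes dependency, which is why the separate server makes more sense.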
Oh yeah, that makes much more sense!
Is there any progress on this one? |
Closing, see maester projects! |
Is your feature request related to a problem? Please describe.

I've been considering something like this for a while. It hopefully solves a couple of issues in the Kubernetes and "cloud native" space.

Right now, when using Hydra in a Kubernetes environment, deploying it has a few issues (like any app with state):

- You need `kubectl exec` or `curl` to create / authorize clients.
- Hydra naturally introduces some level of state when being deployed, which is often not suitable for a Kubernetes environment.
Describe the solution you'd like

Add a flag, `-kubernetes` (or set `DATABASE_URL=kubernetes`). When `-kubernetes` is enabled, Hydra would essentially become a Kubernetes controller:

- Clients would be defined as `HydraClients`, or other declarative API objects.
- Clients could be created with `kubectl create ...`, which would persist restarts of Hydra.
- Clients could be managed with `kubectl`.
Additional context
There would need to be more discussion on things like:

- `consent/migrations`.
- A `gnatsd` server, or https://github.com/hashicorp/memberlist.