Feature: multi hydra #35
Conversation
…r to hydra API calls
Marking as WIP because I am currently unable to make the additional properties actually optional. Researching that, and the possibility of another CRD that would act as a pointer to a hydra admin. So the …
I saw that you're using … Would it make sense to have two parts, one …
Yeah, originally it was to try to make it optional before I realized I just needed to add
I would think that the "main" parameters should be directly on the resource, with the optional hydra definition being in its own sub-object, though if you'd prefer it that way I can make that change.
I thought about this a bit more: wouldn't it make more sense to have this as part of the metadata? https://github.com/ory/hydra-maester/pull/35/files#diff-fdcfdc28245e27ebac197159ab164c03R13-R15 In that case, I could immediately see (without checking the resource itself) to which instance this client applies. I think that would be more "kubernetes"-native, as we would control the behavior of the controller using metadata, and supply the data for the controller using the contents.
That makes sense, although you can't validate the metadata the same way you can the properties in the CRD definition. If you think that's alright, I'll make that change, but currently the implementation allows the values to be validated as the resource is created.
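For context, this is roughly what such validation looks like in a CRD's OpenAPI v3 schema. The `hydraAdmin` field names follow the properties discussed in this PR; the exact constraints (the URL pattern, the port maximum) are an illustrative sketch, not the actual schema from this change. Annotations get no equivalent server-side validation.

```yaml
# Hypothetical excerpt of the CRD validation schema: fields placed in
# spec can be validated by the API server on create/update; values
# placed in metadata.annotations cannot.
openAPIV3Schema:
  properties:
    spec:
      properties:
        hydraAdmin:
          type: object
          properties:
            url:
              type: string
              pattern: "(^$|^https?://.*)"   # illustrative constraint
            port:
              type: integer
              maximum: 65535                  # illustrative constraint
```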
So I reached out on the kubernetes slack and it was recommended to me to put everything in the spec if possible, and only use annotations to add things where changing the spec would cause incompatibility or if the spec cannot handle the data for whatever reason.
Given that the new fields are optional, it's not really breaking the API at all, so I think it should be in the spec rather than in the annotations. However, if you feel strongly about this I will implement them as annotations instead.
As noted in #30, I've changed the API version back to …
Unless you have anything left to be done this is LGTM! Let me know and I'll request a review from Tomasz too.
I believe this is ready to merge. I'd still need to update the docs and version numbers in ory/k8s#87 but I'll get that finished up when this gets a tagged release and a published image |
@Tomasz-Smelcerz-SAP or @piotrmsc could you give a quick thumbs up/down for this? I believe you guys will be impacted by this the most atm. |
@paulbdavis First of all, that's awesome that you are contributing so actively ;-) 🎉 Let's start with the first one: do I understand you correctly that the intention of this PR is to allow supporting multiple instances of hydra by specifying on the CR level where (to which hydra instance) it should be saved?
@piotrmsc That's correct. The use case I have for this is there is a single maester running but there is an instance of hydra for each instance of the stack (dev, staging, multiple in prod) |
I think this makes a ton of sense by the way!
Yeah, it does make sense :) However, we have on the roadmap a synchro mode for all hydra instances deployed in a given namespace (#4). Basically, we want to fetch the kubernetes endpoint for a given hydra service and synchronize clients across all its instances (IPs), so that all hydra instances in a given namespace have the same clients. Currently, if you do not have hydra with DB persistence, HPA on hydra will result in an empty client store for new instances. Your proposal will not be a blocker for that, but it will require more handling in the code.

I'm thinking out loud: maybe we should support the CRD cluster-wide and namespace-scoped, but it would be one approach or the other, so cluster-wide OR namespace-wide depending on the provided configuration. With such an approach, namespace isolation would be enabled. A namespace-scoped CRD will definitely require a PoC first to try out whether we could keep the same codebase (IMO yes). Until then we can go with your approach.

Question to you @paulbdavis: do you have HPA for hydra-maester? Because if you have multiple envs (dev, testing, stage, production), this means you already produce a lot of CRs per env/namespace.
I can take a look at the code in a couple of hours, I'm at the conference currently. |
I too think that this can be handled on a spec basis (e.g. by introducing …
I'm not much of a K8S expert, but I think that maybe something like label-matching would give more control here!
I'll wait for your review then because I have very little experience with K8S Controllers :) But first, enjoy your conference! Going a bit off-topic:
The problem with this approach is that Clients aren't the only thing that needs synchronization - everything from JSON Web Keys (for signing ID Tokens) to Access/Refresh/..Token-chains needs synchronization too, or you'll hit random 403 errors depending on what instance you end up at. While some of these resources can be imported (e.g. JSON Web Keys, OAuth2 Clients), some resources can - by design - not be imported - namely any token-related information.
I'm currently not running any HPA on hydra or hydra-maester. I would agree that the synchronization would be ideal so that it can be run without database setup, but this solution will allow a multi-hydra setup with db backing in the meantime |
I scanned over the code again and everything looks good to me. Thank you for your contribution!
@piotrmsc as per @aeneasr's comment above, a sync of just the clients to all Hydra instances when using an in-memory database isn't sufficient. However, there is a valid use case for such a sync, which is multi-tenancy, where you have one hydra instance (or more, all connected to the same DB) per tenant (for isolation purposes), but each tenant needs to have the same set of oauth clients available.
That would also work with the in-memory adapter for as long as you (a) are ok with tokens being lost on reboot and (b) don't have multiple hydra instances running |
Related to #76 and ory/hydra-maester#35. This separates the hydra-maester chart so that it can be deployed on its own, allowing a single hydra-maester instance to manage several hydra instances in a cluster. This incorporates changes from #77 for the configuration of the (now default) hydra admin instance. This requires the changes in ory/hydra-maester#35 to be merged and tagged (temporary images that I have built are in the chart as of the opening of this PR). The hydra chart still lists the hydra-maester chart as a dependency that is deployed by default. To use the new behavior, `maester.enabled` must be set to false.
Adds support for using one instance of hydra-maester to add clients to multiple instances of hydra
Adds four new optional definitions to the CRD that correspond to flags, overriding the defaults:

- `hydraAdmin.url` overrides `--hydra-url`
- `hydraAdmin.port` overrides `--hydra-port`
- `hydraAdmin.endpoint` overrides `--endpoint`
- `hydraAdmin.forwardedProto` overrides `--forwarded-proto`

Any missing values from the defined client will use the defaults that were provided on the command line.
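For illustration, a client resource that targets a non-default hydra instance might look like the following. The `hydraAdmin` field names come from the list above; the client settings and instance names are a hypothetical sketch, not taken from this PR.

```yaml
apiVersion: hydra.ory.sh/v1alpha3
kind: OAuth2Client
metadata:
  name: staging-client            # hypothetical name
spec:
  # Hypothetical client settings, for illustration only.
  grantTypes:
    - client_credentials
  scope: "read write"
  # Optional override: create this client in the staging hydra
  # instead of the default admin endpoint from the command line.
  # Omitted fields (endpoint, forwardedProto) fall back to the
  # command-line defaults.
  hydraAdmin:
    url: http://hydra-staging-admin
    port: 4445
```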
Incorporates changes from #30 and #33
EDIT: The API version moved to `v1alpha3` because the changes were built on top of the proposed version change in #30; `v1alpha2` was preserved in case that is all that gets added.
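The fallback behavior described above ("any missing values use the command-line defaults") can be sketched as a simple merge. This is a hypothetical illustration, not the actual hydra-maester code; the struct and field names mirror the four optional CRD properties, and zero values are treated as "unset".

```go
package main

import "fmt"

// HydraAdmin mirrors the optional CRD sub-object; zero values mean "unset".
type HydraAdmin struct {
	URL            string
	Port           int
	Endpoint       string
	ForwardedProto string
}

// withDefaults fills every unset field from the maester's
// command-line defaults and returns the merged configuration.
func (h HydraAdmin) withDefaults(def HydraAdmin) HydraAdmin {
	if h.URL == "" {
		h.URL = def.URL
	}
	if h.Port == 0 {
		h.Port = def.Port
	}
	if h.Endpoint == "" {
		h.Endpoint = def.Endpoint
	}
	if h.ForwardedProto == "" {
		h.ForwardedProto = def.ForwardedProto
	}
	return h
}

func main() {
	// Defaults as given by --hydra-url, --hydra-port, --endpoint,
	// --forwarded-proto (values here are hypothetical).
	defaults := HydraAdmin{
		URL:            "http://hydra-admin",
		Port:           4445,
		Endpoint:       "/clients",
		ForwardedProto: "https",
	}
	// A client that only overrides the URL keeps every other default.
	override := HydraAdmin{URL: "http://hydra-staging-admin"}
	fmt.Println(override.withDefaults(defaults))
}
```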