
[kong] add support for controller-only deployments #136

Merged — 8 commits merged into next from feat/controller-only on May 28, 2020

Conversation

@rainest (Contributor) commented May 14, 2020

What this PR does / why we need it:

Add a new enabled setting to values.yaml. When set to true, Kong and associated resources are rendered. When set to false, they are not.

This allows deploying the controller alone, which has been requested often.
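For illustration, a controller-only install using the flag proposed here might look like this (a sketch only; the key layout follows the description above and is not verified against the final chart):

```yaml
# values.yaml sketch using the top-level flag added in this PR
enabled: false        # do not render Kong or its associated resources
ingressController:
  enabled: true       # render only the controller
```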

Special notes for your reviewer:

CCed Satwant as he asked about this recently.

I'm not thrilled about the naming, and feel like we should eventually move the Kong-specific top-level configuration under a kong block. That's breaking, however, so we can't do it in the near future.

Thoughts on whether there's some better name we can use for this?

Checklist


  • PR is based off the current tip of the next branch and targets next, not master
  • New or modified sections of values.yaml are documented in the README.md
  • Title of the PR and commit headers start with chart name (e.g. [kong])


@satwant17 left a comment


We should highlight/comment the kong_url and publish_service properties in the values file like we do for the admin token:
kong_url: http://kong-admin.kong.svc:80   # admin API URL in the cluster
publish_service: kong/kong-proxy          # proxy service

Covered in the standalone readme section

@hbagdi (Member) commented May 15, 2020

This seems like a hack, and a bad one. Such global conditional flags usually cause more trouble in the long run and provide a terrible user experience.
Considering that #135 will have to be implemented in the future, we should stick to granular enabled settings.

Reading from the diff, I see:

  • We need to tweak the deployment, this is a major one and we need to solve this
  • Each Ingress needs to be enabled independently, it seems like this is already the case
  • Each Service needs to be enabled independently, it seems like this is already the case

After reading through the values.yaml file, I figured out why you did this. It seems like everything specific to Kong is being governed at the root level and hence adding an enabled at the root level makes sense.

I don't think adding a root-level kong is the ideal solution in the long run. It is bound to cause more confusion without much gain.

What we can do instead is think of this change in terms of a change in the behavior of the single Deployment resource. So what if we do this: introduce a property to tweak deployment behavior, deployment.kong.enabled, which only toggles the Kong container off, and that's it. In the future, the annotations related to the Deployment resource can be moved under it. We could also allow users to skip injecting the wait-for-db init container if they explicitly want to disable it, and to add sidecar containers to this Deployment that are off by default. I understand this is a change in how we usually think about "enabled". Thoughts?
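A minimal sketch of the layout proposed above (the deployment.kong.enabled name is as suggested; everything else is assumed):

```yaml
deployment:
  kong:
    enabled: false   # toggles only the Kong container off; the
                     # Deployment itself and the controller remain
```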

@rainest (Contributor, author) commented May 20, 2020

I agree that the lack of structure is bad, but it's the lack of structure we already have. There are a lot of things scattered around the root level that apply to the Deployment that could be better gathered together like so:

deployment:
  enabled: true
  annotations: {}
  replicas: 2
  pods:
    tolerations: []
    labels: {}
    annotations: {}
    kong:
      enabled: true
    ingressController:
      enabled: true

However, I don't think we should move to that new style piecemeal. Controlling the Kong container via .Values.deployment and the controller via ingressController.enabled is confusing: like resources should have similar config.

I think having the standalone deployment block is good, since there are things attached to it that aren't at the container level, but I still think we should have a kong section: there are currently several top-level settings that apply to the Kong container rather than the Deployment (e.g. readinessProbe), and both the Kong and controller containers have enough settings that it makes sense to break them out rather than nest them under the deployment block. I'm ambivalent on whether their enable flags go under their own blocks or under deployment as shown above.

This satisfies #135 by disabling both along with the existing "this isn't Helm 2" flag:

$ helm template --include-crds=true iggy --set ingressController.installCRDs=false --set enabled=false --set ingressController.enabled=false /tmp/symkong | grep kind:                                      
kind: CustomResourceDefinition
    kind: KongConsumer
kind: CustomResourceDefinition
    kind: KongCredential
kind: CustomResourceDefinition
    kind: KongPlugin
kind: CustomResourceDefinition
    kind: KongClusterPlugin
kind: CustomResourceDefinition
    kind: KongIngress
kind: CustomResourceDefinition
    kind: TCPIngress
        kind:
    kind: ""

This isn't perfectly exhaustive--there are things like pdb.yaml that I haven't explicitly conditioned on the Deployment, but it should catch all the defaults. We can make it explicit if desired.

That's in kind of a weird place as far as configuration. It uses the existing flag for Helm 2, even though it's nested under the controller settings and makes the if condition a mess (whatever, we probably won't need to touch it again and Helm 2 will eventually go away). For Helm 3 I don't think we can really add a dedicated "install CRDs" toggle--it doesn't appear that you can place template directives in CRDs at all. Something like this renders nothing:

{{- if .Values.enabled }}
# enabled
{{- else }}
# disabled
{{- end }}

So I don't think there's any way to handle CRD-only with the standard crds/ handling outside having a combination of settings that disables all other templates. However, it looks like we will want to subchart them regardless:

There is no support at this time for upgrading or deleting CRDs using Helm.

https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations
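For context on the Helm 3 behavior referenced here: files under crds/ are applied once at install time and are never templated or upgraded, and the only built-in control is skipping them entirely (a sketch; release and chart names are placeholders):

```shell
# Helm 3: skip everything under crds/ at install time
helm install my-kong kong/kong --skip-crds

# CRD changes in later chart versions must be applied manually, e.g.:
kubectl apply -f crds/
```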

I believe that approach installs newly added CRDs but won't update existing ones, for instance updating KongPlugin to include configFrom.

@hbagdi (Member) commented May 20, 2020

I'm ambivalent on whether their enable flags go under their own blocks or under deployment as shown above.

Although confusing, I'd recommend doing a deployment.kong.enabled.

So I don't think there's any way to handle CRD-only with the standard crds/ handling outside having a combination of settings that disables all other templates.

Let's do the disabling all the templates part. Any risks with that?

charts/kong/README.md (review comment, outdated, resolved)
Add a new top-level `enabled` setting that controls whether the chart
deploys Kong and its associated resources. Setting this to `false`
allows users to create controller-only deployments.
Reconcile docs with upstream changes, fix outdated example, and add
comment for complex truth table.
@rainest
Copy link
Contributor Author

rainest commented May 28, 2020

FP for the release. b6723aa has changes for the comments.

@rainest rainest merged commit c704d30 into next May 28, 2020
@rainest rainest deleted the feat/controller-only branch May 28, 2020 17:15
rainest added a commit that referenced this pull request Jun 2, 2020
Add a new top-level `deployment.kong.enabled` setting that controls
whether the chart deploys Kong and its associated resources. Setting
this to `false` allows users to create controller-only deployments.

Allow for CRD-only deployments by setting the above to `false`,
`ingressController.enabled` to `false`, and
`ingressController.installCRDs` to `true`.
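Putting the merged settings together, a CRD-only values file might look like this (a sketch; the key layout is inferred from the commit message above, not verified against the final chart):

```yaml
# Sketch of a CRD-only configuration per the commit message above
deployment:
  kong:
    enabled: false        # skip Kong and its associated resources
ingressController:
  enabled: false          # skip the controller as well
  installCRDs: true       # keep rendering the CRD templates
```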