Webhooks setup and server #9
Conversation
```go
func (h *Handler) handle(ctx context.Context, ingress *networkingv1.Ingress) (bool, error) {
```
This looks to be pulling the status from the Ingress to create entries in the DNSRecord? I'm not sure the webhook is the best place for this; I think a lot of the logic in here should instead be run from pkg/controllers/traffic/traffic_controller.go.
I have done a similar thing by calling out to the traffic_controller logic from the syncer, in my syncer PR, see here: https://github.com/Kuadrant/multi-cluster-traffic-controller/pull/11/files#diff-cf3ce1c5bad0247b8bcf2c5d5eb789a53b4f752bc1684449472eca358dda2658R68
The same applies to setting the TLS certificate, and maybe even the managed host. It will all still ultimately be a response from the webhook, but by moving the logic into the traffic_controller we get the same logic when reconciling the ingress (i.e. not just in the webhook).
My understanding is that we want to have that logic in both places (the reconciler and the webhook). It is a bit redundant for now but I think the webhook will have more complex logic later on when we implement custom domain verification.
Maybe @maleck13 can confirm that
So domain verification and host / DNS record registration can be duplicated, but I would have the actual updating of the DNSRecord happen outside of the webhook, if you follow?
The webhook can validate the ingress, create the initial DNSRecord and TLS certificate to register a new host, and mutate the ingress to add the TLS section.
My suggestion is rather: let's move the "duplicated" code to the traffic_controller and call that from the webhook, effectively removing the duplication. Anything that is webhook-specific should not be moved and should instead stay here.
or in other words:
webhook receives ingress
-> webhook passes ingress to traffic_controller
-> webhook modifies returned ingress with webhook specific logic
-> output ingress on webhook response.
The traffic controller right now modifies the ingress directly.
yes, I don't think that makes this incompatible? It doesn't send it to the API, just updates it in memory.
I think there are still a few differences between the logic in the reconciler and the logic in the webhook. My understanding is that the webhook:
- Only creates (but never updates) the DNSRecords and Certificates for the Ingress
- Doesn't copy the TLS secret once it's created by cert-manager. It only sets the name in the TLS section of the Ingress
We could still probably refactor it to reduce the duplicated logic to a minimum but I'd leave that for another time I think
ok, we can address this later.
Force-pushed from 9e570e4 to 7f0c720 (compare)
Some cmds I used to get the controller running locally:

```shell
# Patch imagePullPolicy
kubectl -n multi-cluster-traffic-controller-system patch deployment mctc-controller-manager -p '{"spec": {"template": {"spec":{"containers":[{"name":"manager","imagePullPolicy":"Never"}]}}}}'

# Patch envFrom
kubectl -n multi-cluster-traffic-controller-system patch deployment mctc-controller-manager -p '{"spec": {"template": {"spec": {"containers": [{"name":"manager", "envFrom": [{"configMapRef":{"name":"controller-config"}}, {"secretRef":{"name":"aws-credentials"}}] }] }}}}'

# Create ConfigMap from file
kubectl -n multi-cluster-traffic-controller-system create configmap controller-config $(for envvar in $(cat controller-config.env); do echo -n "--from-literal=$envvar "; done)

# Create Secret from file
kubectl -n multi-cluster-traffic-controller-system create secret generic aws-credentials $(for envvar in $(cat aws-credentials.env); do echo -n "--from-literal=$envvar "; done)
```

@sergioifg94 I'm getting an admission error when trying to create an ingress.
The ingress yaml:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    argocd.argoproj.io/instance: mctc-control-plane-echo-app
  name: echo
  namespace: echo-app
spec:
  rules:
    - host: ''
      http:
        paths:
          - backend:
              service:
                name: echo
                port:
                  number: 80
            path: /
            pathType: Prefix
```

The mctc webhook ingress that got created looks like this: `kubectl get ingress mctc-ingress-mctc -n multi-cluster-traffic-controller-system -o yaml`
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"mctc-component":"webhook"},"name":"mctc-ingress-mctc","namespace":"multi-cluster-traffic-controller-system"},"spec":{"rules":[{"host":"","http":{"paths":[{"backend":{"service":{"name":"mctc-webhooks","port":{"number":8082}}},"path":"/","pathType":"Prefix"}]}}]}}
    mctc-component: webhook
  creationTimestamp: "2023-01-26T15:42:08Z"
  generation: 1
  name: mctc-ingress-mctc
  namespace: multi-cluster-traffic-controller-system
  resourceVersion: "2235"
  uid: 7fb786bc-1816-4e3e-b6f6-1bf8cd45bd3e
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - backend:
              service:
                name: mctc-webhooks
                port:
                  number: 8082
            path: /
            pathType: Prefix
status:
  loadBalancer:
    ingress:
      - ip: 172.18.0.2
```

It looks like nginx is not allowing the echo ingress as it doesn't have a host and sees it as a wildcard?
@david-martin That's right, I forgot to update the verification steps after a recent change. The reason is that I initially didn't use the empty host for the webhooks ingress, and the Nginx webhook validates that the empty host can only be used once. For now you can use any different host name for the echo ingress.
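For example, giving the echo ingress any non-empty host avoids the Nginx empty-host collision (the host name below is hypothetical; only the `host` field changes from the original manifest):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: echo-app
spec:
  rules:
    - host: echo.example.com   # hypothetical; any host other than '' works
      http:
        paths:
          - backend:
              service:
                name: echo
                port:
                  number: 80
            path: /
            pathType: Prefix
```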
Force-pushed from 5a4e979 to c42f39f (compare)
```diff
@@ -107,7 +111,7 @@ func main() {
 		setupLog.Error(err, "unable to create controller", "controller", "DNSRecord")
 		os.Exit(1)
 	}
-	dnsService := dns.NewService(mgr.GetClient(), dns.NewDefaultHostResolver(), defaultCtrlNS)
+	dnsService := dns.NewService(mgr.GetClient(), dns.NewSafeHostResolver(dns.NewDefaultHostResolver()), defaultCtrlNS)
```
In case anyone wonders: the reason to wrap it with the SafeHostResolver is that it'll be shared by both the Controller and the Webhook server, so it avoids concurrency issues.
@philbrookes @maleck13 I think we can land this now, if you can ack again.

@sergioifg94 Have you captured all followups?
@david-martin Thank you, I pushed as a new commit the changes for the webhook creation for each new cluster. I think it all makes more sense now.
Latest change verified locally with 2 workload clusters.
The webhook is created on all clusters:

```
2023-01-31T10:27:56Z INFO new cluster added {"name": "mctc-workload-1"}
2023-01-31T10:27:56Z INFO getting webhook configurations
2023-01-31T10:27:56Z INFO create/update validating webhooks
2023-01-31T10:27:56Z INFO create/update mutating webhooks
2023-01-31T10:27:56Z INFO new cluster added {"name": "mctc-workload-2"}
2023-01-31T10:27:56Z INFO getting webhook configurations
2023-01-31T10:27:56Z INFO create/update validating webhooks
2023-01-31T10:27:56Z INFO create/update mutating webhooks
2023-01-31T10:27:56Z INFO new cluster added {"name": "mctc-control-plane"}
2023-01-31T10:27:56Z INFO getting webhook configurations
2023-01-31T10:27:56Z INFO create/update validating webhooks
2023-01-31T10:27:56Z INFO create/update mutating webhooks
```
Webhook config reconciliation

Co-authored-by: sergioifg94
Co-authored-by: R-Lawton
Force-pushed from 19871c5 to cf4b2db (compare)
Rebased
LGTM
Description
Add the resources and logic to set up a webhook server running alongside the controller, as well as the reconciliation of the webhook configuration
Verification steps
In order to verify these changes, the controller must run in-cluster and be exposed via an Ingress.
- Check that the `caClient` field is set
- The webhook logs should show `level=info msg="WEBHOOK HANDLER"`, and the update should be performed correctly
- Inspect the result with `kubectl get ingress echo -n default -o yaml`